PUP-21 Setting parameter values for the PIP-22 new parameter set

UPDATE 26/7/22: Modify and clarify the method by which Foundation will adjust ValidatorStakeWeightMultiplier. Complete Dissenting Opinion section.

UPDATE 6/7/22: Change the values of ValidatorStakeFloorMultiplier, ServicerStakeWeightCeiling to reflect being denominated in units of uPOKT rather than POKT to better support alignment with implementation.

Attributes

  • Author(s): @msa6867, @Andy-Liquify
  • Parameter: [ValidatorStakeFloorMultiplier, ValidatorStakeWeightMultiplier, ServicerStakeWeightCeiling, ValidatorStakeFloorMultiplierExponent]
  • Current Value: Undefined. Nominal values that could be used to implement PIP-22 code with no effect on rewards and therefore with no effect on system operation would be [15000000000, 1, 15000000000, 0]. This set may be taken as the de facto “current” values
  • Related: PIP-22
  • New Value: [15000000000, variable see below, 60000000000, 1]

Summary

PIP-22 directs that code be added to the Pocket Network to enable stake-weighted rewards to be awarded to nodes for each relay serviced. To accomplish this code change, it identified four new parameters to be added, but deferred to this Parameter Update Proposal the amount and type of weighting to be put into effect; that is, how much node consolidation to allow and incentivize. Absent passage of this or another competing parameter update proposal, the default will be to set parameter values in such a manner as to cause no change to the current reward structure.

The proposed action is to set these new parameters in a manner that accomplishes a sizeable but conservative amount of node consolidation. Namely:
Set ServicerStakeWeightCeiling = 60000000000 to cap the weighting reward multiplier at 4x.
Set ValidatorStakeFloorMultiplierExponent to 1 so that reward weighting is linear for now.
Set ValidatorStakeFloorMultiplier = 15000000000 to bin the reward multiplier by 15k POKT staked.

The above parameter set will have the following effect:
Nodes staked with 15000-29999 POKT will receive a base amount of reward per relay.
Nodes staked with 30000-44999 POKT will receive twice the base amount of reward per relay.
Nodes staked with 45000-59999 POKT will receive three times the base amount of reward per relay.
Nodes staked with 60000 or more POKT will receive four times the base amount of reward per relay.
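
For illustration, here is a minimal Python sketch of the weighting just described. This is not the pocket-core implementation; the function name and the normalization against a minimum-staked node are assumptions made for clarity, but the resulting bins match the table above.

def relative_bin_multiplier(stake_upokt,
                            bin_size=15_000_000_000,        # ValidatorStakeFloorMultiplier (proposed)
                            ceiling=60_000_000_000,         # ServicerStakeWeightCeiling (proposed)
                            exponent=1,                     # ValidatorStakeFloorMultiplierExponent (proposed)
                            stake_minimum=15_000_000_000):  # current StakingMinimum, in uPOKT
    """Illustrative reward multiplier relative to a node staked at the minimum."""
    # Cap the stake that counts toward weighting, then floor it into whole bins.
    bins = min(stake_upokt, ceiling) // bin_size
    base_bins = stake_minimum // bin_size
    # Apply the (possibly non-linear) exponent and normalize to the minimum-stake bin.
    return (bins ** exponent) / (base_bins ** exponent)

# Proposed values: 15k-29k POKT -> 1x, 30k-44k -> 2x, 45k-59k -> 3x, 60k and above -> 4x
for pokt in (15_000, 29_999, 30_000, 45_000, 60_000, 1_000_000):
    print(pokt, relative_bin_multiplier(pokt * 1_000_000))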

The goal is to accomplish the above reward-weighting structure without affecting the spirit and intention and timeline of PUP-11 and PUP-13. That is, the goal is for this change to be neither inflationary nor deflationary. To achieve this goal, the base amount of reward per relay after setting these three parameters as indicated above should be the same as the baseline amount of reward per relay that was in effect just prior to implementing these parameter values. Toward that end, the foundation will be directed to:
Set ValidatorStakeWeightMultiplier initially to 1, then adjust as necessary to be equal to the average bin multiplier (1x, 2x, 3x or 4x) experienced across the entire set of nodes, weighted per chain by the number of relays serviced by that chain. Note that this parameter may be set to a value greater than one as its initial setting if the above weighted average bin multiplier across all nodes can be discerned prior to the parameter switch date. See below for further details.

Abstract

PIP-22 adds four new DAO-controlled parameters. This proposal recommends values for these new parameters to allow the amount of POKT staked to a node to result in an up to 4x linear multiplier of rewards per relay. The four parameters are as follows:

ValidatorStakeFloorMultiplier - This corresponds to the bin widths, that is, the number of extra tokens required above the minimum in order to trigger a next-higher tier of weighting. The parameter name is carried over from the language of PIP-22 for the sake of continuity. In actual implementation it will likely take on a more descriptively appropriate name such as “ServicerStakeFloorBinSize”. The default is to set this equal to StakingMinimum, that is, to 15000000000 uPOKT (15000 POKT). The result is to quantize the multiplier by whole units (1x, 2x, 3x, etc). Setting it to a smaller value would allow finer bins. For example, setting it to 3000000000 would mean that 15000-17999 POKT staked would result in a base amount of reward per relay while 18000-20999 POKT staked would result in 1.2 times the base reward per relay etc. Proposal: set to 15000000000

ValidatorStakeWeightMultiplier - This is a divisor applied to the stake weight multiplier during the process of calculating reward in POKT from number of relays. Its intended purpose is to keep stake-weighted rewards from interfering with the intentions of WAGMI. Any setting of this parameter in a manner that does not comply with the above is contrary to WAGMI and should be avoided without careful deliberation and understanding of the consequences of such action. The parameter name is carried over from the language of PIP-22 for the sake of continuity. In actual implementation it will likely take on a more descriptively appropriate name such as “ServicerStakeWeightAdjustor”. Proposal: adjust during the transition period so as to comply with PUP-11 and PUP-13. In practice, set as needed during the transition period to be equal to the average bin multiplier experienced across all nodes, weighted per chain according to the number of relays serviced on that chain.

For example, suppose after consolidation there were four nodes in the system staked with 22k, 37k, 50k and 100k POKT. The bin multipliers for the four nodes would be 1, 2, 3 and 4, respectively. Suppose further that the first three nodes ran the Harmony and Polygon chains while the last ran Harmony only. Last, suppose volume on Harmony was 300M relays and on Polygon was 200M. As far as Harmony is concerned the average bin multiplier is 2.5 (average of 1, 2, 3 and 4), whereas as far as Polygon is concerned the average bin multiplier is 2.0 (average of 1, 2 and 3). Weighting the two chain averages together, the system average bin multiplier is 2.3 (3/5 * 2.5 + 2/5 * 2.0). Thus, the Foundation would set ValidatorStakeWeightMultiplier to 2.3.
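
As a sketch, the weighted average in this example can be computed as follows. The node and chain data structures here are hypothetical, constructed only to reproduce the numbers above.

# Hypothetical inputs reproducing the worked example above.
node_multipliers = {"n1": 1, "n2": 2, "n3": 3, "n4": 4}   # bin multiplier per node
chains = {
    "harmony": {"relays": 300e6, "nodes": ["n1", "n2", "n3", "n4"]},
    "polygon": {"relays": 200e6, "nodes": ["n1", "n2", "n3"]},
}

total_relays = sum(c["relays"] for c in chains.values())
weighted_avg = sum(
    (c["relays"] / total_relays)                                        # chain's share of relays
    * (sum(node_multipliers[n] for n in c["nodes"]) / len(c["nodes"]))  # chain's average bin multiplier
    for c in chains.values()
)
print(round(weighted_avg, 2))  # 2.3 -> value to set for ValidatorStakeWeightMultiplier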

ServicerStakeWeightCeiling - This sets the upper limit for the amount of POKT that can feed into a stake-weighted reward multiplier. For example, setting it to 60000000000 uPOKT (60000 POKT), given linear weighting, would limit a node to 4 times the base amount, whether the node has 60000 or 1 million POKT staked. Setting it to 75000000000 would result in a maximum of 5 times the base amount (again assuming linear weighting). Setting this parameter does not affect how much POKT can be staked to a node, only how much of the staked amount can feed into the reward multiplier. The actual staked amount will often be more, as nodes must stake enough to provide a reserve for slash events. In addition, nodes may stake substantially more than this ceiling in a bid to vie for one of the validator slots. Proposal: set to 60000000000.

ValidatorStakeFloorMultiplierExponent - Defines the linearity of the stake-weighting curve, with 1 being linear and <1 non-linear. Valid range is 0 to 1. A value of 0.5 would apply a square root to the weighting multiplier. For example, a node staked with 60000 POKT would receive only two times the base reward per relay even though it staked 4 times as much as the minimum (since the square root of 4 is 2). A node staked with 135000 POKT would (assuming the ceiling was raised this high) receive only three times the base reward per relay even though it staked 9 times as much as the minimum (since the square root of 9 is 3). In essence this creates a principle of “diminishing returns”: the more that is staked, the less added benefit one gets for each extra 15k POKT added. A value larger than 0.5 would introduce a milder “diminishing returns” effect; a value of less than 0.5 would introduce a more heavy-handed one. A value of 0 would effectively turn off stake-weighted rewards, since all nodes would get the base reward per relay no matter how much POKT is staked. Proposal: set to 1, that is, make the weighting linear (i.e., 4x the amount staked results in 4x the reward multiplier, etc.), and defer to some future PUP the exploration of nonlinear options once linear weighting is studied and understood.
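
As a quick numeric check of the square-root example above (illustrative only, and assuming the ceiling has been raised high enough not to bind):

# Exponent 0.5 takes the square root of the bin count.
for staked_pokt in (60_000, 135_000):
    bins = staked_pokt // 15_000        # number of 15k POKT bins
    print(staked_pokt, bins ** 0.5)     # 60k -> 2.0x base, 135k -> 3.0x base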

Note that there may be implementation-specific flags that the development team may use to facilitate pushing the PIP-22 code updates to the portal and nodes while keeping system behavior unchanged until the specified “effective date”. PIP-22 identified two possible such flags, RSCAL and VEDIT. These flags are not DAO-controlled parameters and therefore fall outside the scope of this proposal.

Motivation

Motivation 1: Do no harm. To that end we have specified that the Foundation adjust ValidatorStakeWeightMultiplier as needed during the transition period to keep the transition from being inflationary or deflationary (that is, to abide by WAGMI principles).

Motivation 2: Keep it simple. To that end we start with linear weighting and defer exploring nonlinear weighting to the future.

Motivation 3: Avoid unrealistic optimism. Keep enough pain from infra-costs in the minds of node runners to encourage a continual press toward more cost-effective solutions and to discourage a flood of new nodes from entering the space via easy-to-stake but overly-priced service providers, leading eventually to downward pressure on price. To that end we propose 60000 POKT ceiling rather than something higher.

Motivation 4: Minimize number of system transitions. To that end we propose 60000 POKT ceiling immediately rather than ratcheting up the system from 15k to 30k to 45k etc over many months.

Motivation 5: Err on the side of too small and fall forward rather than too big and fall back. To that end we propose 60000000000 rather than 75000000000, both of which are perfectly good options that fit all the other motivations; but all else being equal, the tip goes to 60000000000 because it will be easier in the future to move from 60000000000 up to 75000000000 than it would be to move down from 75000000000 to 60000000000. (That being said, if the community expresses strong support for 75000000000 rather than 60000000000, we are not opposed to that value.)

Motivation 6: Avoid unnecessary churn. To that end we propose a bin size of 15000 POKT instead of something smaller. If bin size were set, for example, to 1000 POKT, node runners would likely add tokens every time they accumulated 1000 POKT worth of rewards in order to maximize a “compounding” effect. Setting bin size higher discourages this behavior and thus reduces the number of add-stake events. Other than this, this parameter has very little overall effect on system behavior.

Rationale

This section contains rationale only for assertions (whether explicit or implied) that are deemed to not be self-evident. This section can be expanded if needed.

Assertion 1: Setting ServicerStakeWeightCeiling too high can lead to unrealistic optimism, inhibit innovation and cost-reduction efforts.

Consider, by way of example, a POKT price of $0.10 and two servicers:

Servicer 1 is inefficient but easy to onboard: reward averages 30 POKT/day with an infra cost of $6/day
Servicer 2 is harder to onboard but has worked hard to cut infra costs: reward averages 30 POKT/day with an infra cost of $2/day

Prior to turning on stake-weighted rewards:
Servicer 1: node runners are bleeding $3/day
Servicer 2: node runners are earning net 10 POKT per day

Turning on stake-weighted rewards with a 4x cap (nodes consolidated to 60k POKT):
Servicer 1: node runners earn net 60 POKT per day (15 POKT per 15k staked)
Servicer 2: node runners earn net 100 POKT per day (25 POKT per 15k staked)

Nodes on Servicer 1 are at least happy that they are no longer bleeding. But the grass is definitely greener for node runners at Servicer 2, and Servicer 2 will grow market share at Servicer 1’s expense, thus incentivizing servicers to innovate and reduce costs.

Compare this to a 20x cap on stake-weighted rewards:
Servicer 1: node runners earn net 540 POKT per day (27 POKT per 15k staked)
Servicer 2: node runners earn net 580 POKT per day (29 POKT per 15k staked)

The difference is not big enough to induce people to switch, so the more friendly onboarding process of Servicer 1 becomes the deciding factor and means that Servicer 1 maintains or even grows market share compared to Servicer 2, thus inhibiting innovation and cost reduction efforts.
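
The arithmetic behind the two scenarios can be reproduced with the small sketch below. The price, reward and infra-cost figures are the illustrative numbers used above, not measurements, and the helper function is hypothetical.

POKT_PRICE = 0.10  # USD, illustrative

def net_pokt_per_15k(base_reward_pokt_per_day, infra_cost_usd_per_day, cap_multiplier):
    """Net daily reward per 15k POKT staked for a node consolidated up to the cap."""
    gross_pokt = base_reward_pokt_per_day * cap_multiplier
    net_usd = gross_pokt * POKT_PRICE - infra_cost_usd_per_day
    return (net_usd / POKT_PRICE) / cap_multiplier

for cap in (4, 20):
    print("cap", cap,
          "servicer 1:", round(net_pokt_per_15k(30, 6, cap), 2),   # 15 at 4x, 27 at 20x
          "servicer 2:", round(net_pokt_per_15k(30, 2, cap), 2))   # 25 at 4x, 29 at 20x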

Assertion 2: Setting ValidatorStakeWeightMultiplier during the transition period to be equal to the observed average multiplier experienced across all nodes prior to applying this adjustment factor will cause the node consolidation that results from this proposal to be neither inflationary nor deflationary and conform to the principles of PUP-11 and PUP-13.

Under PIP-22, the new node rewards are a node-specific multiplier of the pre-PIP-22 rewards divided by a system-wide constant:
new.node.reward = old.node.reward * node.multiplier / adjustor

Where node.multiplier depends on the amount staked plus the three parameters cap, bin size and exponent, and where the adjustor is the ValidatorStakeWeightMultiplier.

On the average:
average.new.node.reward = average.old.node.reward * average.node.multiplier / adjustor

By setting adjustor = average.node.multiplier we get:
average.new.node.reward = average.old.node.reward
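
A minimal numeric check of this identity, for the simplest case where all nodes serve a single chain with an equal share of relays (so the old per-node rewards are equal); with uneven reward distributions the identity holds on average rather than exactly. The numbers below are made up for illustration.

# Hypothetical single-chain case: four nodes, equal relay share, so equal old rewards.
old_rewards = [30.0, 30.0, 30.0, 30.0]   # POKT/day per node, pre-PIP-22
multipliers = [1, 2, 3, 4]               # bin multiplier per node (from stake)

adjustor = sum(multipliers) / len(multipliers)   # average bin multiplier = 2.5
new_rewards = [r * m / adjustor for r, m in zip(old_rewards, multipliers)]

# Both averages print as 30.0: the change is neither inflationary nor deflationary.
print(sum(old_rewards) / len(old_rewards), sum(new_rewards) / len(new_rewards))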

Assertion 3: Adjusting ValidatorStakeWeightMultiplier periodically during the transition period will not place an undue burden on the implementors.

First, calculating average.node.multiplier is super easy and can be automated. The DAO maintains a list of all nodes and that list has daily updates of amount staked. The node multiplier is a simple conversion from amount staked and the average is simply the sum of per-node value divided by the total number of nodes.
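
The simple (unweighted) average described in this paragraph could be automated with a few lines like the following. The staked amounts are a hypothetical snapshot, the per-chain relay weighting described earlier is omitted for brevity, and the actual script provided to the Foundation may of course differ.

# Hypothetical snapshot of staked amounts (uPOKT) taken from the daily node list.
staked = [22_000_000_000, 37_000_000_000, 50_000_000_000, 100_000_000_000]

BIN = 15_000_000_000       # ValidatorStakeFloorMultiplier
CEILING = 60_000_000_000   # ServicerStakeWeightCeiling

multipliers = [min(s, CEILING) // BIN for s in staked]   # 1, 2, 3, 4
print(sum(multipliers) / len(multipliers))               # simple average = 2.5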

Second, this parameter will need adjusting only a handful of times. Perhaps once a day for the first couple of days, once a week for the first few weeks, and perhaps spot checks and adjustment if necessary once a month for a few months. This parameter is needed primarily because the transition period is expected to be short compared to the WAGMI time scale. Over the long term, it will not be necessary to continually monitor and adjust this parameter, even if average.node.multiplier slowly drifts up or down over long time periods. The reason for this is that any drift that is slow compared to the WAGMI time scale (months) will automatically get fed into the WAGMI feedback mechanism, and rewards.per.relay will automatically adjust for any slow nudge of rewards up or down caused by a slow drift in average.node.multiplier.

Dissenting Opinions

“Voting on this proposal should not take place until PIP-22 development is fully completed and the network is upgraded with this consensus change. This will also give more time for this proposal to marinate in people’s mind.”

PIP-22 development is nearly complete and has already been merged into the staging branch for QA. There has been plenty of time for the proposal to marinate in people’s minds and deal with the various dissenting opinions/implications that have been raised.

“We should wait for the light client, turn down inflation to 20% or less to control node count bloat, and see how things shake out between that and PUP-19 for a few months.”

We have had a month now to observe the system’s reaction to PUP-19. It is not unreasonable to layer PIP-22 (which was passed in the same time frame as PUP-19 and never had a mandate to wait months after PUP-19 for implementation) into the system. As to the light client, the whole point of PIP-22 was to get a consolidation mechanism into the system in a time frame shorter than being ready to deploy the light client universally.

“PUP-21 will encourage greater consolidation among poorer QoS nodes (and thus nodes with below average rewards) than among high-QoS nodes. This will lead to fewer poor-QoS nodes in the cherrypicker, thus reducing the average daily rewards of the high-QoS nodes as their outsized share of cherry-picker probability is reduced.”

From a system perspective it is always a good thing to improve the QoS Pocket offers to our app developers. After all, they are the ones who will eventually keep the lights on. It is true that PUP-21 in conjunction with PIP-22 should cause a QoS boost to the choices within the cherry picker. This may cause a SLIGHT reduction in the reward differential that an above-average-QoS node currently enjoys. The same would be true of ANY mechanism that causes system-average QoS to improve. This is a good thing, not a cause for concern.

“Right now chains require a minimum of 24 nodes per session. If the consolidation factor is too aggressive, we might see service quality problems for smaller chains if they end up going under the required amount of nodes to support them.”

The engineering team did not find any chain for which this is a red flag given a weighting ceiling of 60k POKT. There are mechanisms in place to encourage sufficient node participation in the newest chains, and natural market dynamics incentivize new nodes to add an under-served chain, since under-served chains lead to above-average rewards per node.

“If average consolidation on a small chain is much less than the average consolidation on the big chains, the ValidatorStakeWeightMultiplier may be set so high that it causes rewards per node to drop significantly on the small chain. This may incentivize node runners to drop the chain and replace it with a chain that offers better rewards, leading to a potential QoS issue for the small chain.”

Consolidation variance from chain to chain may cause some reduction of reward count on small chains that experience less than expected consolidation. The risk of this variance being greater than 20% or so is small. In the event that the reduction of average reward on a given chain is enough to induce some node runners to drop the chain, natural market dynamics will cause the node count on the chain to stabilize, since the reward count per node will increase back up to system norms with every node that drops off.

“Asking the Foundation to adjust ValidatorStakeWeightMultiplier on an ongoing basis places an undue burden on the Foundation and introduces the risk of human error.”

There is precedent for the Foundation undertaking the periodic setting of a system parameter in response to a PUP (the WAGMI directive on setting RelayToTokenMultiplier). The Foundation has indicated that adjusting ValidatorStakeWeightMultiplier will not be a burden given that the exact methodology for setting the parameter has been specified. Further, there will be no need for the Foundation to calculate the weighted bin multiplier by hand - Andy has agreed to provide the Foundation with a script to calculate this weighted average. This will both save time and reduce the likelihood of human error. In the case of a human error, nothing catastrophic will happen in the system. The worst that can happen is a block or two transpiring with fewer or more than expected rewards. There is no outlier error that can happen since the parameter is range-bound between 1 and 4.

Analyst(s)

Copyright

Copyright and related rights waived via CC0.

We are in support of this proposal, provided that we include a final bucket at 75,000. We think that this will create the best case scenario for consolidation as was outlined in the original proposal and it will provide greater backing to those seeking to consolidate into validators who end up falling out of the ‘validator race’ for one reason or another.

One thing I would like to add for your consideration, @msa6867, is the fact that right now chains require a minimum of 24 nodes per session. If the consolidation factor is too aggressive, we might see service quality problems for smaller chains if they end up going under the required amount of nodes to support them.

I think this is really smart. This provides a meaningful incentive to consolidate while setting reasonable boundaries that will allow for optimization between reliability and performance. I really like the adjustor, which will enable us to put in another dial to be adjusted according to the network needs.

My hope is that this will eventually be a parameter that is adjusted via an algo, so the relays-to-reward per stake can flex with network demands on a more real-time basis instead of these network disruptions of adjustments that require network consensus every time.
Great work @msa6867. I support this.

Thanks for the feedback. If I do an update I will incorporate this feedback into the motivation section. I assume 4x consolidation is not too much?

I’m good with this proposal as written.
No problem with 75k if that is preferred.

I support this proposal as-is. A 60k cap sounds better than a 75k cap at this point due to the mentioned reasons (60k will have a stronger effect on inefficient/overly-expensive node runners than 75k or any higher amount, which is a very important aspect). Ultimately, it can be increased to 75k if needed down the road if node count starts going up faster than needed in the next couple of months.

Also, 4x consolidation will likely result in 15k nodes in the whole network sometime in August already, which sounds decent and stable for the current amount of demand.

Great work @msa6867 !

With the passing of PUP-19, I am concerned that linear rewards for each bin aren’t going to work well.

The reward for producing a block is now approximately 800 POKT, and as a validator you can expect to produce 2-3 blocks a month. Meanwhile, after the latest WAGMI adjustment, servicers earn approximately 30 POKT per day, and with much greater infrastructure costs as well. Consequently the rewards for block validation now already far outstrip the rewards for servicing Pocket relays by quite a large margin.

Once you add in the effect of weighted stake rewards on top you end up with an extremely strong incentive to be in the top 1000 nodes, and if we’re not careful it will be completely unprofitable to run nodes otherwise. I don’t think that is in the best interests of the network.

Hi Stephen, I would like to understand your concern but am thoroughly confused by your post. Perhaps you can rework your comment and repost. To start, linear rewards (as opposed to an exponent less than one) actually tip the scales the most in favor of servicer rewards vs validator rewards, which decreases, not increases, the incentive to be in the top 1000 nodes. And it has the opposite effect of your last statement, meaning it maximizes the “profitability” of running a node even without being a validator. I also do not understand your comment about servicers having “greater infrastructure costs”. That makes no sense. At the moment validator and non-validator have exactly the same infra cost, or if there is a difference, it would be the validator that has the greater infra cost.

To address your last point, servicers have greater infra costs because they need to run the RelayChain nodes (Ethereum etc) in addition to the Pocket nodes.

Stephen is concerned that, with the passing of PUP-19, node runners will be incentivized to focus on being validators and stop maintaining their RelayChain nodes. So he’s advocating for PUP-21 to be more aggressive about rewarding service (to ensure it remains attractive).

If servicers cap out at 60k POKT but the validator rewards continue to scale beyond that, with 5% block rewards going more often to larger validators in proportion to their stake, node runners might abandon horizontal scaling with nodes capped at 60k (and the RelayChain nodes required to provide service) and just focus instead on being validators.

I think actually having it non-linear will have the opposite effect here. If non-linear, there will be a reduced number of nodes in the top bin, making it “cheaper” to run a validator node. My estimation is that when linear, the top bin will be far greater than 1000 nodes in capacity. This makes people more inclined to stake above the top bin to secure a slot, thus making it more costly to run a validator.

Validators do consume quite a lot more resources (particularly RAM) from our experience.

Thanks for the clarification

Correct, linear weighting maximizes the effect of tipping the scale toward servicers. 4x consolidation will undo most of the advantage gained by moving the proposer allocation to 5%. Pre PUP-19, a validator staked to 100k POKT may earn (on average) 75 POKT/day as a validator and 30/day as a servicer. Post PUP-19 but pre PIP-22, that number jumps to ~400 POKT/day as a validator and stays the same at 30/day as a servicer. Post PIP-22/PUP-21, assuming cap=60k, this falls back to about 75 POKT/day as a validator, while the servicer portion jumps up to 120 POKT/day. If I have 6M POKT to stake, it will be sort of a toss-up as to whether to do 60 validators at 100k or 100 servicer-only nodes at 60k. Therefore it is unlikely for competition to push the validator average up that high… it will probably sit around 70 to 80k. If we want higher, the proposer allocation would need to be raised again.

Post PIP-22/PUP-21 assuming cap=60k, this falls back to about 75 POKT/day as a validator

I’m probably missing the obvious here, but why will PIP-22/21 cause validator rewards to fall?

I think the crux of this issue is just how much additional capital a node runner will need to stake on top of the 60k top bin threshold in order to enter the validator set. If it’s a lot more then this proposal won’t have too much additional impact, but if it’s not too much extra on top then in practicality there’s going to be huge additional benefit to being in the top bin. And with ongoing WAGMI adjustments, over time it could become unprofitable to operate outside of it at all.

Currently you only need to stake approximately 16500 POKT to be in the top 1000, but of course validator rewards were only changed very recently.

The ultimate problem is the validation reward, but I do think PIP-22/PUP-21 could potentially exacerbate the issue, and conversely, with a reward weighting slightly towards the bottom bin, it could alleviate it. Very happy to be shown the errors in my thinking!

Validator rewards stay the same. But the bar to get into the top 1000 will rise to over 60k… therefore the same amount of reward for roughly 4x the POKT staked means the reward per 15k POKT staked goes down.

It varies from chain to chain; some chains are more heavily staked than others. My recommendation is to find a way to figure out the least-staked chain and see if, after a 4x consolidation, we will have enough nodes for that chain to be supported. Maybe POKTscan offers enough information to make such an analysis.

Very much in favor for this change and also fine with it going a bit higher to 75k if needed.

Block 64616:

{
  '0001': 22902,
  '0003': 33124,
  '03DF': 28426,
  '0004': 31267,
  '0005': 31304,
  '0009': 35605,
  '0021': 35937,
  '0022': 24246,
  '0025': 19733,
  '0026': 25120,
  '0027': 35696,
  '0028': 24878,
  '0040': 35590,
  '0044': 15143,
  '0006': 1602,
  '0023': 7421,
  '0053': 6552,
  '000C': 2642,
  '0024': 3030,
  '0047': 4736,
  '000B': 3187,
  '000A': 5182,
  '0010': 222,
  '0048': 1129,
  '0050': 341,
  '0051': 323,
  '0052': 319,
  '00A3': 55,
  '0054': 61,
  '0029': 9
}

Thank you @poktblade!
Regarding Osmosis Mainnet (0054), I think we will be OK, as there will likely be pretty rapid traction among node runners to add the chain.

Regarding Avalanche Archival (00A3) and ??? (0029), are these active? Thoughts?

Also, further insight on the following is needed. My understanding is that most service providers share app chain state among a collection of pocket nodes… so I’m thinking that reducing pocket node count should not create a degradation to current QoS if node consolidation does not cause the count of app chain states distributed across the network to also go down.

E.g., if a chain currently has 64 pocket nodes implemented as 4 app chains each with a cluster of 16 pocket nodes, and this reduces to 16 pocket nodes implemented as 4 app chains each with a cluster of 4 pocket nodes, the QoS is more or less identical: in the latter, there are only 16 entries populated in the selector array, but in the former, a QoS hit to one cluster could cause a whole slew of selector slots to be filled with equally poor QoS entries… Is this way of thinking OK or is it off?