PUP-21: Setting parameter values for the new PIP-22 parameter set

Sure, but your expectation as outlined in your previous comment is for the vast majority of nodes competing for validation to tap out at ~50% above the max stake. In the current validator pool, nearly half the validators are at double or more that threshold, a third are at quadruple or more, and 12% are at 8x or better, with 16 validators over 100K and the max validator stake over 300K.

There’s a demonstrated willingness to go far above and beyond the base servicer awards when competing for multi-ticket validation, and it has only increased since validation rewards rose to 5%. If the current ratios held true, with a 60K max stake we could expect nearly half of validators to shoot for the 160K range, and perhaps a third to shoot for the three-ticket range.

Admittedly, the light client’s impact on price is likely to shift this formula more in favor of horizontal servicer nodes, depending on performance at scale. But that’s also why I think having a higher (4x) max stake for servicing is so important; the light client is going to encourage node count bloat once again.

This should be shelved for now, in my opinion. We should wait for the light client, turn inflation down to 20% or less to control node count bloat, and see how things shake out between that and PUP-19 for a few months. At 200 million POKT yearly inflation, the most the node count can go up is ~13k, which is not the end of the world. And according to the schedule released by the dev team, V1 is not that far off, and weighted stake can be revisited at that time.
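For reference, that ~13k ceiling is just yearly issuance divided by the 15K minimum stake, under the worst-case assumption that every newly minted POKT is staked into new nodes:

$$\frac{200{,}000{,}000\ \text{POKT/yr}}{15{,}000\ \text{POKT/node}} \approx 13{,}333\ \text{nodes}$$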

I request that we move this to a vote ASAP, please. A significant number of network moves are reliant on the outcome.


I understand the sentiment, and I appreciate that a significant number of network moves are reliant on the outcome. However, it is simply not ready to move to a vote yet; some concerns that have been raised regarding chains with only a small number of nodes still need to be adequately addressed. That work is going on in the background, and as soon as it is done there will be an update to the proposal, followed by the ability to move it to a vote. I am hopeful the timeline for that will be within the next several days. It should certainly be in time for the parameters to be ready to turn on as soon as PIP-22 is ready for production release.

I believe the community sentiment in passing PIP-22 was implicit approval to utilize the knobs to offer some consolidation ability. To that end, I view PUP-21 as a measure of where to set the knobs rather than a revisiting of whether to use the knobs at all. We cannot wait for the light client to “save” Pocket. While the light client may have some benefit to Pocket, it comes at a huge cost, namely loss of control. Controllable mechanisms that duplicate some of the benefits the LC provides are part of the answer to keeping the LC from becoming a tool in the wrong hands to harm the system. To that end, I strongly urge you to continue the work to get PUP-21 ready to put to a vote and passed.


I’d have to disagree with that based on some of the comments in that thread. I think PIP-22 was a green light to develop the feature in case it was needed, with no implicit approval to turn any knobs. At least not right now.

I have finished edits on this proposal, adding clarification on how the Foundation will set the ValidatorStakeWeightMultiplier and adding a Dissenting Opinion section.

The proposal is ready to move to a vote.


This proposal is now up for voting on Snapshot.

Hello from the Poktscan team! The implementation of PIP-22 prompted our data science team to run Monte Carlo simulations of the Pocket network, with the intent to use the results to advise our customers on compounding options.

Key findings point out two issues not documented in PUP-21 or PIP-22:

  1. Increasing the ValidatorStakeWeightMultiplier parameter will reduce rewards for those maintaining 15K nodes, even if they are good performers (see the sketch after this list). If the intention of these proposals is to discourage 15K node runners, it should be documented as such.
  2. Compounding node runners with poor performance will be rewarded the same as those with good performance. We understand that once the network reaches an equilibrium we will arrive at a status quo between those that perform well and those that do not. While the impact of node consolidation is positive, there is also an opportunity to encourage node runners to perform better.
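For reference, under the PIP-22 weight formula (quoted later in this thread), the mechanism behind finding 1 is that a non-consolidating 15K node occupies a single stake bin, so its weight reduces to the reciprocal of the multiplier regardless of its QoS:

$$w_{15K} = \frac{1^{E}}{M} = \frac{1}{M} < 1 \quad \text{whenever } M > 1$$

where $E$ is ValidatorStakeFloorMultiplierExponent and $M$ is ValidatorStakeWeightMultiplier.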

We are publishing our findings, the simulated data, and ways to test the PIP-22 parameters on the simulated scenario. We welcome any comments and challenges to the simulation and approach.

The public report and the code can be found here:


From the linked repository:

We propose to enable the DAO to implement a non-linear weight staking.

I just want to clarify: by non-linear here, do you mean ValidatorStakeFloorMultiplierExponent > 1 or ValidatorStakeFloorMultiplierExponent < 1?

Strictly <1. Otherwise nodes with high compounding levels will earn even more.
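As a rough illustration (our numbers, not from the proposal), for a node at the 4-bin consolidation ceiling:

$$4^{0.7} \approx 2.64, \qquad 4^{1.0} = 4, \qquad 4^{1.3} \approx 6.06$$

so an exponent below 1 makes weight grow sub-linearly in stake, while an exponent above 1 would amplify the advantage of heavily compounded nodes.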

It might be important to get clarification from someone working on the PIP-22 implementation about this, but given that this value is used directly for computing rewards, I’m almost positive it is going to need to be an integer. If that is the case, I’m not sure your proposed remediation is possible under the current mechanism of PIP-22.


The exponent is a decimal number in the range 0–1, in 0.01 steps. In the PIP-22 changes, the rewards are calculated in decimal and then truncated back to int post-calculation: pocket-core/reward.go at 2a30eba1d9e5e251401e49d9fcf05397b83a8c0e · liquify-validation/pocket-core · GitHub.
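As a rough sketch of that calculate-in-decimal, truncate-at-the-end pattern (the numbers here are hypothetical, and float64 is used for brevity; the real code path is the linked reward.go, which uses a decimal type):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Hypothetical inputs: a 3-bin node, exponent 0.75, weight multiplier 1.
	baseReward := 1000.0                // per-relay reward before weighting
	weight := math.Pow(3.0, 0.75) / 1.0 // stake weight computed in decimal

	reward := baseReward * weight // still decimal at this point
	truncated := int64(reward)    // truncated back to int post-calculation

	fmt.Println(reward, truncated) // ~2279.5 -> 2279
}
```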

I will check the report in the morning (it’s late here now), but at first glance I think you are calculating the weight incorrectly. Again, it’s like midnight here and it’s been a long day, so I will double-check in the morning.

Re 1: The intention of the proposal is NOT to discourage 15k node runners. There may be SOME consolidation patterns that emerge where variance causes reward counts seen prior to consolidation to shift up or down by some percentage (of order 20% in the worst probabilistic case) for one user group or another. The case of the high-QoS small node runner was looked at per your previous feedback prior to putting this to a vote, as were a number of other cases having to do with small chains. Weighing probability of occurrence against impact of occurrence, we feel comfortable that all risks are well managed. Furthermore, we have already pointed out in the dissenting opinion section above that any reduction in the reward differential that a high-QoS node runner used to enjoy above the system average will be precisely because system-average QoS is improved by PIP-22/PUP-21, which is a good thing, not a negative thing. The exact same reduction in reward differential would occur if, for example, all competing nodes figured out how to get their latency under 150 ms. Since everyone would then have low latency, the differential reward advantage of the node runner who used to run the only low-latency nodes in town would disappear, and his reward would drop to the system average. It would be absurd to think that the system ought to discourage other node runners from improving their latency in order to protect the existing reward differential of the node runner who already has low latency.

Re 2: It is well understood and previously acknowledged that allowing nodes to consolidate may decrease the pain pressure on high-cost or low-QoS service providers and node runners sufficiently to cause them to slack off for a season on expending the time, effort, and dollars needed to continue reducing per-node costs and improving QoS. The alternative is to do nothing and allow maximum pain pressure (negative returns, etc.) to drive out all but the very fittest. Ultimately the voters will have to decide which is best for this season of Pocket. I do point out, however, that it is in part for this very reason that we rejected a more aggressive consolidation ceiling (see motivation 3 of the proposal). I also point out that node optimization efforts seem to be in full swing across all the large node runners, and the momentum of that effort will keep optimization going even after PUP-21 is implemented.

As to your Monte Carlo model, I will not have time to look at it this week, but the assertion made in the TG chat that the decrease in rewards to 15k node runners will persist, and even be exacerbated, in the event that consolidation is uniform across all QoS tiers does not add up. I advise rerunning the model with uniform consolidation as the input and checking for possible errors.

ValidatorStakeFloorMultiplierExponent is range-bound between 0.00 and 1.00 inclusive. Nominally it would be a “number” rather than an integer; practically, it was implemented as a BigDecimal.

It must be a non-integer (a real number or decimal, though 0.00 and 1.00 are allowed values), and the exponent function must be a real-number (or equivalent) function, not an integer function. In actuality, Andy coded it as a decimal constrained to two decimal places and replicated the functionality of a non-integer exponent by applying an integer exponent equal to 100x the parameter value, combined with taking a 100th root… which is an elegant way to save a little on memory and processing.
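A toy check of that identity, using float64 purely for illustration (the exponent value is an assumed example; the real implementation uses a decimal type):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	x := 4.0  // bin multiplier for a node at the 4x ceiling
	p := 0.67 // example two-decimal exponent value

	direct := math.Pow(x, p)                        // non-integer exponent applied directly
	viaInt := math.Pow(math.Pow(x, 100*p), 1.0/100) // integer exponent (67), then 100th root

	fmt.Println(direct, viaInt) // both ≈ 2.532, since x^p == (x^(100p))^(1/100)
}
```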

We think that the suppositions we make about the possible post-PIP-22 scenarios are as valid as yours. We show that there is a possible scenario in which the 15k node runners receive fewer rewards for not having done any consolidation. Any other scenario is possible, but we show that there exists at least one that is harmful to small node runners. Moreover, we show that a simple change in the proposed parameters can give the DAO the opportunity to remedy this situation.

This can happen with or without PUP-21/PIP-22. Many node runners have already figured out how to achieve high QoS without the need to implement any kind of consolidation. We welcome the reduced gains from this source.

The expected reduction of rewards due to a higher QoS of the network is present in our model. The overall QoS of the network will rise if the consolidation is uneven along the QoS tiers (as we suggested); however, this increase will not come from an improvement in the QoS of individual nodes but rather from the compounding of low-QoS nodes. The Cherry Picker is already filtering low-QoS nodes out of servicing, so the average increase in the QoS of the network won’t necessarily be reflected as a higher QoS for the application, since the number of high-QoS nodes will be lower in absolute terms.
Also, the compounded low-QoS nodes will see more gains because they are compounded, not because they have increased their QoS, and part of this gain will come at the expense of non-compounding nodes.

Whether or not the mentioned assertion is correct (we can run those numbers if needed), it will only prove that there is at least one scenario where the problem we point out does not arise. It will not invalidate the result we have shown. Either scenario is equally possible given the available information.


We only want to ensure that the DAO will have all the tools for addressing any possible scenario.

The exponent is a decimal number in the range 0–1, in 0.01 steps.

Thanks for the clarification. The spec in PIP-22 and the pseudocode read as the opposite.


No problem! Yes, it was modified during development of PIP-22. I had assumed the calculations were done in decimals already.

Please clarify. The spec in PIP-22 reads: “1. ValidatorStakeFloorMultiplierExponent - Defines the linearity of the stake-weighting curve, with 1 being linear and <1 non-linear.” True, PIP-22 didn’t explicitly say that the exponent couldn’t be negative, but “1 being linear and <1 non-linear” implies a value between 0 and 1.

Pseudocode in PIP-22 reads:

```
// Calculate weight
weight = (flooredStake/ValidatorStakeFloorMultiplier)^(ValidatorStakeFloorMultiplierExponent)/(ValidatorStakeWeightMultiplier)
```

I believe this is consistent with the above. the only confusing part being that flooredStake/ValidatorStakeFloorMultiplier is nominally an integer, so the pseudocode leaves out an implied conversion to non-integer equivalent of this bin multiplier (i.e 1.0, 2.0, 3.0 or 4.0 rather than 1,2,3 or 4) in order to complete the calculation