From the linked repository:
We propose to enable the DAO to implement a non-linear weight staking.
I just want to clarify: by non-linear here, do you mean ValidatorStakeFloorMultiplierExponent > 1 or ValidatorStakeFloorMultiplierExponent < 1?
Strictly <1. Otherwise nodes with high compounding levels will earn even more.
It might be important to get clarification from someone working on the PIP-22 implementation about this, but given that this value is used directly for computing rewards, I'm almost positive that it is going to need to be an integer. If that is the case, I'm not sure that your proposed remediation is possible under the current mechanism of PIP-22.
The exponent is a decimal number from 0 to 1 in 0.01 steps. With the PIP-22 changes, the rewards are calculated in decimal and then truncated back to int after the calculation: pocket-core/reward.go at 2a30eba1d9e5e251401e49d9fcf05397b83a8c0e · liquify-validation/pocket-core · GitHub.
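Roughly, the shape of that calculation is sketched below (illustrative only, not the actual reward.go code; the relay count, per-relay multiplier, and weight are made-up numbers): the reward is computed with the fractional weight and only truncated back to an integer amount at the end.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Illustrative values only, not on-chain parameters or measured network data.
	relays := 23000.0        // relays served
	tokensPerRelay := 0.0013 // POKT minted per relay (stand-in for the real multiplier)
	weight := 2.37           // fractional stake weight from the PIP-22 formula

	// The reward is carried in decimals throughout the calculation...
	reward := relays * tokensPerRelay * weight

	// ...and only truncated back to an integer denomination afterwards.
	fmt.Printf("decimal reward: %.4f, truncated: %d\n", reward, int64(math.Trunc(reward)))
}
```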
I will check the report in the morning (it's late here now), but at first glance I think you guys are calculating the weight incorrectly. Again, it's like midnight here and it has been a long day, so I will double-check in the morning.
Re 1: The intention of the proposal is NOT to discourage 15k node runners. There may be SOME consolidation patterns that emerge where variance causes the reward count that used to be seen prior to consolidation to shift up or down by some percentage or another (of order 20% in the worst probabilistic case) for some user group or another. The case of the high-QoS small node runner was looked at per your previous feedback prior to putting this to a vote, as well as a number of other cases having to do with small chains. Weighing probability of occurrence vs. impact of occurrence, we feel comfortable that all risks are well managed. Furthermore, we have already pointed out in the dissenting opinion section above that any reduction in the reward differential that a high-QoS node runner used to enjoy above the system average will be precisely because system-average QoS is improved by PIP-22/PUP-21, which is a good thing, not a negative thing. The exact same reduction in reward differential would occur if all competing nodes, for example, figured out how to get their latency under 150 ms. Since everyone would now have low latency, the differential reward advantage of the node runner who used to run the only low-latency node provider in town would disappear and his reward would drop to the system average. It would be absurd to think that the system ought to discourage other node runners from improving their latency in order to protect the existing reward differential of the node runner who already has low latency.
Re 2: It is well understood and previously acknowledged that allowing nodes to consolidate may decrease the pain pressure on high-cost or low-QoS service providers and node runners sufficiently to cause them to slack off for a season on expending the time, effort, and dollars needed to continue reducing per-node costs and improving QoS. The alternative is to do nothing and allow maximum pain pressure (negative returns, etc.) to drive out all but the very fittest. Ultimately the voters will have to decide which is best for this season of Pocket. I do point out, however, that it is in part for this very reason that we rejected a more aggressive consolidation ceiling (see motivation 3 of the proposal). I also point out that node optimization efforts seem to be in full swing across all the large node runners, and the momentum of that effort will keep optimization ongoing even after PUP-21 is implemented.
As to your Monte Carlo model, I will not have time to look at it this week, but the assertion made in the TG chat, that the decrease in reward to 15k node runners will persist and even be exacerbated if consolidation is uniform across all QoS tiers, does not add up. I advise rerunning the model with uniform consolidation as the input and checking for possible errors.
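For whoever reruns that check, here is a minimal toy sketch of the kind of comparison being suggested. It is my own simplified stand-in for cherry-picker selection, not POKTscan's model; the 10-ticket/1-ticket split, the 10% pre-consolidation low-latency share, and the 3-into-1 consolidation ratio are all assumptions chosen for illustration.

```go
package main

import (
	"fmt"
	"math/rand"
)

// expectedPSelect estimates, by Monte Carlo, the average selection probability of one
// non-consolidating low-latency node (10 tickets) in a 24-node session, given the
// fraction of low-latency nodes among the other 23 slots. High-latency nodes get 1 ticket.
func expectedPSelect(lowLatencyFraction float64, trials int) float64 {
	const myTickets = 10
	sum := 0.0
	for i := 0; i < trials; i++ {
		total := myTickets
		for j := 0; j < 23; j++ {
			if rand.Float64() < lowLatencyFraction {
				total += 10
			} else {
				total += 1
			}
		}
		sum += float64(myTickets) / float64(total)
	}
	return sum / float64(trials)
}

func main() {
	f0 := 0.10 // assumed pre-consolidation share of low-latency nodes

	// Uniform consolidation: every QoS tier shrinks by the same factor,
	// so the low-latency share of the surviving node set is unchanged.
	fUniform := f0

	// Uneven consolidation: only high-latency nodes consolidate 3-into-1,
	// so the low-latency share of the surviving node set rises.
	fUneven := f0 / (f0 + (1-f0)/3)

	fmt.Printf("before / uniform consolidation: %.3f\n", expectedPSelect(fUniform, 200000))
	fmt.Printf("uneven consolidation:           %.3f\n", expectedPSelect(fUneven, 200000))
}
```

In this toy model the uniform case leaves the session mix statistically identical to the pre-consolidation case by construction; only the uneven case shifts it.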
ValidatorStakeFloorMultiplierExponent is range-bound between 0.00 and 1.00 inclusive. Nominally it would be a "number" rather than an integer; practically it was implemented as a BigDecimal.
It must be a non-integer (a real number or decimal, though the values 0.00 and 1.00 are allowed) and the exponent function must be a real-number (or equivalent) function, not an integer function. In actuality, Andy coded it as a decimal constrained to two decimal places and replicated the functionality of a non-integer exponent by applying an integer exponent equal to 100x the parameter value and then taking a 100th root... which is an elegant way to save a little on memory and processing.
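In other words, the trick rests on the identity x^p = (x^(100p))^(1/100): when p is constrained to two decimal places, 100p is an integer, so only an integer power and a fixed 100th root are needed. A toy float64 illustration of the identity (the actual implementation works in decimals, not floats; the values of x and p here are arbitrary):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	x := 3.0  // bin multiplier, e.g. a 45k-stake node with a 15k floor
	p := 0.73 // exponent constrained to two decimal places

	direct := math.Pow(x, p)                            // non-integer exponent applied directly
	intExp := math.Round(p * 100)                       // 100x the parameter is an integer exponent
	viaRoot := math.Pow(math.Pow(x, intExp), 1.0/100.0) // integer power, then 100th root

	fmt.Printf("direct: %.6f  via integer power + 100th root: %.6f\n", direct, viaRoot)
}
```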
We think that the suppositions we make about the possible post-PIP-22 scenarios are as valid as yours. We show that there is a possible scenario in which the 15k node runners receive fewer rewards for not having done any consolidation. Any other scenario is possible, but we show that there exists at least one that is harmful to small node runners. Moreover, we show that a simple change in the proposed parameters can give the DAO the opportunity to remedy this situation.
This can happen with or without PIP-22/PUP-21. Many node runners have already figured out how to achieve high QoS without needing to implement any kind of consolidation. We welcome the reduced gains from this source.
The expected reduction of rewards due to a higher network QoS is present in our model. The overall QoS of the network will rise if the consolidation is uneven across the QoS tiers (as we suggested); however, this increase will not be due to an improvement in the QoS of individual nodes but rather to the compounding of low-QoS nodes. The Cherry Picker is already filtering low-QoS nodes from servicing, so the average increase in network QoS won't necessarily be reflected as higher QoS for the application, since the number of high-QoS nodes will be lower in absolute terms.
Also, the compounded low-QoS nodes will see more gains because they are compounded, not because they have increased their QoS, and part of this gain will come at the expense of non-compounding nodes.
Even if the mentioned assertion is correct (we can run those numbers if needed), it would only prove that there is at least one scenario where the problem we point out does not arise. It would not invalidate the result we have shown. Either scenario is equally possible given the available information.
We only want to ensure that the DAO will have all the tools for addressing any possible scenario.
The exponent is a decimal number from 0 to 1 in 0.01 steps.
Thanks for the clarification. The spec in PIP-22 and the pseudocode read as the opposite.
No problem! Yes, it was modified during the development of PIP-22. I had assumed the calculations were done in decimals already.
Please clarify. The spec in PIP-22 reads:
"1. ValidatorStakeFloorMultiplierExponent - Defines the linearity of the stake-weighting curve, with 1 being linear and <1 non-linear."
True, PIP-22 didn't explicitly say that the exponent couldn't be negative, but the wording "1 being linear and <1 non-linear" implies a value between 0 and 1.
Pseudocode in PIP-22 reads:
// Calculate weight
weight = (flooredStake/ValidatorStakeFloorMultiplier)^(ValidatorStakeFloorMultiplierExponent)/(ValidatorStakeWeightMultiplier)
I believe this is consistent with the above. The only confusing part is that flooredStake/ValidatorStakeFloorMultiplier is nominally an integer, so the pseudocode leaves out an implied conversion of this bin multiplier to its non-integer equivalent (i.e. 1.0, 2.0, 3.0 or 4.0 rather than 1, 2, 3 or 4) in order to complete the calculation.
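To make that conversion explicit, here is a minimal runnable rendering of the pseudocode (the parameter values are placeholders for illustration, not the proposed settings):

```go
package main

import (
	"fmt"
	"math"
)

// Placeholder values for illustration only; the real values are DAO-set parameters.
const (
	validatorStakeFloorMultiplier         = 15_000_000_000 // 15k POKT expressed in uPOKT
	validatorStakeFloorMultiplierExponent = 1.0
	validatorStakeWeightMultiplier        = 1.0
)

// weight mirrors the PIP-22 pseudocode, converting the integer bin multiplier
// (flooredStake / ValidatorStakeFloorMultiplier) to a float before exponentiation.
func weight(flooredStake int64) float64 {
	bin := float64(flooredStake / validatorStakeFloorMultiplier) // 1, 2, 3, 4 -> 1.0, 2.0, 3.0, 4.0
	return math.Pow(bin, validatorStakeFloorMultiplierExponent) / validatorStakeWeightMultiplier
}

func main() {
	for bin := int64(1); bin <= 4; bin++ {
		fmt.Printf("bin %d -> weight %.2f\n", bin, weight(bin*validatorStakeFloorMultiplier))
	}
}
```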
The only confusing part is that flooredStake/ValidatorStakeFloorMultiplier is nominally an integer
Yeah, this was the confusion. I just wanted to make sure that there weren't incorrect assumptions about what levers we had to tune this value. Given that we're working with on-chain token balances, I personally don't assume decimal → integer type conversions come for free. Didn't mean to clog anything up here.
Yes, there is at least one such scenario, as we had already pointed out in the proposal. But thank you for the excellent modeling work. As to whether or not it is actually "harmful" to have the differential reward advantage enjoyed by a high-QoS node shrink, that is probably overstating things a bit.
Absolutely agreed. This is one of the reasons I wanted to make sure the exponent knob was available to the DAO (if you recall the June discussions on PIP-22), even if we choose not to use it at first. We chose not to go with nonlinear weighting as an initial setting in this proposal because we did not discern sufficient community support to back nonlinear weighting at this time, and also to embrace the motivation of "keep it simple". If it turns out that the feedback from the community is to reject PUP-21 because it has linear rather than non-linear weighting, we will be more than glad to pause the current vote and resubmit with an exponent of 0.8 or so. My best guess, however, is that the best path forward is to proceed with linear weighting, and then, if some possible edge case materializes, we always have the ability to submit a new PUP to lower the exponent from 1.0 to a smaller value.
I appreciate the sharp eyes!
Can you elaborate more on this "one scenario" that's pointed out in the proposal? I'm having issues understanding this edge case and the impacts of it.
Let me elaborate. Suppose that with consolidation, 36k nodes consolidate down to 12k nodes with an average bin size of 3.0. In response, the Foundation sets ValidatorStakeWeightMultiplier to 3.0. In most scenarios, this will cause daily rewards per 15k POKT staked to be the same after consolidation as before. Take, for example, a node staked to 15k that opts not to consolidate. Suppose before consolidation this node earned ~30 POKT per day (roughly 23k relays x 0.0013 TokenToRelayMultiplier). After consolidation, this node's reward per relay would drop by a factor of 3 because of ValidatorStakeWeightMultiplier being set to 3.0... but this will be balanced by its being regionally selected into a 24-node cherry picker 3x as often.
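Putting rough numbers on that (a back-of-the-envelope sketch using the figures above; none of these are measured network values):

```go
package main

import "fmt"

func main() {
	// Illustrative figures from the example above, not measured network data.
	const tokenToRelayMultiplier = 0.0013 // POKT per relay

	// Before consolidation: ~23k relays/day at full weight.
	relaysBefore, weightBefore := 23000.0, 1.0
	fmt.Printf("before: ~%.0f POKT/day\n", relaysBefore*weightBefore*tokenToRelayMultiplier)

	// After consolidation: 36k nodes -> 12k nodes, so a non-consolidating 15k node is
	// selected into sessions ~3x as often, while ValidatorStakeWeightMultiplier = 3.0
	// cuts its per-relay weight to 1/3. The two effects cancel.
	relaysAfter, weightAfter := relaysBefore*3.0, 1.0/3.0
	fmt.Printf("after:  ~%.0f POKT/day\n", relaysAfter*weightAfter*tokenToRelayMultiplier)
}
```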
The edge case has to do with the probability of selection to service a relay within the cherry picker. In most scenarios, P_selection will be the same after consolidation as before. However, suppose consolidation took place preferentially among high-latency nodes while low-latency nodes did not consolidate. Then the average number of low-latency nodes in the cherry picker will go up. In this scenario, my 10 tickets within the cherry picker translate to a lower P_selection after consolidation than before.
If the whole system (except my node) had very high latency, then no difference: both before consolidation and after, I get 10 tickets and the other 23 nodes get 1 ticket each, so I have a P_select of ~30% (10/33). Same if the whole system had low latency, no difference: both before consolidation and after, all 24 nodes get 10 tickets and I have a P_select of ~4% (10/240).
The edge case would be strongest where only high-latency nodes consolidate and roughly 5 to 20% of nodes are low-latency prior to consolidation. This is where I would expect a greater probability of seeing more competing low-latency nodes in the cherry picker as a result of consolidation. For example, if before consolidation there averaged 1 other low-latency node in the cherry picker besides my node, I would have a P_select of ~24% (10/42). After consolidation there might now be 2 or 3 other low-latency nodes in the cherry-picker pool besides my node, so my P_select would drop to the range of ~17% (10/60) to ~20% (10/51)... Hence my node might see daily rewards drop from ~100 POKT/day (super high because of my low-latency advantage) to 60 to 80 POKT/day (still beating the system average, but not as drastically as before).
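The fractions above come straight out of a naive ticket count; a small sketch reproducing them (the 10-tickets-for-low-latency vs. 1-ticket-for-high-latency split is the same simplifying assumption used in the examples):

```go
package main

import "fmt"

// pSelect is the naive selection probability used above: my tickets divided by the
// total tickets held by all 24 nodes in the session (me plus 23 others).
func pSelect(myTickets, otherLowLatency, otherHighLatency int) float64 {
	total := myTickets + otherLowLatency*10 + otherHighLatency*1
	return float64(myTickets) / float64(total)
}

func main() {
	fmt.Printf("all 23 others slow:   %.0f%% (10/33)\n", 100*pSelect(10, 0, 23))
	fmt.Printf("all 23 others fast:   %.0f%% (10/240)\n", 100*pSelect(10, 23, 0))
	fmt.Printf("1 other fast node:    %.0f%% (10/42)\n", 100*pSelect(10, 1, 22))
	fmt.Printf("2 other fast nodes:   %.0f%% (10/51)\n", 100*pSelect(10, 2, 21))
	fmt.Printf("3 other fast nodes:   %.0f%% (10/60)\n", 100*pSelect(10, 3, 20))
}
```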
This is the edge case brought up by POKTscan. You can see my answer to this edge case concern in the dissenting opinions section of the proposal.
MSA, thanks for the summary. It was truly helpful.
Given that there is one scenario where small node runners without consolidation can be impacted, and following the "do no harm" principle, I am more in favor of non-linear than linear staking. Instead of going backwards to fix an edge-case scenario, I say we flip it around and slowly increase the exponent up to 1 if everything plays out as it should. I do not see much dissenting opinion that this shouldn't be the case; there were only conversations regarding non-linear vs. linear, but no strong indicator against it.
We considered both ramping up the cap (2x to 3x to 4x) and ramping up the exponent (0.5, 0.6, 0.7, etc.), pausing at each step to study system impact. We rejected both approaches because making a decisive step to the final state encourages greater consolidation than would take place using a gradual ramp-up. That benefit, to our thinking, outweighs having to cover an edge case that we do not even consider to be negative system behavior. Once we turn on linear weighting, we can always choose to dial back the exponent to 0.9 or 0.8 without triggering a large negative response among those that maxed out consolidation.
These parameters decide whether someone should max out on consolidation or not, especially with linear staking. Bumping it down may not be as easy as it sounds if the majority of the network decides to consolidate to the max. I believe there is less friction in being conservative and bumping it up, rather than convincing a load of max-consolidated nodes that their rewards are going to be cut if we need to dial back.
Let me clarify... what I was trying to say is that if we dial back from exp = 1.0 to only 0.9 or 0.8, that is not so drastic as to undermine a node runner's premise for staking to the maximum consolidation possible. It may be a nuisance to large consolidators to see their average reward go down slightly and be redistributed to those who chose not to consolidate at all, but it is not enough to cause second-guessing as to whether consolidating to the max was the right choice. I agree with you when it comes to comparing exp 0.5 to 1.0, for example, but I view the 0.8 to 1.0 range (and especially the 0.9 to 1.0 range) as pretty frictionless. Further, I think the whole premise of this edge case (that high-latency node runners will consolidate to a greater degree than low-latency node runners) is pretty suspect and unlikely to materialize. Maxing out consolidation is the current goal, hence exp 1.0; setting exp substantially below 1.0 will not achieve the same degree of consolidation.
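For a sense of scale, here is the raw weight advantage of a max-consolidated bin-4 node over a bin-1 node at a few exponents. This toy comparison deliberately ignores any accompanying re-tuning of ValidatorStakeWeightMultiplier and the resulting redistribution:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Relative per-relay weight of a bin-4 node vs. a bin-1 node: 4^exp vs. 1^exp.
	for _, exp := range []float64{1.0, 0.9, 0.8, 0.5} {
		fmt.Printf("exponent %.1f: bin-4 node earns %.2fx a bin-1 node per relay served\n",
			exp, math.Pow(4, exp))
	}
}
```

Dropping from 1.0 to 0.9 or 0.8 trims that advantage from 4.0x to roughly 3.5x or 3.0x, whereas 0.5 cuts it to 2.0x.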