The main thing is not to divide by 15,000 twice, or else you'll get a weighting factor on the order of 0.0001 rather than in the range of 1 to 5 (or whatever max weight is desired).
Since ValidatorStakeWeightMultiplier is a DAO-controlled parameter that occurs in only this one location in the code, it is an implementation choice whether to place it inside or outside the exponent. Just realize that the choice of implementation will affect the value that is chosen. For example, if the exponent is 0.5 to do sqrt weighting and ValidatorStakeWeightMultiplier were implemented outside the exponent, then setting ValidatorStakeWeightMultiplier to 4 would make sense to keep top-bin rewards unchanged in the most likely consolidation scenario; if implemented inside the exponent, it would instead need to be set to 16 to get the same behavior (since taking the sqrt of 16 gets back down to 4). Either way is fine, but I think placing it outside the exponent makes more intuitive sense and keeps this coefficient acting as a linear (or inverse-linear) knob even if the weighting itself is nonlinear.
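A minimal sketch of the two placements, assuming a standalone weight function (the function names and the loop are mine; only the parameter roles follow the proposal):

```go
package main

import (
	"fmt"
	"math"
)

// binSize stands in for ValidatorStakeFloorMultiplier (the 15,000 bin).
const binSize = 15000.0

// weightOutside: multiplier applied outside the exponent - a linear knob.
func weightOutside(flooredStake, exponent, multiplier float64) float64 {
	return math.Pow(flooredStake/binSize, exponent) / multiplier
}

// weightInside: multiplier applied inside the exponent - the knob now acts
// nonlinearly, so a different value is needed for the same effect.
func weightInside(flooredStake, exponent, multiplier float64) float64 {
	return math.Pow(flooredStake/(binSize*multiplier), exponent)
}

func main() {
	for _, bins := range []float64{1, 4, 16} {
		stake := bins * binSize
		// with sqrt weighting, outside=4 and inside=16 coincide for every
		// stake, since sqrt(16) = 4
		fmt.Println(weightOutside(stake, 0.5, 4), weightInside(stake, 0.5, 16))
	}
}
```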
The total inflation of the network under this proposal over a certain time frame can be described with the following summation. (P.S. I am including it for concreteness and clarity, to minimize confusion, and because of msa's precedent.)
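Reconstructing the summation here from the surrounding discussion (this is my rendering, not the original image; it assumes linear weighting with the ceiling as the divisor, which is how the math in the accompanying doc is set up):

$$
\text{TotalInflation} \;=\; \sum_{i \,\in\, \text{nodes}} \text{Relays}_i \times \text{RelaysToTokensMultiplier} \times \frac{\text{StakeFloor}_i}{\text{ServicerStakeWeightCeiling}}
$$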
On larger time frames, the number of relays performed by a node per session becomes equal across the network, so for this example we can also factor Relays out of the sum.
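With Relays factored out, the sum collapses to (same caveats as above, with $N$ the node count):

$$
\text{TotalInflation} \;=\; \text{Relays} \times \text{RelaysToTokensMultiplier} \times N \times \frac{\text{AverageStakeFloor}}{\text{ServicerStakeWeightCeiling}}
$$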
This has some serious implications. Firstly, it means that in order for the network's inflation to remain the same, the network would have to fully consolidate into nodes that meet the weight ceiling. (I still need to integrate the exponentiation, which I will do; I just wanted to get this out there. The effect still stands, though to a lesser degree.) If PIP-22 were implemented, then until nodes consolidate, network rewards and inflation would be cut to 15000/ServicerStakeWeightCeiling of their current level. This would make node running unprofitable for many, including small node runners.
MSA alluded to this as well.
Furthermore, a node’s rewards become dependent on the consolidation of the rest of the nodes on the network.
The probability of being selected into a session under the current session-selection algorithm is as follows:
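Roughly (my notation; NodesPerSession stands for however many nodes the protocol selects per session):

$$
P(\text{session}) \;=\; \frac{\text{NodesPerSession}}{\text{TotalNodes}}
$$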
If PIP-22 were to pass and the amount of POKT staked on the network were to remain the same, the number of nodes on the network could be described in terms of AverageStakeFloor.
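That is (my rendering, with TotalPoktStaked the network-wide staked supply):

$$
\text{TotalNodes} \;=\; \frac{\text{TotalPoktStaked}}{\text{AverageStakeFloor}}
$$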
Substituting this back into the session probability, we have
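(Again my rendering of the missing equations; the second line combines the selection probability with the weighted per-relay reward:)

$$
P(\text{session}) \;=\; \frac{\text{NodesPerSession} \times \text{AverageStakeFloor}}{\text{TotalPoktStaked}}
$$

$$
\text{NodeReward} \;=\; P(\text{session}) \times \text{Relays} \times \text{RelaysToTokensMultiplier} \times \frac{\text{StakeFloor}}{\text{ServicerStakeWeightCeiling}}
$$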
This describes the total rewards of a node, which is not a great representation of this proposal, since what ultimately matters is rewards per POKT staked. So, to glean this value, we can divide NodeReward by StakeFloor.
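Doing that division, the node's own StakeFloor cancels (my rendering):

$$
\frac{\text{NodeReward}}{\text{StakeFloor}} \;=\; \frac{\text{NodesPerSession} \times \text{AverageStakeFloor} \times \text{Relays} \times \text{RelaysToTokensMultiplier}}{\text{TotalPoktStaked} \times \text{ServicerStakeWeightCeiling}}
$$

Note that only the network-wide AverageStakeFloor appears on the right-hand side; a node's own stake has dropped out entirely.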
The probability of session selection would increase if the network were to consolidate and the AverageStakeFloor were to increase. Relays per session would increase as well.
This equation indicates that a node's rewards depend on its probability of session selection, which in turn depends on how much the rest of the network has consolidated. And light client, which makes the additional cost of running more servicers marginal and removes the need to unstake, significantly diminishes this incentive to consolidate. The network as a whole has an incentive to consolidate, since this would increase the AverageStakeFloor and thus the rewards; however, an individual node runner using light client has no incentive to consolidate, because doing so would have only a marginal effect on AverageStakeFloor.
So, in summary, PIP-22 would instantaneously lead to an unsustainable reduction in rewards, introduce an inability to return to previous reward levels, and create a dependency on other node runners consolidating to restore those levels (node runners who might be using light client and will have no incentive to consolidate). MSA describes this in his post on this proposal where C is near zero.
So, in order to remedy the aforementioned issues, a few options come to mind:
You could gradually increase the ServicerStakeWeightCeiling parameter; however, this wouldn't make a ton of sense, as it would require multiple consolidations.
You could increase the validator incentive to encourage light-client users to consolidate, but this would require an additional GOODVIBES-esque discussion.
You could modify the poktperrelay parameter in accordance with the AverageStakeFloor (I need to think more about this).
There is a notion that this proposal achieves the same thing as vanilla stake weighting, but the externalities above indicate that this is not the case.
I think that we should implement stake weighting as originally proposed in PIP-23 (with some small modifications). Stake weighting preserves rewards and ensures fairness across the whole network, bolsters the validator set tremendously, and doesn’t introduce reward reductions, complexity, and uncertainty. If we are already going to undergo a consensus change for PIP-22, we should do it the right way and implement vanilla stake weighting.
Here’s the google doc link if you’d like to have better views of the equations.
Thanks @Addison. I will work through the reply and accompanying doc. At first blush I do not see how aggregate rewards drop with PIP-22 if ValidatorStakeWeightMultiplier is initially set to 1.0 and is only adjusted upward during the first month in response to aggregate consolidation effects.
Agreed that weighting the probability of being selected as a servicer of a relay has much going in its favor as opposed to weighting the reward per relay. It is theoretically cleaner and has the main benefit of decoupling an individual's decision to consolidate from the behavior of other validators. However, it would also be a fundamental shift in the direction and future of Pocket, and according to many of the responders it would be more difficult to implement. Thus, for speed of action to bring immediate relief to validators trying to survive the current bear market, and to do so without changing the democratic underpinnings of servicer selection, I favor passing PIP-22.
Forcing a graduated consolidation is not a bad idea. Is the network really so fragile that multiple episodes of consolidation would strain it? E.g., the DAO could publish the change of PIP-22 and indicate that on day one MaxValidatorStakeFloorMultiplier will be set to 2, and that each month it will be increased by at most +1, subject to DAO review of how the network as a whole, and economic conditions across various classes of validators, have responded to the consolidation up to that point. That being said, I think we are also fine immediately setting it to 5. But let me review what you've written…
It's possible to do this, but it would be extremely slow and require many consolidations. Say you have 10 nodes and 150,500 POKT. If the parameter were set to 1.25, you could unstake your nodes and restake onto 8 nodes. In the meantime, the network's rewards would be reduced by 25%, which is still significant. Then, after everyone consolidates to bring the rewards back to equilibrium, should the parameter be slightly raised again, and then everyone reconsolidate again? I'm not sure there is an elegant way to do this.
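For concreteness, the arithmetic behind that example (my reading of "the parameter" as the ceiling expressed as a multiple of the 15,000 bin):

$$
15{,}000 \times 1.25 = 18{,}750, \qquad \left\lfloor \tfrac{150{,}500}{18{,}750} \right\rfloor = 8 \text{ nodes}
$$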
Why do you think it causes a shift? People definitely think it's difficult to implement, but I believe that it is very doable.
This would require everyone to unstake repeatedly though.
Sounds good. Looking forward to it. Bit nervous. Also, are you on Discord/TG? I'm Addi#0007 and @thunderheadotc.
I have not joined the POKT Discord room or TG group yet but need to… I'm fairly new to POKT and got wind from the poktpool Discord that PIP-22 looked to be coming to a vote, so I made my way here to take a look.
Umm, I think you're mixing up parameters here? Remember the discussions from earlier… with linear weighting (which I think we should go with on day 1, until we study the behavior of nonlinear weighting better), I am incentivized to consolidate to the maximum amount I can, limited only by my individual resources and MaxValidatorStakeFloorMultiplier. In this scenario, ValidatorStakeWeightMultiplier does not influence validator behavior. Whether it is set to 1 or 1.25 or 2 or 4 or 5, I am most profitable if I immediately consolidate to 5x or 8x or whatever max I am allowed. Setting it to 1 means the ones who can't consolidate won't see a drop in rewards, while those who can consolidate will see a boost in rewards. Setting it higher will proportionally dial down rewards for everyone - both big and small - to keep the transition period from being too inflationary.
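To make that concrete (my numbers, using the linear weight with the 15,000 bin):

$$
\text{weight} \;=\; \frac{\text{StakeFloor}/15{,}000}{\text{ValidatorStakeWeightMultiplier}}
$$

With the multiplier at 1, a 75k node gets weight 5 and a 15k node weight 1 (unchanged); with the multiplier at 5, they get 1 and 0.2. The 5:1 advantage from consolidating is identical either way - the multiplier only scales everyone up or down.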
Agreed. I'm not recommending it. Just saying it's possible if somehow there were big concern over immediately setting the max to a value like 5 or 8.
Think for example of the debate going on in the ETH community on the same subject regarding the shift to POS. By tying probability of being selected as validator to the amount staked you risk concentrating validator decision making into the hands of too few big players which opens up a network to security issues related to the big validators colluding with each other to self-aggrandize at the expense of others. That is why I call it a fundamental philosophy/policy shift from pure-random selection.
Luis and I have pushed edits to the proposal that integrate most of @msa6867 's feedback and acknowledge @addison 's dissenting opinions.
Given that we’ve incorporated the feedback pertinent to refining the details of the proposed mechanism (stake-weighted rewards) and would be unable to incorporate the feedback that advocates instead for an alternate mechanism (stake-weighted session selection), we feel comfortable that this proposal is ready to go to voting.
I plan to put this proposal up for voting tomorrow. Note that voting will last 7 days (unless a 50% majority approves before the time is up) and discussions about the details of the proposal can still continue.
Thanks @JackALaing. Assuming PIP-22 gets the votes to pass, I would very much like to be able to see proposed code changes before commit. There have been so many different parameters proposed, removed, re-added, repurposed, etc., that I would love to just check for consistency between proposed implementation and intention. Along that line, just to help future developers, auditors, etc., it is probably prudent to choose variable names in keeping with their final role in the code. E.g., ValidatorStakeFloorMultiplier ought rather to be called ValidatorStakeFloorMinimum or ValidatorStakeFloorBinSize, and ValidatorStakeWeightMultiplier might more aptly be called ValidatorStakeWeightDivisor, etc.
Re “Validate Edit Stake”: I assume the intention is to always run with VEDIT=FALSE and only set it to TRUE in a true emergency? Even so, when set to TRUE there is still a disconnect between the pseudocode and the intention.
(1) In the current pseudocode, no one would be able to add tokens to replenish following a burn event as long as VEDIT=TRUE. Is this the intention? If not, it is easily fixed.
(2) the line
in “Proof lifecycle validation” sort of implies that ServicerStakeWeightCeiling is chosen to be a whole multiple of ValidatorStakeFloorMultiplier; whereas the line
in “Validate Edit Stake”
implies that this same parameter is set to some sufficiently large value greater than a whole multiple of ValidatorStakeFloorMultiplier in order to accommodate the need for a prudent reserve. Again, this is easily fixed such as by changing the line in Validate Edit Stake to:
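Something along these lines, sketched in the proposal's pseudocode style (the exact replacement line is my guess, reusing the proposal's parameter names):

```
// cap at the ceiling, then round down to a whole bin so both code paths agree
FlooredStake = min(stake, ServicerStakeWeightCeiling)
FlooredStake = FlooredStake - (FlooredStake % ValidatorStakeFloorMultiplier)
```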
Then again, is it really necessary to calculate FlooredStake twice (and using two different methodologies)? Perhaps just pass the value obtained in “Validate Edit Stake” to “Proof lifecycle validation”?
These are simply implementation-level details and have no bearing on the vote at hand - the gist of which I think is well understood.
The math is spot on up to this point, except that it places ServicerStakeWeightCeiling as the divisor rather than “ValidatorStakeWeightMultiplier”, which is the correct parameter to use in the equations. My whole point in the above parameter-setting discussions is to set this parameter to 1 on day one and adjust it over the first month to be on par with what you call “AverageStakeFloor”, in order to keep total minted from getting out of whack during the transition. As long as total minted is more or less kept even, then to first order the small validators who cannot consolidate to larger bin increments should not see a reduction in rewards due to others consolidating.
There is a very real second-order effect that may come into play and ought to be a concern for everyone: as soon as staking at max levels shows high profitability, not only will those who can consolidate do so, but all sorts of new max-staked nodes are likely to flood into the space again, putting the squeeze on the small players until they are completely squeezed out, causing another round of an overprovisioned network and eroded profits for all, until the reduced profitability once again causes new node staking to slow down. This is just simple economics 101. Setting the exponent parameter significantly less than 1 (e.g., to 0.5) will take care of that by making sure it is not overly profitable to run max-staked nodes.
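To put a number on it (my arithmetic, taking the 5-bin ceiling as the example):

$$
\left(\tfrac{75{,}000}{15{,}000}\right)^{0.5} = \sqrt{5} \approx 2.24
$$

so a max-staked node would earn about 2.24x the reward on 5x the stake - roughly 0.45x the reward per POKT staked - which keeps max-staked nodes from being overly profitable.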
I have not had a chance to look into light client yet, but will do so. I have a feeling introducing light client may be even more important than PIP-22 in the long run. I'm not seeing incompatibility, though, especially if we set the parameters right: adjusting ValidatorStakeWeightMultiplier to keep on par with AverageStakeFloor, and dialing the exponent parameter to incentivize deploying a greater ratio of light to regular clients, or vice versa, as the system needs.
I appreciate all the work that has gone into figuring out PIP-22, but after having spent hours playing with the economics of this version of weighted stake, I will be voting no.
To have a successful economic model, you have to be able to keep a balance while addressing:
Number of nodes
Validation Security
Price changes
Inflation goals
The new parameters being introduced in PIP-22 are much too complex and sensitive to be effectively used to create a balanced economy. I’ve discussed the economics with others and so far I haven’t seen or heard of a working approach to mapping out how the economics would balance out. I feel the theory of PIP-22 is interesting, but I don’t see the real economics of it playing out in a balanced, predictable way.
I believe investing core-dev resources into PIP-22 would be an inefficient use of resources when we can easily increase the minimum stake to achieve the same outcome. Why invest time into a v0 feature that will have major economic issues, when we already have the params we need to address our needs?
Now that I've given PIP-22 a fair shake and seen how the economics will play out, I will return my attention to PUP-17 and will be updating it with a more in-depth economic model.
I've done the same pondering and now tend to agree. The complexity this will introduce into the economy will make it very hard to keep the results predictable, and it will require constant tweaking as the network evolves. Imagine having to explain all this to someone looking to enter the network. I also don't like the idea of paying more per relay based on stake amount; yes, the end result is the same, but the optics are bad.
I have always disliked the idea of raising the min stake, just because it would cut off small node runners and raise the entry barrier should the token price increase significantly in the future. But, looking at the macro environment, I'm not sure how much of a concern that is at this point. And people were buying in at $45k per node in January. It's also a lot easier to decrease the min stake in the future should the need arise. I think a min stake increase - along with a strong financial incentive for validators to go above and beyond - would be cleaner and easier to understand and implement at this point. And folks that fall below can beef up their stake or join a pool where they can benefit from economies of scale.
I think people have lost sight, or maybe gotten lost in @msa6867's amazing but complicated maths. There is no reason it can't be a balanced system once the consolidation point is found (like I mentioned on today's call); the parameters give the DAO control over consolidation rates and this balance point (the balance point can be set in a similar way to how WAGMI is currently). Once we know how consolidation plays out in the field, the weight can be moved to the average point, and there will only be a small steady state around this point (since it will be a fairly "true" average, given a sample size of 10k-plus nodes).
Also, I spoke with Luis the other day: I'm happy to take on the development, reducing input from the core team. I have already written/planned most of the code changes.
Undoubtedly there is complexity in modelling out the exact outcomes of this proposal with 100% of the uncertainty removed.
However, if that's the barrier we're setting for success, then as far as I see it none of the proposals can pass. Be it GOODVIBES, PUP-19, PUP-17, doubling the minimum stake, the Phase Plan, SWANS, stake-weighted session selection, you name it… None of them have economic outcomes that are 100% understood, with no ambiguity. You can tear holes in any of them.
What I'd say about this proposal, though, is that it's among the most simple, doesn't completely screw any size of node runner, and actually does include a built-in adjustment mechanism (the RelaysToTokensMultiplier), so it's not a complete one-way ticket. You could implement the proposal, find it's not working quite as planned, and easily adjust it down the line. In fact, if you use a completely linear reward model with no exponent, then set the RelaysToTokensMultiplier to 1 and the mechanism can actually be switched off completely if need be. Many of the other proposals do not have this failsafe.
Adjustment is also fairly simple for a linear model, and we already perform regular adjustments for inflation control, so I don't see why this is a big concern here. You can just perform this relatively simple adjustment twice weekly to start, and then eventually monthly, as we already do for WAGMI, and all is well.
It's also true that all of the proposals are built, to some extent or another, on questionable premises. The $100 per-node cost, for example, is pulled from thin air. Or the idea that we need at least 200M POKT staked in validation or the network is at risk of attack. These are actually completely arbitrary numbers. Where is the model that says we need around 200M, not 300M or even 50M, POKT staked in validation? Where is the properly conducted anonymised survey of the entire node-running community to determine what the cost to run a node ACTUALLY is, and how much profit it's possible to actually make in today's environment? How quantifiably necessary are ANY of these proposals?
It's also worth noting that we cannot even model how the economics of Pocket will change into the future right now, even before we change anything! We cannot say for certain what will happen if the POKT price halves again, or triples, what will happen when the lite client gets released, or how WAGMI will continue to affect the economics over time.
To reiterate: My overall point of this rant is that if we’re going to exclude this proposal because it’s possible to pick holes or point out complexities, then we can’t realistically pass anything IMHO.
I've had a little play around with @iaa12's fiddle to try and get my point across that, with the weight scaling (ValidatorStakeWeightMultiplier) set to the average point, the net inflation is no different from where we are now, because people seem to be getting confused around this point.
Note: I am no JS developer (I'm an old-school C/C++ engineer), so this was literally thrown together. I have added a new function, calBalancePoint(); this basically calculates the optimum bin position as:
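The fiddle itself is JS; here is a rough re-sketch in Go of the calculation as I understand it (function and variable names are mine):

```go
package main

import (
	"fmt"
	"math"
)

// calcBalancePoint: the average number of whole 15,000-POKT bins per node.
// Setting ValidatorStakeWeightMultiplier to this value keeps net inflation
// roughly where it was before weighting was applied.
func calcBalancePoint(stakes []float64, binSize float64) float64 {
	totalBins := 0.0
	for _, s := range stakes {
		totalBins += math.Floor(s / binSize) // count whole bins only
	}
	return totalBins / float64(len(stakes))
}

func main() {
	stakes := []float64{15000, 15000, 75000, 45000} // a toy network
	fmt.Println(calcBalancePoint(stakes, 15000))    // prints 2.5
}
```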
Feel free to play around with the number of nodes in the unconsolidated-nodes section; the outputs before and after will be more or less the same (barring some very small rounding/precision errors).
I have also increased the ceiling as an example, because I didn't have time to code this logic into the consolidation code.
I’d like to raise the possibility of this proposal being unconstitutional.
4.25. As the actors responsible for transaction finalization in Pocket Core, Validator Nodes have the power to overthrow the government apparatus of Pocket Network (outlined as the above Legislature, Executive, and Judiciary functions). They are trusted to represent the citizenry (Users) of Pocket Network because their Block Reward is tied directly to the number of relays being processed for developers (i.e. User usage). The Council must never approve Protocol Upgrades that would decouple this incentive alignment.
I think a strong argument could be made that this proposal ties Block Rewards to stake amount much more directly than to the number of relays being processed. Validator earnings would depend primarily on how much a node has staked, not on relays, since a node staked at 75k would earn 5 times more than a node staked at 15k for the same relay, thus skewing the representation validator nodes are entrusted with via Block Rewards.
As such, I believe the Council should not approve this proposal. Should the Council approve the proposal, the directors and supervisors of the Foundation should refuse to implement the Council’s decision per Statute 4.9:
4.9. The Directors and Supervisors of the Foundation can refuse the Council’s decisions subject to the limitations imposed on them by the law and their duty to steward Pocket Network.
Interesting. I don't believe constitutionality has been discussed. It didn't even cross my mind, though I worried about this same issue of incentivizing based on stake size rather than relays. We are trying to build something that has a lot of servicers in the end, so this seems pertinent. I am interested in the network-security convo taking place, but this may remind us of how we said we wanted to get to our goal of balance. Thanks for bringing this to the conversation.
I generally like this proposal overall, but I have to agree with @iaa12: this directly ties Block Reward to amount staked, not to relays, creating a misalignment of incentives according to the Constitution.
Thank you @iaa12 for bringing this up as a possibility. However, there is nothing at all in this proposal (or in PIP-23) that causes a decoupling of the incentive alignment between validators' blockchain rewards and the number of relays being processed for developers. Consider the following:
(1) The constitution never said that validators could not also wear a “servicer” hat and/or could not earn rewards for work they do while wearing a “servicer” hat in addition to the work they do as a validator. Personally, I wish it had; I think it is better for the long-term health and security of the network to force a choice and not let a node wear two hats. Heck, I might even propose that as a PIP. But that is not our current constitution. Need I remind everyone that in the pre-March time frame the vast majority (upwards of 80 to 90%) of a validator node's total reward was a result of earning servicer rewards, not proposer rewards.
(2) In all scenarios - meaning the current situation, post-PIP-22, or post-PIP-23 - no matter how much a validator node chooses to stake, its rewards both on the validator side and on the servicer side (and ergo in the aggregate) are 100% directly proportional to the number of relays being processed for developers. If the total number of relays doubles, its reward doubles. That is what you call a complete and total keeping with both the wording and the spirit of the constitution.
An example of a validator reward mechanism that would be found in violation of the constitution, and what the foundation is trying to prevent, is if we were to start rewarding validators based on the number of POKT transactions (adding to POKT stake, removing POKT stake, etc.), as that could incentivize validators to create a lot of POKT transaction churn in order to self-aggrandize. Such a reward structure would be in violation of the phrase “being processed for developers.”
A second example of a reward mechanism that would be in violation of the constitution is rewarding validators via a fixed monthly stipend, since in that case the validators would be perfectly happy if the relays dwindled to nothing as long as they collected their monthly check. Such a reward structure would be in violation of the phrase “tied directly to the number of relays.”