Seconded. This whole thread has become PhD-level number theory.
I'd be glad to share the spreadsheet but it won't let me upload here. Other options?
Otherwise it's super easy to put in a spreadsheet yourself. Something like the following:
|    | A | B | C | D | E | F | G |
|----|---|---|---|---|---|---|---|
| 1  | LINEAR WEIGHTING with CAP | | | | | | |
| 2  | variable | value | comment | | n | R | P(n) |
| 3  | R0 | 40 | current avg reward | | 1 | =$B$3*$B$5*$B$6*MIN($E3,$B$7) | =$B$8/$E3*($F3-$B$4) |
| 4  | C | 40 | daily infra cost in units of POKT | | 1.1 | =$B$3*$B$5*$B$6*MIN($E4,$B$7) | =$B$8/$E4*($F4-$B$4) |
| 5  | A | 0.2 | DAO parameter to neutralize inflation effects of consolidation | | 1.2 | etc | etc |
| 6  | B | 5 | group-behavior inflationary system response to fewer nodes | | 1.3 | etc | etc |
| 7  | m | 5 | max # of 15k units that can be staked to a node | | 1.4 | etc | etc |
| 8  | k | 100 | arbitrary number of 15k quanta to be deployed; makes no difference | | 1.5 | etc | etc |
| 9  | | | | | 1.6 | etc | etc |
| 10 | P(n) | (output) | total pay a node runner receives in POKT across all nodes after selling off enough POKT to cover infra costs | | 1.7 | etc | etc |
| 11 | | | | | 1.8 | etc | etc |
| 12 | | | | | 1.9 | etc | etc |
| 13 | NOTE: this output ignores any binning/quantizing effects | | | | 2 | etc | etc |
| 14 | | | | | 2.1 | etc | etc |
The above would be for linear capped weighting. To do sqrt weighting instead, cell F3 would become "=$B$3*$B$5*$B$6*SQRT($E3)", etc.
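For anyone who would rather script the same model than rebuild the spreadsheet, here is a rough Go sketch of the cells above (my own restatement; the constants and formulas simply mirror the table and are not part of the proposal):

```go
package main

import (
	"fmt"
	"math"
)

// Constants mirror column B of the table above.
const (
	r0 = 40.0  // R0: current avg reward per node (POKT/day)
	c  = 40.0  // C: daily infra cost per node, in POKT
	a  = 0.2   // A: DAO parameter to neutralize inflation effects of consolidation
	b  = 5.0   // B: group-behavior inflationary system response to fewer nodes
	m  = 5.0   // m: max number of 15k units that can be staked to a node
	k  = 100.0 // k: arbitrary number of 15k quanta deployed; cancels out of comparisons
)

// rewardLinearCapped mirrors cell F3: =$B$3*$B$5*$B$6*MIN($E3,$B$7)
func rewardLinearCapped(n float64) float64 { return r0 * a * b * math.Min(n, m) }

// rewardSqrt mirrors the sqrt-weighting variant of F3: =$B$3*$B$5*$B$6*SQRT($E3)
func rewardSqrt(n float64) float64 { return r0 * a * b * math.Sqrt(n) }

// totalPay mirrors cell G3: =$B$8/$E3*($F3-$B$4),
// i.e. k/n nodes, each netting R(n) minus the infra cost C.
func totalPay(n, r float64) float64 { return k / n * (r - c) }

func main() {
	for n := 1.0; n <= 5.0; n += 0.5 {
		fmt.Printf("n=%.1f  linear-capped P(n)=%8.1f   sqrt P(n)=%8.1f\n",
			n, totalPay(n, rewardLinearCapped(n)), totalPay(n, rewardSqrt(n)))
	}
}
```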
I was redirected to skynet… I added the file there
https://skynetfree.net/JACS6VzGyvJemAeePhHqR8zMPole1_AaxSQr2fhJ_vJttQ
While I will certainly defer the maths to the PhD gigabrains, I'd add to the logic:
-
Working on an optimization curve brings an issue of forward assumptions about the optimal amount of POKT to place on each node. I'd bring up the issue of unstaking/restaking and the associated 21 days. If the optimal number of POKT to stake is continuously changing, there is an issue around quickly responding to these changing parameters with the 21-day unstake as is. It would lead to 2nd- and 3rd-order forward guesses about future pricing and unhappy stakers as they have to unstake if wrong.
-
I have less of an issue about spinning up new nodes. One of the fastest periods of growth was in the early part of the year, when prices were up to 20x current prices. $45k nodes seemed not to be a deterrent to spinning up new nodes, and there are multiple pool services that afford an entry into POKT for an investor of any budget.
-
I do not see stake weighting or increased POKT per node as particularly unfair to smaller stakers. One of the reasons we got to the current situation is that when POKT was offering rewards of 180 POKT/day @ $3, no one cared about supply-side optimization of infrastructure. People were happy to pay $300/node/month with higher prices. Therefore, in the higher-price environment that this proposal should bring, although larger stakers (75k POKT) will indeed benefit slightly more in reduced infra fees overall, smaller stakers are not unduly encumbered by a slightly higher transfer cost. It's akin to when I transfer $100 of USDT to USDC on Binance: I pay a higher percentage fee than if I were a VIP customer transferring $1,000,000.
-
Understood about the optimal operating point changing over time, but I don't think it's a big concern because (a) they're at least back squarely to being profitable, and optimizing over a 30% vs 35% ROI is vastly different from treading the line of becoming unprofitable, and (b) once I start consolidating, that part of the curve is not very steep, meaning that if the "sweet spot" sits at 60k POKT/node, I get pretty close to the same bang for the buck if I stake 45k or 75k or 90k. So I don't see people fretting or trying to change staked amounts every month… just around big macro trends.
-
Point taken about pooling services and people shelling out $45k for a node. However, I am looking out for the future health of POKT. I would note that (a) there was a huge amount of liquidity in 2021 from cashing in on 20x gains in other tokens that provided a lot of this seed money, and when POKT next faces the need to grow, and to grow fast in response to app demand, it is very likely to be in a much less "easy money" environment; and (b) consolidating the node architecture of POKT into the hands of a few big service providers run by a few big pools is antithetical to the decentralized, diversified node deployment POKT ultimately needs.
-
Increasing POKT/node is not unfair. My original point was that if you are going to embed an equation into the code that de facto forces everyone's hand to stake 75k, then cut the obfuscation and simply vote to raise the minimum stake to 75k. At least then the smaller players who can't consolidate will get out of the way immediately instead of enduring a month of daily POKT dropping to 10 or so (due to ValidatorStakeWeightMultiplier being set to an unrealistically high value) before quitting due to gross unprofitability.
Here's the bottom line (hopefully non-mathematical) of what I am trying to communicate:
In the current proposal there is all gain and no pain to consolidating as much as possible, no matter the circumstance. No amount of parameter tweaking can induce someone to stake less than the maximum they can. The node runner paying $5 or $6/day to a service provider and the tech-savvy runner who owns, maintains and optimizes their own equipment and pays only 60 cents/day in electricity are both alike pushed toward consolidating. With non-linear weighting (that is, when the reward multiplier is less than the amount consolidated), the gain achieved by reducing infra costs strikes a natural balance with the pain of the daily rewards not being as high as if I didn't consolidate. So now a node runner with 60k POKT who uses a service provider will naturally consolidate onto a single node. The 160 POKT he used to get awarded daily drops to 80 after consolidating (due to the sqrt weighting), but he is happy because he only has to sell off half instead of all of his award to pay the daily service-provider fee, and he remains squarely profitable. The tech runner with 60k POKT, on the other hand, keeps all 4 of his machines running, pulling in 160 POKT per day, since he only has to sell off 20 POKT per day to cover his electricity cost.
I am not advocating to abandon PIP-22. I am advocating to add a DAO-controlled parameter ValidatorStakeFloorMultiplierExponent with valid range (0,1] and initial value 0.5 as follows:
(flooredStake/ValidatorStakeFloorMultiplier)^ValidatorStakeFloorMultiplierExponent
Want to start out with linear weighting? Simply set the value of ValidatorStakeFloorMultiplierExponent to 1; at least the knob is there for when the DAO needs it, if and when there is a need to turn on lots of nodes as quickly as possible or to encourage current consolidators to separate back out into smaller-staked nodes.
On a completely separate note, I strongly recommend setting the day-one value of ValidatorStakeWeightMultiplier to 1, not to 5 or 4.5 or whatever is being proposed. Then tweak this parameter over the first month in RESPONSE to the unbonding that takes place, not in ANTICIPATION of the unbonding. This provides a more seamless transition period than setting it immediately to the anticipated final value.
Why? If you get this parameter wrong on the low side (day-one value of 1), the worst that can happen is a momentary spike in rewards as you play catch-up with the unbonding. So let's say that on day one half of all nodes unbond as they scramble to implement consolidation, and this increases over the next few days until 80% of all current nodes have unbonded. The first day following, avg rewards/node jumps to 80, so the DAO adjusts ValidatorStakeWeightMultiplier to 2.0. A few days of 80 or 60 or 55 POKT/node is not enough to cause a big inflationary event that puts downward price pressure on POKT.
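To put a number on that adjustment (my own back-of-envelope restatement of the scenario above, not part of the proposal), the DAO would roughly scale the parameter by the ratio of observed to target average reward:

$$\text{ValidatorStakeWeightMultiplier}_{\text{new}} \approx \text{ValidatorStakeWeightMultiplier}_{\text{old}} \times \frac{\text{observed avg reward/node}}{\text{target avg reward/node}} = 1 \times \frac{80}{40} = 2.0$$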
On the other hand, setting the day-one value to 5 or 4.6 or whatever in anticipation of how many nodes will ultimately unbond could trigger small node runners to unnecessarily quit their nodes, especially if unbonding by the bigger players takes longer than expected, as they see their reward immediately plunge to sub-10 POKT/day and give up their node for good without sticking around for rewards to recover back toward 40/day.
Besides, I am not sure this parameter is even needed. Any network effects of unbonding are already fed into the feedback loop of RelaysToTokensMultiplier, which will adjust to the new reality of PIP-22 to keep total rewards in line. The only real value of this parameter is to try to smooth out the effects of a massive unbonding event during the first month, since the already existing feedback loop operates on a month-or-longer time scale.
Thanks @Andy-Liquify for kickstarting this thread, and thanks for everyone else's input, including @msa6867's on this proposal. It's really impressive, although I need to spend more time reviewing the math before giving my firm view either way. However, I'm largely in favour of weighted staking as the most viable approach to increasing network security, and I'm excited to see this be implemented.
In the meantime, I would like to challenge one small, but consequential assertion:
Pocket is massively overprovisioned in terms of its node capacity, hence the push for lowering the cost of the network via the light client work, etc. My understanding is that about 1,000 nodes could easily handle 1-2B relays per day. As relays start to ramp up further, node runners may have to spin up more full nodes per blockchain that they service, but they shouldn't need to scale up any additional Pocket nodes to do so. And the economics shouldn't incentivise this either. So this point is irrelevant to this conversation thread IMO, at least from a systems perspective, as opposed to a socio-political question around the cost to run a Pocket node in the future, and who gets to access yield from such.
Thanks @Dermot. There's something in the nuance of what you are saying here that seems important but I'm not grasping yet. Could you explain the difference between "spin up more full nodes per blockchain" and "scale up additional Pocket nodes"? Thanks!
Hi all,
I've just pushed an update to the proposal as it's being co-opted by the core team.
Kudos to @Andy-Liquify for a stroke of genius that looks to be making stake-weighted servicing an easy solution.
Kudos also to @msa6867 for the extremely insightful analysis (are you an economist by chance?). I've added the exponent parameter at your recommendation. I hope to see your input on the companion proposal that will be needed to specify the values of these new parameters.
@msa6867 As each Pocket node is merely a front to increase the chance of selection to do work, spinning up new pocket nodes is not the blocker when relays start to ramp up. Instead, it is ensuring that your independent blockchain nodes are not getting overloaded (eg spin up more), as well as other more technical points about node running that are outside of my expertise.
For example, my understanding is that 100s of pocket nodes - and potentially much more - send work to only 1 node for each blockchain they support, eg 1 for ETH, Harmony, Polygon, Gnosis, etc. (My numbers are definitely off here).
While this isn't the most urgent priority, maybe the likes of @luyzdeleon @Andrew @BenVan or @Andy-Liquify could provide some colour on the current ratio of full blockchain nodes to Pocket nodes? There have been lots of comments across the forum about the cost to run Pocket Network from a systems perspective, but no specific data points on how the incentives to scale nodes horizontally create unnecessary waste without increasing - at least on anything close to a proportional basis - the Pocket network's overall capacity to service relays.
I'm a physicist and former aerospace engineer, but economics is where I mainly apply that training these days. Is the companion proposal in the works already, or does that come later after this passes? While my preferred exponent value is 0.5, I think it may be in the best interest of the community to set it to 1 on day one (keep it at linear weighting) since that is the most understood behavior. I would like to see nonlinear weighting get greater socialization and buy-in from the community before being executed.
I'm working my way through the 20/6/22 updates and will reply here with comments as I go:
Not that it matters too much (since how it actually gets coded is what matters), but this formula is still wrong: it still has one too many factors of ValidatorStakeFloorMultiplier, and the exponent is in the wrong place (it should not get applied to ValidatorStakeWeightMultiplier). I suggest that what is meant is:
reward = NUM_RELAYS * RelaysToTokensMultiplier * (FLOOR/ValidatorStakeFloorMultiplier)^(ValidatorStakeFloorMultiplierExponent)
/( ValidatorStakeWeightMultiplier)
Second, for values of ValidatorStakeFloorMultiplierExponent close to or equal to 1, a cap on weighting rewards will still be desirable. I see the cap has been removed from the reward calculation and added instead to "Validate Edit Stake". This, I believe, is a mistake for all sorts of reasons, which I will enumerate below. Validator behavior can be wholly controlled through the actual reward calculation:
reward = NUM_RELAYS * RelaysToTokensMultiplier * weight
where weight =
min(((FLOOR - (FLOOR % ValidatorStakeFloorMultiplier))/ValidatorStakeFloorMultiplier), MaxValidatorStakeFloorMultiplier)^(ValidatorStakeFloorMultiplierExponent)
/( ValidatorStakeWeightMultiplier)
This quantizes the weight, so no one has incentive to add tokens until they can reach the next level. This also caps the weight so no one has incentive to add more than the maximum that gives them a larger weight.
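To illustrate the quantization with concrete (made-up) numbers, taking ValidatorStakeFloorMultiplier = 15,000, MaxValidatorStakeFloorMultiplier = 5 and an exponent of 0.5:

$$\left\lfloor \tfrac{44{,}000}{15{,}000} \right\rfloor = 2 \;\Rightarrow\; \text{weight} \propto \min(2,5)^{0.5} \approx 1.41, \qquad \left\lfloor \tfrac{45{,}000}{15{,}000} \right\rfloor = 3 \;\Rightarrow\; \text{weight} \propto \min(3,5)^{0.5} \approx 1.73$$

so tokens added anywhere between 45,000 and 59,999 buy nothing extra, and nothing above 75,000 (5 bins) increases the weight either.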
This approach keeps it simple. Here are a couple of problems with the currently proposed implementation.
(1) You have to worry about setting ServicerStakeWeightCeiling correctly to accommodate differing slashing-reserve needs. Someone sitting at 15.3k who wants to achieve a max 5x weighting and take care of potential slashing events that likewise scale up 5x may want to add 61.2k tokens for a total of 76.5k. So you wouldn't want to set this parameter to 75k, for example.
(2) Using this approach has the unintended consequence of preventing someone from adding tokens who needs to add tokens to maintain prudent slashing reserves after experiencing some slashing events. E.g., if I've gotten slashed down to 15,030 and I try adding 200 tokens to avoid falling below 15k in a further slashing event, I will get rejected. Bottom line: keep it as simple as possible and change the least amount of code in order to avoid introducing unintended consequences.
This seems like unnecessary state complexity. If one wants linear weighting, simply set ValidatorStakeFloorMultiplierExponent to 1, and if one wants nonlinear weighting, choose a value less than 1.
This also seems like introducing unnecessary state. If one doesn't want weighting, simply set MaxValidatorStakeFloorMultiplier, ValidatorStakeFloorMultiplierExponent and ValidatorStakeWeightMultiplier equal to 1. If one wants weighting, set MaxValidatorStakeFloorMultiplier greater than 1.
Then regarding the Burn challenge: whether linear or nonlinear, whether max weight = 1 or greater than 1, pulling the "weight" value to calculate the correct burn multiplier should be automatic, without need for conditional statements based on state.
Simply add the weight multiplier to the burn challenge without the need for state. When MaxValidatorStakeFloorMultiplier is set to 1, weight will always take on the value 1, so this section of code reduces to the existing burn calculation (burnedCoins = RelaysToTokensMultiplier * totalChallenges).
The one possible utility it has is if one is worried about coding the weighted reward wrong and wants a quick switch to throw until the coding gets fixed. But I would argue that adding state and conditional statements into the code introduces more room for bugs and unintended consequences than adding one parameter and one equation to set that parameter.
To summarize my suggested implementation:
weight = CALCULATE_SCALING(flooredStake, MaxValidatorStakeFloorMultiplier, ValidatorStakeFloorMultiplier, ValidatorStakeWeightMultiplier, ValidatorStakeFloorMultiplierExponent)
implemented using the pseudocode given above
Proof lifecycle validation (rewards calculation):
coins = RelaysToTokensMultiplier * totalRelays * weight
Burn Challenge
burnedCoins = RelaysToTokensMultiplier * totalChallenges * weight
Validate Edit Stake
no change to existing code
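For concreteness, here is a minimal Go sketch of what CALCULATE_SCALING could look like under this summary (the parameter names follow the proposal, but the types and body are illustrative only, not the actual pocket-core implementation):

```go
package pip22

import "math"

// CalculateScaling sketches the weight computation summarized above:
// quantize the floored stake into ValidatorStakeFloorMultiplier-sized bins,
// cap the bin count at MaxValidatorStakeFloorMultiplier, apply the exponent,
// and divide by ValidatorStakeWeightMultiplier.
func CalculateScaling(
	flooredStake int64,
	maxFloorMultiplier int64, // MaxValidatorStakeFloorMultiplier
	floorMultiplier int64, // ValidatorStakeFloorMultiplier (e.g. 15k POKT)
	weightMultiplier float64, // ValidatorStakeWeightMultiplier
	exponent float64, // ValidatorStakeFloorMultiplierExponent, valid range (0, 1]
) float64 {
	// Integer division is equivalent to (FLOOR - (FLOOR % multiplier)) / multiplier.
	bins := flooredStake / floorMultiplier
	if bins > maxFloorMultiplier {
		bins = maxFloorMultiplier // no extra weight above the cap
	}
	return math.Pow(float64(bins), exponent) / weightMultiplier
}

// Rewards and burns would then both scale by the same weight:
//   coins       = RelaysToTokensMultiplier * totalRelays     * weight
//   burnedCoins = RelaysToTokensMultiplier * totalChallenges * weight
```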
Am I inputting something wrong?
Shouldn't it be
reward = NUM_RELAYS * RelaysToTokensMultiplier * ((FLOOR/ValidatorStakeFloorMultiplier)/( ValidatorStakeWeightMultiplier))^(ValidatorStakeFloorMultiplierExponent)
Remove ValidatorStakeFloorMultiplier from the denominator
Before
After
The main thing is to not divide by 15,000 two separate times; otherwise you'll get a weighting factor of order 0.0001 rather than one in the range 1 to 5 (or whatever max weight is desired).
Since ValidatorStakeWeightMultiplier is a DAO-controlled parameter that only occurs in this one location of the code, it is an implementation choice whether to place it inside or outside the exponent. Just realize that the choice of implementation will affect the value that is chosen. For example, if the exponent is 0.5 to do sqrt weighting and ValidatorStakeWeightMultiplier were implemented outside the exponent, then setting ValidatorStakeWeightMultiplier to 4 would make sense to keep top-bin rewards unchanged in the most likely consolidation scenario, but it would be set to 16 to get the same behavior if implemented inside the exponent (since taking the sqrt of 16 gets back down to 4). Either way is fine, but I think placing it outside the exponent makes more intuitive sense and keeps this coefficient acting as a linear (or inverse-linear) knob even if the weighting itself is nonlinear.
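In symbols, with an exponent of 0.5 and ValidatorStakeFloorMultiplier = 15,000, the two placements give identical rewards when the multiplier is 4 outside the exponent versus 16 inside it:

$$\frac{(\text{FLOOR}/15{,}000)^{0.5}}{4} \;=\; \left(\frac{\text{FLOOR}/15{,}000}{16}\right)^{0.5}$$

since the square root of 16 is 4.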
The total inflation of the network under this proposal over a certain time frame can be described with the following summation. (P.S. I am including it for concreteness and clarity, to minimize confusion, and because of msa's precedent.)
Where each term in the summation describes the rewards of a single node in the network denoted by a.
Since RelaysToTokensMultiplier and ServicerStakeWeightCeiling are constants, by summation rules we can factor them out of the sum.
On larger time frames, the relays performed by a node per session become equal across the network, so for this example we can also factor Relays out of the sum.
Because of the properties of a summation, and substituting the relations above, we have:
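(The equation images from the original post are not reproduced here; the following is a rough reconstruction of the chain, assuming each node's reward is weighted by its stake floor divided by ServicerStakeWeightCeiling and ignoring the exponent, as noted below.)

$$\text{TotalRewards} \;=\; \sum_{a} \text{Relays}_a \cdot \text{RelaysToTokensMultiplier} \cdot \frac{\text{StakeFloor}_a}{\text{ServicerStakeWeightCeiling}} \;\approx\; \frac{\text{RelaysToTokensMultiplier}\cdot\text{Relays}}{\text{ServicerStakeWeightCeiling}} \sum_{a} \text{StakeFloor}_a$$

With no consolidation, every StakeFloor_a stays at 15,000, so total rewards shrink by the factor 15,000/ServicerStakeWeightCeiling relative to today; only full consolidation (StakeFloor_a = ServicerStakeWeightCeiling for every node) restores the current level.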
This has some serious implications. Firstly, it means that in order for the network's inflation to remain the same, the network would have to fully consolidate into nodes that meet the weight ceiling. (I need to integrate the exponentiation, which I will do, but I just wanted to get this out there. The effect still stands, but to a lesser degree.) If PIP-22 were implemented, the network rewards and inflation would be reduced by 15000/ServicerStakeWeightCeiling. This would make node running unprofitable for many, including small node runners.
MSA alluded to this as well.
Furthermore, a node's rewards become dependent on the consolidation of the rest of the nodes on the network.
The probability of being selected into a session under the current session selection algorithm is as follows
If PIP-22 were to pass and the amount of pokt staked on the network were to remain the same, averageStakeFloor could describe the number of nodes on the network.
Substituting this back into the session probability, we have
So, the rewards of a node for a time frame is
Substituting P(Session),
This describes the total rewards of a node, which is not a great representation of this proposal since what ultimately matters is rewards per pokt staked. So, to glean this value, we can divide NodeReward by StakeFloor
Doing this, we have
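(Again, the equation images are not reproduced here; roughly, assuming each session selects a fixed number of nodes uniformly at random, the quantities referenced above are:)

$$P(\text{Session}) \;\propto\; \frac{1}{\text{NumNodes}} \;=\; \frac{\text{AverageStakeFloor}}{\text{TotalStaked}}$$

$$\text{NodeReward} \;\propto\; P(\text{Session}) \cdot \text{Relays} \cdot \text{RelaysToTokensMultiplier} \cdot \frac{\text{StakeFloor}}{\text{ServicerStakeWeightCeiling}}$$

$$\frac{\text{NodeReward}}{\text{StakeFloor}} \;\propto\; \frac{\text{AverageStakeFloor} \cdot \text{Relays} \cdot \text{RelaysToTokensMultiplier}}{\text{TotalStaked} \cdot \text{ServicerStakeWeightCeiling}}$$

i.e., rewards per POKT staked depend on the network-wide AverageStakeFloor (everyone else's consolidation) rather than on a runner's own stake.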
The probability of session selection would increase if the network were to consolidate and the AverageStakeFloor were to increase. Relays per session would increase as well.
This equation indicates that the rewards of a node are dependent on probability of session selection, which is dependent on the amount of consolidation of the rest of the network. And light-client, which makes the additional costs of more servicers marginal and removes the need for an unstake, significantly diminishes this incentive to consolidate. The network as a whole has an incentive to consolidate since this would increase the AverageStakeFloor and thus the rewards; however, individual node runners using light client don't have an incentive to consolidate because it would have a marginal effect on AverageStakeFloor.
So in summary, PIP-22 would lead to an unsustainable reduction in rewards instantaneously, introduce an inability to return to previous reward levels, and create a dependency on other node runners to consolidate to return to previous reward levels (who might be using light client and will not have an incentive to consolidate). MSA describes this in his post on this proposal where C is near zero.
So, in order to remedy the aforementioned issues:
- You could gradually increase the ServicerStakeWeightCeiling parameter; however, this wouldn't make a ton of sense as it would require multiple consolidations.
- You could increase the validator incentive to encourage light client users to consolidate, but this would require an additional GOODVIBES-esque discussion
- Modify the poktperrelay parameter in accordance with the averagestakefloor (I need to think more about this)
There is a notion that this proposal achieves the same thing as vanilla stake weighting, but the externalities above indicate that this is not the case.
I think that we should implement stake weighting as originally proposed in PIP-23 (with some small modifications). Stake weighting preserves rewards and ensures fairness across the whole network, bolsters the validator set tremendously, and doesn't introduce reward reductions, complexity, and uncertainty. If we are already going to undergo a consensus change for PIP-22, we should do it the right way and implement vanilla stake weighting.
Here's the google doc link if you'd like to have better views of the equations.
Thanks @Addison. I will work through the reply and accompanying doc. At first blush I do not see how aggregate rewards drop with PIP-22 if ValidatorStakeWeightMultiplier is initially set to 1.0 and is only adjusted upward during the first month in response to aggregate consolidation effects.
Agreed that weighting the probability of being selected as the servicer of a relay has much going in its favor as opposed to weighting the reward per relay. It is theoretically cleaner and has the main benefit of decoupling an individual's decision to consolidate or not from the behavior of other validators. However, it would also be a fundamental shift in the direction and future of Pocket, and according to many of the responders it would be more difficult to implement. Thus, for speed of action to bring immediate relief to validators trying to survive the current bear market, and to do so without changing the democratic underpinnings of servicer selection, I favor passing PIP-22.
Forcing a graduated consolidation is not a bad idea. Is the network really so fragile that multiple episodes of consolidation would strain it? E.g., the DAO could publish the change of PIP-22 and indicate that on day one MaxValidatorStakeFloorMultiplier will be set to 2, and each month it will be increased by at most +1, subject to DAO review of how the network as a whole and economic conditions across various classes of validators have responded to the consolidation up to that point. That being said, I think we are also fine immediately setting it to 5. But let me review what you've written…
Thanks, looking forward to your feedback.
It's possible to do this, but it would be extremely slow and require many consolidations. Say you have 10 nodes and 150,500 POKT. If the parameter was set to 1.25, you could unstake your nodes and restake onto 8 nodes. In the meantime, the network's rewards would be reduced by 25%, which is still significant. Then, after everyone consolidates to bring the rewards back to equilibrium, should the parameter be slightly raised again, and then have everyone reconsolidate again? I'm not sure there is an elegant way to do this.
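To check the arithmetic on that example: a 1.25x ceiling means at most 1.25 × 15,000 = 18,750 POKT of weighted stake per node, and 8 × 18,750 = 150,000 ≤ 150,500, so the POKT now spread across 10 nodes just fits on 8 nodes near the cap (with roughly 500 POKT earning no extra weight). Each later bump of the ceiling would then call for another unstake/restake round.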
Why do you think it causes a shift? People definitely think it's difficult to implement, but I believe that it is very doable.
This would require everyone to unstake repeatedly though.
Sounds good, looking forward to it. Bit nervous. Also, are you on Discord/TG? I'm Addi#0007 and @thunderheadotc.
I have not joined the POKT Discord room or TG group yet but need to… I'm fairly new to POKT and got wind from the poktpool Discord that PIP-22 looked to be coming to a vote, so I made my way here to take a look.
Umm, mixing up parameters here, I think? Remember the discussions from earlier… with linear weighting (which I think we should go with on day 1 until we study the behavior of nonlinear weighting better), I am incentivized to consolidate to the maximum amount I can, limited only by my individual resources and MaxValidatorStakeFloorMultiplier. In this scenario ValidatorStakeWeightMultiplier does not influence validator behavior: whether it is set to 1 or 1.25 or 2 or 4 or 5, I am most profitable if I immediately consolidate to 5x or 8x or whatever max I am allowed. Setting it to 1 means the ones who can't consolidate won't see a drop in rewards, while those who can consolidate will see a boost in rewards. Setting it higher will proportionally dial down rewards for everyone - both big and small - to keep the transition period from being too inflationary.
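Restating that as a formula (my shorthand, with the exponent at 1):

$$\text{reward} \;\propto\; \frac{\min(\text{bins},\ \text{MaxValidatorStakeFloorMultiplier})}{\text{ValidatorStakeWeightMultiplier}}$$

ValidatorStakeWeightMultiplier divides every node's reward by the same constant, so it moves the overall level but never the optimum: consolidating up to the cap is most profitable regardless of its value.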
Agreed. I'm not recommending it. Just saying it's possible if somehow there was big concern over immediately setting the max to a value like 5 or 8.
Think for example of the debate going on in the ETH community on the same subject regarding the shift to PoS. By tying the probability of being selected as validator to the amount staked, you risk concentrating validator decision-making in the hands of too few big players, which opens a network up to security issues related to the big validators colluding with each other to self-aggrandize at the expense of others. That is why I call it a fundamental philosophy/policy shift from pure-random selection.