PIP-22: Stake-weighted Servicer Rewards

I think I misinterpreted @luyzdeleon's response previously. I'd agree with your response here that this is not strictly necessary to implement, given the current APY, the time to auto-compound, and the 21-day lock-in it would impose. A blocker on edit stake, and the fee "burning" this would result in if the new stake does not reach the next bin, should be a big enough deterrent.

I will add dissenting opinions to the OP :slight_smile: Thanks for the suggestion, Shane.

1 Like

@Andy-Liquify The only thing I would add here is the creation of 2 new DAO parameters:

  1. ValidatorStakeFloorMultiplier, which would give the DAO the power, in any given scenario, to modify this to something other than MinStake (even if what's being proposed for the time being is to use the same number as MinStake). This introduces more flexibility to move this reward formula around in the future based on different circumstances. The formula would change to:

reward = NUM_RELAYS * RelaysToTokensMultiplier * ((FLOOR/ValidatorStakeFloorMultiplier)/(X*ValidatorStakeFloorMultiplier))

  2. ValidatorStakeWeightMultiplier, which would allow the DAO to control X in your formula, in case the opposite effect (making the reward inflationary) is the desired outcome.

Regarding the potential incentive to Edit Stake on every session creating block transaction bloat, I believe that as long as this is properly communicated, and the "tier gaps" are made big enough, it should prove a sufficient counter-incentive to avoid block bloat.

The only other question I would add to this proposal is whether or not to implement a ceiling at which consolidation stops, giving the DAO an effective parameter to control the consolidation ratio in case individual nodes are struggling to process the traffic being sent to them. For this I propose you include a 3rd parameter called ServicerStakeWeightCeiling, indicating an absolute maximum amount of POKT after which the rewards become the same. This creates a "minimum validator stake" baseline, which would create an incentive to increase the minimum stake of each validator to at least this ceiling, giving the DAO a new mechanism to secure the network alongside the rest of the economic policy being discussed in the Good Vibes and Phase Plan proposals (and any others that may come along).
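
For illustration, here is a minimal, hypothetical sketch of how the three proposed parameters could fit together in the reward calculation. It is a sketch only, not the final pocket-core implementation: it uses plain Go int64/float64 arithmetic rather than pocket-core's big-int types, the parameter values are made up, and the weight is read as the dimensionless bin count divided by ValidatorStakeWeightMultiplier:

// Hypothetical sketch, not the final pocket-core implementation.
// Amounts are in uPOKT; plain int64/float64 stand in for big-int types.
package main

import "fmt"

func reward(numRelays, relaysToTokens int64, stake, floorMult, ceiling int64, weightMult float64) float64 {
	// Floor the stake to the lowest multiple of ValidatorStakeFloorMultiplier
	floored := stake - stake%floorMult
	// Cap at ServicerStakeWeightCeiling: consolidating past it earns nothing extra
	if floored > ceiling {
		floored = ceiling
	}
	// weight = bin / ValidatorStakeWeightMultiplier, where bin = FLOOR / floorMult
	weight := float64(floored/floorMult) / weightMult
	return float64(numRelays*relaysToTokens) * weight
}

func main() {
	// Illustrative values: 15k floor, 75k ceiling, weight multiplier 5
	fmt.Println(reward(100, 1, 15000, 15000, 75000, 5)) // 20  -> weight 0.2
	fmt.Println(reward(100, 1, 75000, 15000, 75000, 5)) // 100 -> weight 1.0
	fmt.Println(reward(100, 1, 90000, 15000, 75000, 5)) // 100 -> capped at the ceiling
}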

Thank you for putting so much work and care into this proposal.

3 Likes

Thanks for the input @luyzdeleon! I will add the newly suggested params to the OP.

(For context, before I update the OP:) this is effectively controlled by X in my OP as

Min(stake.Sub(stake.Mod(MinStake)), (X * MinStake))

This caps the weight at no greater than 1.

But it makes sense to decouple these from MinStake and add the additional DAO parameters you suggested for greater flexibility. This allows the model to be both deflationary and inflationary.
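
As a worked example of the capping (illustrative numbers, assuming MinStake = 15,000 and X = 5):

stake  = 100,000
floor  = 100,000 - (100,000 mod 15,000) = 90,000
capped = Min(90,000, 5 × 15,000) = 75,000
weight = 75,000 / 75,000 = 1   (the maximum)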

2 Likes

Made changes to the OP based on @luyzdeleon's suggestions.

1 Like

@Andy-Liquify What do you think about revising the current challenge-burning mechanism to scale with the same reward formula? For reference, see the following function:

If this is not updated alongside this proposal, nodes would see disproportionately smaller consequences for potential malfunctions (by malice or negligence) impacting the network's overall quality of service.

2 Likes

@luyzdeleon I'd love to get your input on PIP-23 as far as feasibility and implementation go. It seems like it would be a simpler approach to the weighted-staking problem, but I'm sure there are things I'm not considering. Some feedback would be appreciated.

1 Like

Good suggestion; @poktblade also flagged this above, which I forgot to reply to.

Could do something like this:

// 1. Grab the validator data
validator := k.GetValidator(ctx, address)
// 2. Grab the staked amount
stake := validator.GetTokens()
// 3. Floor the staked amount to the lowest multiple of ValidatorStakeFloorMultiplier
//    or ServicerStakeWeightCeiling, whichever is smaller
flooredStake := Min(stake.Sub(stake.Mod(ValidatorStakeFloorMultiplier)), ServicerStakeWeightCeiling)
// 4. Calculate the slash weight (bin)
weight := flooredStake.Div(ValidatorStakeFloorMultiplier)
// 5. Calculate the coins to slash
coins := k.RelaysToTokensMultiplier(ctx).Mul(challenges).Mul(weight)

This will weight the coins to slash linearly depending on the bin you're in, ranging from 1 to ServicerStakeWeightCeiling/ValidatorStakeFloorMultiplier. WDYT?
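
To make the scaling concrete (illustrative numbers, assuming ValidatorStakeFloorMultiplier = 15k and ServicerStakeWeightCeiling = 75k, so bins run from 1 to 75k/15k = 5):

15k staked: flooredStake = 15k → weight 1 → slash = RelaysToTokensMultiplier × challenges × 1
45k staked: flooredStake = 45k → weight 3 → slash = RelaysToTokensMultiplier × challenges × 3
90k staked: flooredStake = Min(90k, 75k) = 75k → weight 5 → slash = RelaysToTokensMultiplier × challenges × 5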

2 Likes

I believe this to be an appropriate solution!

Thanks for the quick reply.

2 Likes

Sorry, quick question. I'm trying to make sense of this, so I wrote a JS Fiddle to try and simulate the code proposed here.

In short, I am starting with 16 nodes staked at 15k by 4 entities:
AAA BBBBBBBBB C DDD

Assuming a RelaysToTokensMultiplier of 1 for simplicity, if I run 1 million relays through these nodes, 100 relays at a time, the earnings are as you would expect:

{A: 187500,B: 562500,C: 62500,D: 187500}

If we consolidate the nodes up to 75,000, we get the following node distribution:

A(45K) B(75K) B(60K) C(15K) D(45K)

If I run 1 million relays through these consolidated nodes, 100 relays at a time, using the algorithm above, the earnings are:

{A: 120000,B: 360000, C: 40000, D: 120000}

Would you mind taking a look at the fiddle to see if I'm missing something, botched the implementation, or misunderstood the formula? We're basically missing 360,000 tokens.

Hi @iaa12, your implementation looks correct, but your understanding is off here. Like I mentioned above, weighting it from the top down using the above variable values will make it deflationary unless you have 75k staked to a node (weight = 1, therefore it is the same as before). The reason the numbers are different for B before and after is that B has gone from 9 of 16 nodes (56.25% of all relays) to just 2 of 5 (40% of all relays). Relay for relay, B in your example will be the same before and after.

ValidatorStakeWeightMultiplier can be adjusted to even out these numbers. In your example, setting it to 3.2, with a ServicerStakeWeightCeiling of 75k, will result in the same numbers before and after.

5 was selected to ensure it was deflationary from the beginning. Once we get an understanding of how nodes are weighted in the field, it can be adjusted via a PUP.

1 Like

Great, thank you for reviewing that; I figured I was missing something. A few follow-up questions, then: How was this 3.2 determined? Do we have a formula for this calculation? Am I correct in understanding that this value depends on the state/configuration of the network at a point in time, and may drift over time as the network changes (nodes consolidate, stake, unstake, etc.)? Will a monitoring and adjustment mechanism be implemented in order to maintain rewards over time?

So this comes from the average bin; in your case it is as follows:

A(45K) B(75K) B(60K) C(15K) D(45K)

A = 3
B1 = 5
B2 = 4
C = 1
D = 3

Average = (3 + 5 + 4 + 1 + 3) / 5 = 16/5 = 3.2

You could automate this, but it would add significant bloat, as you would have to keep track of the bin values for all nodes and update the average on each stake/edit of a validator. Hence why I suggested having the weight controlled by DAO params (which removes a lot of the complexity) that can be adjusted based on the distribution.
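
For reference, here is a rough off-chain sketch (a hypothetical helper, not part of pocket-core) of how the average bin could be computed from a stake snapshot when preparing such a parameter update:

// Hypothetical off-chain helper: compute the mean bin across a snapshot of
// validator stakes (amounts in uPOKT), to guide setting
// ValidatorStakeWeightMultiplier via a PUP.
package main

import "fmt"

func averageBin(stakes []int64, floorMult, ceiling int64) float64 {
	total := int64(0)
	for _, s := range stakes {
		// Floor to the ValidatorStakeFloorMultiplier, cap at the ceiling
		floored := s - s%floorMult
		if floored > ceiling {
			floored = ceiling
		}
		total += floored / floorMult // this node's bin
	}
	return float64(total) / float64(len(stakes))
}

func main() {
	// The example above: A(45K) B(75K) B(60K) C(15K) D(45K)
	stakes := []int64{45000, 75000, 60000, 15000, 45000}
	fmt.Println(averageBin(stakes, 15000, 75000)) // prints 3.2
}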

Can you please indicate whether this would impact rewards for those node runners who've spent considerable effort and cost building optimised, high-speed nodes?

For example, would a node runner staking 30k on one node that has very high latency receive 2 times the reward of a node staking 15k that has very low latency?

i.e. are we throwing the concept of rewarding performant nodes out of the window in favour of well capitalised nodes?

Session selection would be unchanged, meaning nodes will enter the same number of sessions as they did before, regardless of stake size (actually, they would enter more, since there'd be fewer nodes after consolidation happens).

Once nodes are in sessions, they’d still be selected proportionally based on latency per the cherrypicker algorithm, which is again independent of stake size.

What this change does is apply a multiplier to the rewards that nodes are earning. Performant nodes with less stake would be earning proportionally fewer tokens per relay; however, they'd also be doing proportionally more relays, since with whales consolidating nodes there'll be fewer nodes to compete with for session selection.
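
To put rough numbers on that (my own back-of-envelope arithmetic using the 16-node example upthread, with a 15k floor, 75k ceiling, and weight multiplier of 5):

Sessions:  16 nodes consolidate to 5, so each remaining node enters roughly 16/5 = 3.2× more sessions
Per relay: a 15k node earns 1/5 of the unweighted reward; a 75k node earns the full reward
Net:       a 15k node earns about 3.2 × 1/5 = 0.64× what it earned before
           a 75k node earns about 3.2 × 1 = 3.2× one old node, i.e. 0.64× the 5 nodes it replaced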

3 Likes

Thanks for answering so quickly. It is clear, then, that staked balance aside, the optimum strategy would be to increase the latency of my nodes if it means I can increase the number of chains I can connect to my Pocket node.

I’ll get paid just as much, whether my latency is 1000ms or 10ms.

It appears that if this proposal is implemented:

  • Performance of the entire network is likely to degrade considerably, and the Pocket Network will become known as a low-speed service
  • Node runners who've invested in high-powered, well-optimised systems will have wasted that time and expense, and would have done better to set up infra on the lowest spec possible

Your understanding is wrong here; like @JackALaing mentioned, this proposal has no effect on the cherry picker or session selection! So faster nodes will still be given more relays (nothing has changed there); just the reward for each relay scales depending on the amount staked.

1 Like

I’ve little interest in counting how many relays my nodes are doing if I’m not being paid for it.

I'd be overjoyed to be told that my understanding is wrong, and my nodes will continue to be rewarded - but it appears that the only nodes being rewarded are those with the highest amount of staked POKT.

I'd suggest that additional thought be applied to ensure we continue to incentivise a high-speed, performant, decentralised network, as considerable reputational risk is at stake if we pivot to a network that simply rewards well-capitalised node runners.

I am not against stake weighting as a principle, as long as it is balanced with the need to incentivise performant and decentralised nodes.

A solution could be to measure the number of relays supplied and use this in the rewards mechanism, so that 50% of the reward distribution is based on performance and 50% is based on stake weighting. To move from the as-is (100% based on performance, 0% consideration of staked balance) to the complete opposite (0% consideration of work done, 100% consideration of stake) is inappropriate and a knee-jerk reaction in the extreme.

You will still get paid for them; I'm not sure what is making you think otherwise.

Your understanding is still off here. Like I mentioned, faster nodes will still be prioritized by the cherry picker; nothing has changed there. A node which is faster will still get more relays; the only difference is that a weight is applied based on stake, which will vary the reward for each relay served.

Thank you, I'd focussed on the detail of how to calculate the weighting based on stake, and completely missed the proposed algo:

reward = NUM_RELAYS * RelaysToTokensMultiplier * ((FLOOR/ValidatorStakeFloorMultiplier)/( ValidatorStakeWeightMultiplier*ValidatorStakeFloorMultiplier))

Thank you for patiently explaining this. I was concerned that whilst sessions would still be run by the fastest of the pseudo-randomly selected nodes, the rewards would be completely agnostic of the relays performed and based purely on stake. But I can see they are not completely agnostic of that, and continue to consider this in the rewards (albeit alongside stake).

I withdraw my concerns and support this proposal.

1 Like

First, I'm sure the implementation will be fine, but just pointing out that the math in the actual (updated) description is off, in that it has a square of ValidatorStakeFloorMultiplier in the denominator, whereas it should be inverse-linear in that term. It should be: reward = NUM_RELAYS * RelaysToTokensMultiplier * ((FLOOR/ValidatorStakeFloorMultiplier)/ValidatorStakeWeightMultiplier)
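
A quick units check (my own, for illustration) shows why. FLOOR and ValidatorStakeFloorMultiplier are both amounts of POKT, so:

FLOOR / ValidatorStakeFloorMultiplier
    → dimensionless bin count
bin / ValidatorStakeWeightMultiplier
    → dimensionless weight (the intended form)
bin / (ValidatorStakeWeightMultiplier * ValidatorStakeFloorMultiplier)
    → carries units of 1/POKT, scaling rewards down by an extra factor of the floor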

Second, I see the suggestion of a DAO-controlled parameter, ServicerStakeWeightCeiling, but this value can be derived from other parameters and thus does not need to be separately controlled (or named), or am I missing something?

Third, just pointing out the obvious: this proposal in its modified form is almost indistinguishable, in terms of network and tokenomic effects, from simply raising the min stake to 75k. The vast, vast majority of node runners will consolidate to 75k nodes. The only added benefit of this proposal vs. a 75k min stake is to "allow" smaller players to run a node with less than 75k POKT while stripping almost all incentive to do so. Is that what people are wanting to vote to do???

[remainder deleted]…

3 Likes