PIP-22: Stake-weighted Servicer Rewards

Ok, I’ve been dreaming about this (sad, I know), but I think getting it working the way I originally intended is going to be difficult: it requires you to set a saddle point in the middle and then have the bins left and right of that saddle point complement each other out to result in net 0. This is totally achievable, but it adds state bloat and will require smaller bins to get the granularity needed.

What did come to me, though, and may have merit, would be to weight them as weight/numBins, effectively setting the max bin to be the same as it is currently (RelaysToTokensMultiplier). This will result in it always being deflationary from where we are now. If anything it is effectively like upping the min stake, but without completely penalising smaller runners, while allowing people to consolidate nodes (again with only a few lines of code changing). It will also still complement Shane’s PUP17 by increasing the validator threshold above X faster.
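In other words (illustrative notation only, not final parameter names), the reward for a node sitting in bin i out of numBins would be:

$$\text{reward} = \text{relays} \times \text{RelaysToTokensMultiplier} \times \frac{i}{\text{numBins}}, \qquad i \in \{1, \dots, \text{numBins}\}$$

so a node in the top bin earns exactly what it does today, and every other bin earns strictly less.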

I’m dropping my proposal in favor of this approach. I think the complexities are easier to work out on this one.

I have made some alterations to the OP to match my comments from yesterday. I have changed the weighting to be against the top bin, decreasing linearly down to 1/X for the bottom bin. This solves the issues spotted by @addison (and ensures it is the same as or deflationary from where we are now, as opposed to inflationary with the previous approach), whilst still preserving the “simpleness”. Like I mentioned, it is effectively now like increasing the minimum stake, but without completely shutting down smaller runners.
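To make the linear weighting concrete (using, purely for illustration, a 15k floor and X = 5, the values that come up later in this thread), the bins would weight as:

$$w(15\text{k}) = \tfrac{1}{5},\quad w(30\text{k}) = \tfrac{2}{5},\quad w(45\text{k}) = \tfrac{3}{5},\quad w(60\text{k}) = \tfrac{4}{5},\quad w(75\text{k}) = 1$$

so the top bin keeps today’s reward per relay and everything below it is scaled down.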

This seems like a great approach: a very small code change that will have the desired effect, with less collateral damage than other approaches.

If the policy ends up being considerably deflationary overall it could be balanced by increasing RelaysToTokensMultiplier slightly so the overall effect on mint rate is 0.
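As a rough illustration (not part of the proposal itself): if the network-wide average weight after consolidation settled at some value below 1, the mint rate would stay roughly flat by scaling the multiplier by its inverse:

$$\text{RelaysToTokensMultiplier}_{\text{new}} = \frac{\text{RelaysToTokensMultiplier}_{\text{old}}}{\bar{w}}, \qquad \bar{w} = \text{network-wide average weight}$$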

I have introduced PIP-23 - Stake Weighted Algorithm for Node Selection (SWANS), which aims to create a simple node selection algorithm based on stake amount. It is only a few lines of code, easy to understand, maintains no state, and preserves the pseudorandom nature of the current cherry picker - it preserves the current logic in its entirety, as it’s only a transformation to the list of servicer nodes. Any feedback welcome.

Thanks for the updates to make this proposal better! I’d suggest one more modification:

I’d add a section to address the concern of too many edit-stake transactions hitting the network, even when there isn’t enough POKT to hit the next ceiling.

@luyzdeleon mentioned this potential bloat to the blockchain in the other weighted stake thread. However, doing regular edit-stake transactions all the time when you don’t have enough POKT to hit the next threshold would be a very expensive automation.

POKT servicer APR is currently around 25%. If someone were to do an automated edit-stake transaction after every session, they would have wasted over 350 POKT before there would be enough POKT to hit the next threshold. I highly doubt the network as a whole would waste that much POKT per node on an automation like that.

This automation would also mean that all their rewards are locked up in their node… so if at any time in the next 4 years (with a 25% APR) they wanted to pull out some of their rewards, they would have to unstake their entire node for 3 weeks.
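For context, the rough arithmetic behind those figures (assuming roughly one-hour sessions and the standard 0.01 POKT transaction fee; both are assumptions on my part):

$$15{,}000 \times 25\% = 3{,}750 \text{ POKT/yr} \;\Rightarrow\; \tfrac{15{,}000}{3{,}750} = 4 \text{ years to earn the next 15k tier}$$

$$4 \text{ yr} \times 8{,}760 \text{ sessions/yr} \times 0.01 \text{ POKT} \approx 350 \text{ POKT spent on edit-stake fees}$$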

@Andy-Liquify I’d suggest adding a dissenting opinions section, laying out this concern, and addressing it.

I think I misinterpreted @luyzdeleon’s response previously. I’d agree with your response here that this is not really strictly necessary to implement, given the current APY, the time to auto-compound, and the 21-day lock-in it would impose. A blocker on edit-stake, and the fee “burning” this would result in if the stake is not at the next bin, should be a big enough deterrent.

I will add dissenting opinions to the OP 🙂 Thanks for the suggestion, Shane.

@Andy-Liquify The only thing I would add here is the creation of 2 new DAO parameters:

  1. ValidatorStakeFloorMultiplier, which would give the DAO in any given scenario the power to set this to something other than MinStake (even if, for the time being, what’s being proposed is to use the same number as MinStake). This introduces more flexibility in the future to move this reward formula around based on different circumstances. The formula would change to:

reward = NUM_RELAYS * RelaysToTokensMultiplier * ((FLOORED_STAKE / ValidatorStakeFloorMultiplier) / X)

(where FLOORED_STAKE is the stake floored to the nearest multiple of ValidatorStakeFloorMultiplier)

  2. ValidatorStakeWeightMultiplier, which would allow the DAO to control X in your formula, in case the opposite effect (making the reward inflationary) were the desired outcome.

In regards to the potential incentive to edit-stake on every session creating block transaction bloat, I believe that, as long as this is properly communicated and the “tier gaps” are made big enough, it should prove enough of a counter-incentive to avoid block bloat.

The last question I would add to this proposal is whether or not to implement a ceiling here where the consolidation would stop, giving the DAO an effective parameter to control the consolidation ratio in case individual nodes are struggling to process the traffic being sent to them. In this case I propose you include a 3rd parameter called ServicerStakeWeightCeiling, indicating an absolute maximum amount of POKT after which the rewards become the same. This creates a “minimum validator stake” baseline, which would incentivize increasing the minimum stake of each validator to be at least this ceiling, giving the DAO a new incentive mechanism to secure the network alongside the rest of the economic policy being discussed in the Good Vibes and Phase Plan proposals (and any others that could come along).
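Putting the three parameters together, a rough sketch of how the reward path could look (illustrative pseudocode only; the parameter names are the ones proposed here, but nothing below is final pocket-core API):

// Floor the stake to the nearest multiple of ValidatorStakeFloorMultiplier,
// capped at ServicerStakeWeightCeiling (the consolidation ceiling)
flooredStake := Min(stake.Sub(stake.Mod(ValidatorStakeFloorMultiplier)), ServicerStakeWeightCeiling)
// Bin number, then divide by ValidatorStakeWeightMultiplier (the "X" above);
// this division would need to be decimal rather than integer to avoid truncating to 0
weight := flooredStake.Div(ValidatorStakeFloorMultiplier).Div(ValidatorStakeWeightMultiplier)
// Scale the existing per-relay reward by the weight
reward := relays.Mul(k.RelaysToTokensMultiplier(ctx)).Mul(weight)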

Thank you for putting so much work and care into this proposal.

Thanks for the input @luyzdeleon! Will add the new suggested params to the OP.

(for context before I update the OP) This is effectively controlled by X in my OP as

Min(stake.Sub(stake.Mod(minStake)), (X * MinStake))

This caps the weight at a maximum of 1.

But it makes sense to decouple these from the min stake and add the additional DAO parameters you suggested for greater flexibility. This allows the model to be either deflationary or inflationary.
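For example (using the current 15k MinStake and X = 5 purely for illustration):

$$\min\big(32{,}000 - (32{,}000 \bmod 15{,}000),\; 5 \times 15{,}000\big) = 30{,}000 \;\Rightarrow\; w = \tfrac{30{,}000}{75{,}000} = 0.4$$

$$\min\big(100{,}000 - (100{,}000 \bmod 15{,}000),\; 5 \times 15{,}000\big) = 75{,}000 \;\Rightarrow\; w = 1$$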

Made changes to the OP based on @luyzdeleon’s suggestions.

@Andy-Liquify What do you think about revising the current challenge burning mechanism to scale with the same reward formula? For reference see the following function:

If this is not updated alongside this proposal, nodes would see disproportionately smaller consequences to potential malfunction (by malice or negligence) impacting the network’s overall quality of service.

@luyzdeleon I’d love to get your input on PIP-23 as far as feasibility and implementation. Seems like it would be a simpler approach to the weighted staking problem, but I’m sure there are things I’m not considering. Some feedback would be appreciated.

Good suggestion. @poktblade also flagged this above, which I forgot to reply to.

Could do something like this:

// 1. Grab validator data
validator := k.GetValidator(ctx, address)
// 2. Grab staked amount
stake := validator.GetTokens()
// 3. Floor the staked amount to the nearest multiple of ValidatorStakeFloorMultiplier
//    (rounding down), or ServicerStakeWeightCeiling, whichever is smaller
flooredStake := Min(stake.Sub(stake.Mod(ValidatorStakeFloorMultiplier)), ServicerStakeWeightCeiling)
// 4. Calculate slash weight / bin
weight := flooredStake.Div(ValidatorStakeFloorMultiplier)
// 5. Calculate the coins to slash
coins := k.RelaysToTokensMultiplier(ctx).Mul(challenges).Mul(weight)

This will weight the coins to slash linearly depending on the bin you’re in, ranging from 1 to ServicerStakeWeightCeiling/ValidatorStakeFloorMultiplier. WDYT?

I believe this to be an appropriate solution!

Thanks for the quick reply.

Sorry, quick question. I’m trying to make sense of this, so I wrote a JS Fiddle to try and simulate the code proposed here

In short, I am starting with 16 nodes staked at 15k by 4 entities:
AAA BBBBBBBBB C DDD

Assuming RelaysToTokensMultiplier of 1 for simplicity, if I run 1 million relays through these nodes, 100 relays at a time, the earnings are as you would expect:

{A: 187500, B: 562500, C: 62500, D: 187500}

If we consolidate the nodes up to 75,000, we get the following node distribution:

A(45K) B(75K) B(60K) C(15K) D(45K)

If I run 1 million relays through these consolidated nodes, 100 relays at a time, using the algorithm above, the earnings are:

{A: 120000, B: 360000, C: 40000, D: 120000}

Would you mind taking a look at the fiddle to see if I’m missing something, or if I botched the implementation or misunderstood the formula? We’re basically missing 360,000 tokens.

Hi @iaa12, your implementation looks correct but your understanding is off here. Like I mentioned above, weighting it from the top down using the above variable values will make it deflationary unless you have 75k staked to a node (weight = 1, so it is the same as before). The reason the numbers are different for B before and after is that you have gone from 9/16 of the relays (56.25% of all relays going to B) to just 2/5 (40% of all relays going to B); relay for relay, in your example B will be the same before and after.

ValidatorStakeWeightMultiplier can be adjusted to even out these numbers. In your example, setting it to 3.2 and having a ServicerStakeWeightCeiling of 75k will result in the same numbers before and after.

5 was selected to ensure it was deflationary from the beginning. Once we get an understanding of how nodes are weighted in the field, it can be adjusted by a PUP.

Great, thank you for reviewing that; I figured I was missing something. A few follow-up questions then: How was this 3.2 determined; do we have a formula for this calculation? Am I correct in understanding that this value will be dependent on the state/configuration of the network at a point in time, and may drift over time as the network changes (nodes consolidate, stake, unstake, etc.)? Will a monitoring and adjustment mechanism be implemented in order to maintain rewards over time?

So this comes from the average bin; in your case it is as follows:

A(45K) B(75K) B(60K) C(15K) D(45K)

A = 3
B1 = 5
B2 = 4
C = 1
D = 3

Average = (16/5) = 3.2
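To sanity-check against your fiddle numbers: with ValidatorStakeWeightMultiplier = 3.2, each of the 5 consolidated nodes still sees 200,000 relays, so for example

$$B: 200{,}000 \times \tfrac{5}{3.2} + 200{,}000 \times \tfrac{4}{3.2} = 562{,}500, \qquad A: 200{,}000 \times \tfrac{3}{3.2} = 187{,}500$$

which matches the pre-consolidation earnings above.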

You could automate this, but it would add significant bloat, as you have to keep track of the bin values for all nodes and update the average on every stake/edit of a validator. Hence why I suggested having the weighting controlled by DAO params (which removes a lot of the complexity) that can be adjusted based on the distribution.

Can you indicate whether this would impact rewards for those node runners who’ve spent considerable effort and cost to build optimised high-speed nodes, please?

For example, would a node runner staking 30k on one node that has very high latency receive 2 times the reward of a node staking 15k that has very low latency?

i.e. are we throwing the concept of rewarding performant nodes out of the window in favour of well capitalised nodes?

Session selection would be unchanged, meaning nodes will enter the same number of sessions as they did before regardless of stake size (actually it would be more, since there’d be fewer nodes after consolidation happens).

Once nodes are in sessions, they’d still be selected proportionally based on latency per the cherrypicker algorithm, which is again independent of stake size.

What this change does is apply a multiplier to the rewards that the nodes are earning. Performant nodes with less stake would be earning proportionally fewer tokens per relay; however, they’d also be doing proportionally more relays, since with whales consolidating nodes there will be fewer nodes to compete with for session selection.
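Using the fiddle example above as an illustration (same numbers, nothing new): C’s single 15k node goes from roughly 1/16 of the relays to roughly 1/5 once the other entities consolidate, so

$$C_{X=5}: 200{,}000 \times \tfrac{1}{5} = 40{,}000 \qquad \text{vs.} \qquad C_{X=3.2}: 200{,}000 \times \tfrac{1}{3.2} = 62{,}500$$

i.e. at the “neutral” multiplier it earns exactly what it did before consolidation, despite the lower per-relay weight.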
