PIP-22: Stake-weighted Servicer Rewards

Updated 20/06/22 by @JackALaing to make the proposal co-authored by Andy, myself, and @luyzdeleon. Our amendments include integrating our own motivations, adding more specification details, removing specified parameter values from the proposal, and adding the core dev team’s recommended implementation details.

Updated 13/06/22 to include @luyzdeleon's suggested changes: addition of DAO-configurable parameters and decoupling of max weight from 1 using ServicerStakeWeightCeiling

Updated 13/06/22: Have changed the weighting to be against the top bin and decrease linearly to 1/X being the bottom bin. This solves the issues spotted by @addison (and ensures it is the same/deflationary from where we are now as opposed to inflationary with the previous approach). Whilst still preserving the “simpleness”.


  • Author(s):
    • @Andy-Liquify: the founder of Liquify, an infrastructure as a service company catering to projects and institutions. I have over 10 years experience in developing safety critical software.
    • @luyzdeleon: CTO, Pocket Network Inc.
    • @JackALaing: CGO, Pocket Network Inc.
  • Implementer(s): Liquify team with support from the Pocket Core team
  • Category: Protocol Upgrade


Abstract

Enable weighted staking by scaling per-relay rewards according to the stake size of a servicer.

Background / Motivation

There are a number of proposals (formal in the forum and informal in chat) that aim to consolidate the node count in order to reduce the network’s operating costs. The majority of these have trade-offs and known unknowns:

  1. Uses Negative Incentives: e.g. increasing StakeMinimum to force everyone to consolidate – the number of single-node runners is anticipated to be in the thousands, and it's unknown how they'd react to being told that the rules of entry have changed
  2. Diminishes Servicer Incentives: e.g. increasing ProposerAllocation to incentivize optimizing to be a validator rather than a servicer, since validators are stake-weighted – it's unknown how many would continue maintaining their backend RelayChain nodes if/when they realize that producing blocks is more profitable than servicing relays and that their backend RelayChain nodes are now the majority of their operating expense, and thus the 2nd-order effects this could have on the quality of Pocket's service are unknown
  3. Creates More Inefficiencies: e.g. stake-weighted servicer selection, meaning that larger-staked servicers are selected more often into sessions and ultimately receive more relays – the 2nd-order effects this could have on block processing and node resources are unknown, since it involves making the session generation algorithm more complicated, and session generation is currently a bottleneck in block processing. We'd increase individual node operating expenses in the process of reducing total node counts, bringing us back to square one

When these debates first started, some core team members gravitated towards stake-weighted servicing since it didn’t introduce either of the first two trade-offs - 1) it is a positive incentive that rewards consolidation rather than penalizing non-consolidation, 2) it reinforces servicer incentives. However, our enthusiasm for this solution was dampened by the 3rd category of trade-off, since we knew that complicating the session generation algorithm was a big undertaking that could make operating costs more expensive again.

@Andy-Liquify’s proposal took a creative approach to stake-weighting which doesn’t touch the session generation algorithm and instead scales per-relay rewards according to stake. In other words, we had assumed the only way to stake-weight was on the front-end (more sessions, more relays), but Andy introduced the notion that we can stake-weight on the back-end (more rewards per relay). And it seems to be a lot simpler computationally.

We’ve spent the last week discussing this proposal internally, getting Andy’s consent to become co-authors, and now here we are with an amended PIP-22 for your consideration.


Specification

Instead of stake-weighting session selection, which adds state bloat, we can stake-weight rewards according to the formula below. The stake-weighted rewards formula should be parameterized to allow for the DAO to calibrate incentives.

reward = NUM_RELAYS * RelaysToTokensMultiplier * ((flooredStake / ValidatorStakeFloorMultiplier)^ValidatorStakeFloorMultiplierExponent) / ValidatorStakeWeightMultiplier

where flooredStake = MIN(stake - (stake % ValidatorStakeFloorMultiplier), ServicerStakeWeightCeiling), i.e. the stake rounded down to its bin and capped at the ceiling.

New DAO ACL parameters:

  1. ValidatorStakeFloorMultiplier - This corresponds to the bin widths, i.e. the size of the stake increments, or number of extra tokens required to increase the stake;
  2. ValidatorStakeWeightMultiplier - This is the maximum multiple/divisor allowed to scale the rewards;
  3. ServicerStakeWeightCeiling - This sets the upper limit for consolidation;
  4. ValidatorStakeFloorMultiplierExponent - Defines the linearity of the stake-weighting curve, with 1 being linear and <1 non-linear.

These parameters are new economic levers that the DAO can pull to refine the incentives of servicer consolidation.

For example, if servicers are spamming edit stake transactions to compound as quickly as possible, we can increase the ValidatorStakeFloorMultiplier (perhaps paired with an increased transaction fee). As another example, if the DAO wants to more aggressively incentivize consolidation, it can set ValidatorStakeFloorMultiplierExponent closer to 1, whereas if the DAO wants to encourage more horizontal scaling (or equalize rewards for the single node runners) it can set the param closer to 0.

Example: if ValidatorStakeWeightMultiplier = 5, ValidatorStakeFloorMultiplier = 15000, ServicerStakeWeightCeiling = 5 * 15000 = 75000, and ValidatorStakeFloorMultiplierExponent = 1:

| Stake  | Floored Value | Reward Rate |
|--------|---------------|-------------|
| 15000  | 15000         | 1/5         |
| 17200  | 15000         | 1/5         |
| 30000  | 30000         | 2/5         |
| 45000  | 45000         | 3/5         |
| 60000  | 60000         | 4/5         |
| 75000  | 75000         | 5/5         |
| 150000 | 75000         | 5/5         |
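The table above can be reproduced with a short sketch (Python, illustrative only; the bin math follows the formula in the Specification, and the parameter values are the example's, not proposed defaults):

```python
def floored_stake(stake, floor_mult=15_000, ceiling=75_000):
    """Round the stake down to its bin, capped at ServicerStakeWeightCeiling."""
    return min(stake - stake % floor_mult, ceiling)

def reward_rate(stake, floor_mult=15_000, weight_mult=5, exponent=1):
    """Stake-weighted reward multiplier, per the formula above."""
    fs = floored_stake(stake, floor_mult, weight_mult * floor_mult)
    return (fs / floor_mult) ** exponent / weight_mult

for stake in (15_000, 17_200, 30_000, 45_000, 60_000, 75_000, 150_000):
    print(stake, reward_rate(stake))  # matches the Reward Rate column
```

Setting `exponent` below 1 compresses the gap between bins, which is the lever described above for equalizing rewards toward single-node runners.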

Feature Flags

In order to fully realize the configurability of this functionality, new feature flags would need to be implemented:

  1. RSCAL (Reward Scaling): This flag tells the protocol that reward scaling is active when on and to use the formulas proposed. When off, rewards revert to the original formula of coins = RelaysToTokensMultiplier * Relays. The idea behind this feature flag is that if any issues arise with the mechanism, the whole mechanism can be disabled and the network can fall back to the already-known behavior as a default in case of crises and/or unknown unknowns.
  2. VEDIT (Validate Edit Stake): If RSCAL is true, this flag would activate the edit stake validation feature described below, meaning that edit stake transactions would be invalid if they do not cause the stake size to reach the next bin. This will give the DAO an emergency fallback to counteract edit stake spamming.

State transition rules

The below pseudo-code is meant to illustrate the business rules of the state transitions that are impacted by this change; the Implementers (PNI core devs) have full discretion on the actual implementation as long as it achieves the same results being illustrated.

Proof lifecycle validation (rewards calculation)

Entrypoint function: https://github.com/pokt-network/pocket-core/blob/8ad860a86be1cb9b891c1b6f244f3511d56a9949/x/nodes/keeper/reward.go#L11

    IF RSCAL is on
        // Calculate bin
        flooredStake = MIN(nodeStake - (nodeStake % ValidatorStakeFloorMultiplier), ServicerStakeWeightCeiling)

        // Calculate weight
        weight = ((flooredStake / ValidatorStakeFloorMultiplier)^ValidatorStakeFloorMultiplierExponent) / ValidatorStakeWeightMultiplier

        // Calculate coins based on weight
        coins = RelaysToTokensMultiplier * totalRelays * weight
    ELSE
        // Original (unweighted) formula
        coins = RelaysToTokensMultiplier * totalRelays

Validate Edit Stake

Entrypoint Function: https://github.com/pokt-network/pocket-core/blob/de8ec8c46cdcf17606b1491e2fca9f8ea3f7d716/x/nodes/keeper/valStateChanges.go#L197

In this case, the following pseudocode doesn't replace the existing validations; it is added to them. The goal of this state transition is to only allow stake increases when a node is actually increasing its stake to the next bin, up to ServicerStakeWeightCeiling; if the new stake amount is higher than the ceiling, the transaction is allowed to go through.

    // We only enforce if someone is trying to stake within the limits of reward scaling
    IF newStakeAmount < ServicerStakeWeightCeiling
        // Calculate the bin in which the new stake would fall under
        flooredStake = newStakeAmount - (newStakeAmount % ValidatorStakeFloorMultiplier)
        IF flooredStake < currentStakeAmount
            // Throw invalid edit stake error
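A runnable version of the check above (Python, illustrative; the 15k bin width and 75k ceiling are taken from the earlier example, not proposed values):

```python
def is_valid_edit_stake(new_stake, current_stake,
                        floor_mult=15_000, ceiling=75_000):
    """VEDIT rule: reject an edit-stake that doesn't reach the next bin."""
    # Only enforce when staking within the limits of reward scaling
    if new_stake < ceiling:
        floored = new_stake - new_stake % floor_mult
        if floored < current_stake:
            return False  # would throw the invalid edit stake error
    return True
```

For example, a node at 15,500 can't bump to 16,000 (still inside the 15k bin) but can jump to 30,000, and any amount at or above the ceiling passes unconditionally.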

We want to include the option to protect the Validate Edit Stake functionality because, in practice, we've seen that the economic incentive alone is not enough of a deterrent: node runners want to see state transitions play out as soon as possible to gain as much of an advantage as possible, given that there's limited block space to accommodate new node configurations.

This feature will be flagged and deactivated to begin with. If the incentives alone are not enough to prevent edit stake spamming, this feature can be activated. Note that activating this feature introduces the trade-off of making it harder to do slash-prevention top-ups, which may necessitate a revisit of edit stake logic.

Burn Challenge

Entrypoint function: https://github.com/pokt-network/pocket-core/blob/98a12e0f1ecb98e40cd2012e081de842daf43e90/x/nodes/keeper/slash.go#L15

In this case we want to scale slashing for servicing-related offenses (replay attempts at proofs and client-side challenges) by the same weight when reward scaling is active:

    IF RSCAL is on
        // Calculate bin
        flooredStake = MIN(nodeStake - (nodeStake % ValidatorStakeFloorMultiplier), ServicerStakeWeightCeiling)

        // Calculate weight
        weight = ((flooredStake / ValidatorStakeFloorMultiplier)^ValidatorStakeFloorMultiplierExponent) / ValidatorStakeWeightMultiplier

        // Calculate coins to burn based on weight
        burnedCoins = RelaysToTokensMultiplier * totalChallenges * weight
    ELSE
        // Original (unweighted) formula
        burnedCoins = RelaysToTokensMultiplier * totalChallenges


We covered some of this in the Background section, but here's a cost-benefit summary:


Pros

  • Consolidation of nodes = reduced cost to run the network
  • (Relatively) simple change
  • Easy to calibrate by adjusting the new DAO params
  • Positive side-effect of increasing the minimum validator stake, since servicer consolidation = validator consolidation


Cons

  • Nodes no longer get paid the same amount for doing the same work
  • Harder to track rewards – but Andy suggests that we could shift the metric to be relay-based rather than reward-based (or make some changes to indexers)

Dissenting Opinions

The dissenting opinions in the previous version of this proposal have been addressed in the amended specification.

While we recognize that this mechanism introduces a deflationary component, it’s our view that if the deflation starts to squeeze smaller node runners the RelaysToTokensMultiplier parameter can be adjusted upwards. This proposal is focused on the introduction of a new mechanism with new economic levers that the DAO can calibrate; we believe economic concerns like this can and should be addressed in a separate PUP defining the initial parameter values and subsequent PUPs if adjustment is needed.

We believe that staying horizontally scaled for the chance of getting more relays, relative to the guaranteed multiplier on rewards from consolidating, is a more uncertain strategy. This would mean that, even if outliers adopt a horizontal strategy, the majority of node runners would adopt the more certain consolidation strategy.


Scope of Proposal

This proposal has been narrowed in scope to an abstract parameterized mechanism. Approving this proposal means approving the mechanism of weighting servicer rewards by servicer stake, which is the signal that the core devs need to begin coding it up. Before activating this mechanism, a separate PUP must be approved defining the initial values of the new DAO parameters.

Technical Discretion

The formula, parameters, and pseudo-code outlined in the Specification section are meant to illustrate how this new stake-weighting mechanism is likely to work. The Implementers (PNI core devs) have full discretion on the actual implementation as long as it achieves the same results being illustrated.

Release & Timeline

The changes included in this proposal should be included in the next consensus breaking release for Pocket Core RC-0.9.0, which is targeted for mid-July.


Thanks for this proposal. I think something of this sort could end up being a very elegant solution to our qualms. We just need to ensure that changes that differ from native stake weighting preserve the EV of native stake weighting.

With this proposed modification I believe there is a flaw. Say the mint rate is 1 pokt per relay, there are 1000 nodes with 15k staked, and 1000 relays per day. The total pocket minted per day would be 1000 pokt (each node receives one relay, and it mints one pokt). Then, say we implement this change, and the number of nodes halves to 500 nodes, each with 30k pokt. Now, the total number of relays each day is still 1000, but each node receives 2 relays.

Per this, each node would mint 4 pokt per day, bringing the total mint of the network to 2k pokt a day. I might be missing something, but this appears to happen.
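The arithmetic in this example can be checked directly (a sketch assuming the earlier draft's naive weight of stake/minStake and evenly distributed relays):

```python
relays_per_day = 1_000

# Before: 1,000 nodes at 15k stake, weight 1, one relay per node per day
before_mint = 1_000 * (relays_per_day / 1_000) * 1   # 1,000 pokt/day

# After: 500 nodes at 30k stake, naive weight = 30k / 15k = 2,
# two relays per node per day
after_mint = 500 * (relays_per_day / 500) * 2        # 2,000 pokt/day

print(before_mint, after_mint)
```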

I think there might be something there with modifying the pokt per relay parameter in accordance with the average servicer stake, but I still need to put more thought into that. I like the direction of this though – finding a very simple modification to consensus that serves as a remedy.


Agreed. It’s inflationary and the inflation gets paid to the nodes that do not consolidate.


@addison yep you are right, can't believe I missed that! You need to weight the bins around the average to allow the randomness of servicer selection to alleviate this flaw. Will do some thinking around this tomorrow.

I was speaking to BlockJoe about this idea a few weeks ago. He mentioned that we should also modify the slashes to be also ratio’d. Something to consider!

Ok, I've been dreaming about this (sad, I know), but I think getting it working the way I originally intended is going to be difficult: it requires you to set a saddle point in the middle and then have the bins left and right of that saddle point complement each other to result in net 0. This is totally achievable, but it adds state bloat and will require smaller bins to get the granularity needed.

What did come to me, though, and may have merit: weight the bins as weight/numBins, effectively setting the max bin to be the same as it is currently (RelaysToTokensMultiplier). This will result in it always being deflationary from where we are now. If anything, it is effectively like upping the min stake, but without completely penalising smaller runners and while still allowing people to consolidate nodes (again with only a few lines of code changing). It will also still complement Shane's PUP17 by increasing the validator threshold above X faster.


I’m dropping my proposal in favor of this approach. I think the complexities are easier to work out on this one.


I have made some alterations to the OP to match my comments from yesterday. I have changed the weighting to be against the top bin and decrease linearly to 1/X at the bottom bin. This solves the issues spotted by @addison (and ensures it is the same/deflationary from where we are now, as opposed to inflationary with the previous approach), whilst still preserving the "simpleness". Like I mentioned, it is effectively now like increasing the minimum stake, but without completely shutting down smaller runners.


This seems like a great approach, very small code change that will have the desired effect and with less collateral damage than other approaches.

If the policy ends up being considerably deflationary overall it could be balanced by increasing RelaysToTokensMultiplier slightly so the overall effect on mint rate is 0.

I have introduced PIP-23 -Stake Weighted Algorithm for Node Selection (SWANS) which aims to create a simple node selection algorithm based on stake amount. It is only a few lines of code, easy to understand, maintains no state, and preserves the pseudorandom nature of the current cherry picker - it preserves the current logic in its entirety, as it’s only a transformation to the list of servicer nodes. Any feedback welcome.

Thanks for the updates to make this proposal better! I’d suggest one more modification:

I’d add a section to address this concern of too many edit stake transactions hitting the network, even if there isn’t enough POKT to hit the next ceiling.

@luyzdeleon mentioned this potential bloat to the blockchain in the other weighted stake thread. However, doing regular edit-stake transactions all the time, when you don't have enough POKT to hit the next threshold, could be a very expensive automation.

POKT servicer APR is currently around 25%. If someone were to do an automated edit-stake transaction after every session, they would have wasted over 350 POKT before there would be enough POKT to hit the next threshold. I highly doubt the network as a whole would waste that much POKT per node on an automation like that.

This automation also would mean that all their rewards are locked up in their node… so if anytime in the next 4 years (with a 25% APR) they wanted to pull out some of their rewards, they would have to unstake their entire node for 3 weeks.

@Andy-Liquify I’d suggest putting a dissenting opinions section and laying out the concern and address it.


I think I misinterpreted @luyzdeleon's response previously. I'd agree with your response here that this is not really strictly necessary to implement, given the current APY, the time to auto-compound, and the 21-day lock-in it would impose. A blocker on edit stake, and the fee "burning" this would cause when not at the next bin, should be a big enough deterrent.

I will add dissenting opinions to the OP :slight_smile: Thanks for the suggestion Shane.


@Andy-Liquify The only thing I would add here is the creation of 2 new DAO parameters:

  1. ValidatorStakeFloorMultiplier which would allow the DAO in any given scenario the power to modify this to something other than MinStake (even if for the time being what’s being proposed is to use the same number as MinStake). This introduces more flexibility in the future to move this reward formula around based on different circumstances. The formula would change to:

reward = NUM_RELAYS * RelaysToTokensMultiplier * ((FLOOR/ValidatorStakeFloorMultiplier)/(X*ValidatorStakeFloorMultiplier))

  2. ValidatorStakeWeightMultiplier which would allow the DAO to control X in your formula, in case the opposite effect (making the reward inflationary) is the desired outcome.

With regard to the potential incentive to edit stake on every session creating block transaction bloat, I believe that, as long as it's properly communicated and the "tier gaps" are made big enough, this should prove enough of a counter-incentive to avoid block bloating.

The last question I would add to this proposal is whether or not to implement a ceiling here where the consolidation would stop, giving the DAO an effective parameter to control the consolidation ratio in case individual nodes are struggling to process the traffic being sent to them. In this case I propose you include a 3rd parameter called ServicerStakeWeightCeiling, indicating an absolute maximum amount of POKT after which the rewards become the same. This creates a "minimum validator stake" baseline, which would incentivize increasing the minimum stake of each validator to at least this ceiling, giving the DAO a new incentive mechanism to secure the network alongside the rest of the economic policy being discussed in the Good Vibes and Phase Plan proposals (and any others that could come along).

Thank you for putting so much work and care on this proposal.


Thanks for the input @luyzdeleon! Will add the new suggested params to the OP.

(for context before I update the OP) This is effectively controlled by X in my OP as

Min(stake.Sub(stake.Mod(MinStake)), X * MinStake)

This results in a max weight of no greater than 1.

But it makes sense to decouple these from the min stake and add the additional DAO parameters you suggested for greater flexibility. This allows the model to be either deflationary or inflationary.


Made changes to the OP based on @luyzdeleon suggestions


@Andy-Liquify What do you think about revising the current challenge burning mechanism to scale with the same reward formula? For reference see the following function:

If this is not updated alongside this proposal, nodes would see disproportionately smaller consequences to potential malfunction (by malice or negligence) impacting the network’s overall quality of service.


@luyzdeleon I'd love to get your input on PIP-23 as far as feasibility and implementation. It seems like it would be a simpler approach to the weighted staking problem, but I'm sure there are things I'm not considering. Some feedback would be appreciated.


Good suggestion! @poktblade also flagged this above, which I forgot to reply to.

Could do something like this:

    # 1. Grab validator data
    validator := k.GetValidator(ctx, address)
    # 2. Grab staked amount
    stake := validator.GetTokens()
    # 3. Floor the staked amount to the lowest multiple of ValidatorStakeFloorMultiplier
    #    or ServicerStakeWeightCeiling, whichever is smaller
    flooredStake := Min(stake.Sub(stake.Mod(ValidatorStakeFloorMultiplier)), ServicerStakeWeightCeiling)
    # 4. Calculate slash weight / bin
    weight := flooredStake.Div(ValidatorStakeFloorMultiplier)
    # 5. Calculate the coins to slash
    coins := k.RelaysToTokensMultiplier(ctx).Mul(challenges).Mul(weight)

This will weight the coins to slash linearly depending on the bin you're in, ranging from 1 to ServicerStakeWeightCeiling/ValidatorStakeFloorMultiplier. WDYT?
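For clarity, here is the same bin-weighted slash as a runnable sketch (Python, illustrative only; the 15k bin width and 75k ceiling are assumed from the OP's example, and RelaysToTokensMultiplier is simplified to an integer):

```python
def slash_weight(stake, floor_mult=15_000, ceiling=75_000):
    """Linear bin weight for burns: 1 at the bottom bin up to ceiling/floor_mult."""
    floored = min(stake - stake % floor_mult, ceiling)
    return floored // floor_mult

def burned_coins(stake, challenges, relays_to_tokens=1):
    """Scale the challenge burn by the node's bin, mirroring the reward scaling."""
    return relays_to_tokens * challenges * slash_weight(stake)
```

So a node staked at 75k is burned five times as much per challenge as a node at 15k, keeping consequences proportional to the scaled rewards.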


I believe this to be an appropriate solution!

Thanks for the quick reply.


Sorry, quick question. I’m trying to make sense of this, so I wrote a JS Fiddle to try and simulate the code proposed here

In short, I am starting with 16 nodes staked at 15k by 4 entities:

Assuming RelaysToTokensMultiplier of 1 for simplicity, if I run 1 million relays through these nodes, 100 relays at a time, the earnings are as you would expect:

{A: 187500,B: 562500,C: 62500,D: 187500}

If we consolidate the nodes up to 75,000, we get the following node distribution:

A(45K), B(75K), B(60K), C(15K), D(45K)

If I run 1 million relays through these consolidated nodes, 100 relays at a time, using the algorithm above, the earnings are:

{A: 120000,B: 360000, C: 40000, D: 120000}

Would you mind taking a look at the fiddle to see if I'm missing something, botched the implementation, or misunderstood the formula? We're basically missing 360,000 tokens.
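For what it's worth, a straight re-implementation of the scenario (Python, relays split evenly across the five nodes, RelaysToTokensMultiplier = 1, bin parameters from the OP's example) gives the same numbers, so the shortfall is the intended deflationary effect of weighting against the top bin rather than a bug:

```python
from collections import defaultdict

# (owner, stake) for the consolidated node set; B runs two nodes
nodes = [("A", 45_000), ("B", 75_000), ("B", 60_000), ("C", 15_000), ("D", 45_000)]
total_relays = 1_000_000
relays_each = total_relays / len(nodes)  # even pseudorandom selection, on average

earnings = defaultdict(float)
for owner, stake in nodes:
    # floor to the 15k bin, cap at 75k, divide by the top bin (5 bins)
    weight = min(stake - stake % 15_000, 75_000) / 15_000 / 5
    earnings[owner] += relays_each * weight

print(dict(earnings))                          # {'A': 120000.0, 'B': 360000.0, 'C': 40000.0, 'D': 120000.0}
print(total_relays - sum(earnings.values()))   # 360000.0 tokens not minted
```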