PIP-23 - Stake Weighted Algorithm for Node Selection (SWANS)

Summary

An algorithm for the implementation of node selection based on stake amount

Abstract

In order to allow for node consolidation and reduce the network’s infrastructure costs, a mechanism for the selection of nodes based on stake amount should be implemented.

Specification

Consider a network with 20 servicer nodes, staked at 15,000 POKT each, by 4 entities (A, B, C, D):

A A A
B B B B B B B B B B
C
D D D D D D

In order to reduce infrastructure costs, the 3 entities with more than one node consolidate their stakes. Let’s assume the network imposes a cap of 60,000 POKT per node.

A(45K) B(60K) B(60K) B(30K) C(15K) D(45K) D(45K)

The number of nodes on the network has been reduced from 20 to 7.

In order to ensure fair and distributed selection of nodes for sessions, an algorithm should be implemented to expand the consolidated nodes into an array of 15,000-POKT stake slots (illustrated here in JavaScript-style code):

var nodeList = [
  { id: "A", stake: 45000 }, { id: "B", stake: 60000 }, { id: "B", stake: 60000 },
  { id: "B", stake: 30000 }, { id: "C", stake: 15000 },
  { id: "D", stake: 45000 }, { id: "D", stake: 45000 }
];
var selectionArray = [];
var maxStake = 60000;
var minStake = 15000;

for (var node of nodeList) {
  // One slot per minStake staked, capped at maxStake (60,000 / 15,000 = 4 slots max)
  var slots = Math.min(maxStake, node.stake) / minStake;
  for (var i = 0; i < slots; i++) selectionArray.push(node);
}

The resulting array would be: [ A, A, A, B, B, B, B, B, B, B, B, B, B, C, D, D, D, D, D, D ] (20 entries)

This is identical to the node list pre-consolidation, so the current node selection procedure can be applied to this array unchanged.

A sample implementation/proof of concept based on the above can be found at this JSFiddle

In the future, a function can be applied to the slot calculation to tune the probability curve, which is linear in this algorithm. Quality metrics could also be folded into that function, increasing or decreasing the number of resulting slots based on service quality, to punish poor quality and incentivize high quality.
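As a rough sketch of what such a tuning function might look like (the `exponent` and `quality` parameters here are illustrative assumptions, not part of this proposal):

```javascript
// Hypothetical slot formula: a curve exponent bends the otherwise linear
// stake-to-slots relationship, and a per-node quality multiplier scales it.
function computeSlots(stake, quality, opts) {
  var minStake = opts.minStake;   // e.g. 15000
  var maxStake = opts.maxStake;   // e.g. 60000
  var exponent = opts.exponent;   // 1.0 = linear, i.e. this proposal's behavior
  var baseSlots = Math.min(maxStake, stake) / minStake;
  // quality is a multiplier around 1.0 (e.g. 1.3 boosts, 0.7 penalizes)
  return Math.round(Math.pow(baseSlots, exponent) * quality);
}

// With exponent 1.0 and neutral quality, this matches the linear algorithm above:
computeSlots(45000, 1.0, { minStake: 15000, maxStake: 60000, exponent: 1.0 }); // 3
```

An exponent below 1.0 would flatten the curve (diminishing returns for larger stakes); above 1.0 would favor consolidation more aggressively.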

The computed array can be cached for a period of time for optimal performance.
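A minimal sketch of such time-bounded caching (the TTL value and function names are illustrative assumptions):

```javascript
// Cache the computed selection array, rebuilding it only after a TTL expires.
var cache = { array: null, builtAt: 0 };
var CACHE_TTL_MS = 60 * 1000; // illustrative: rebuild at most once per minute

function getSelectionArray(nodeList, minStake, maxStake, now) {
  now = now || Date.now();
  if (cache.array && now - cache.builtAt < CACHE_TTL_MS) return cache.array;
  // Rebuild the slot array using the same expansion as the proposal.
  var selectionArray = [];
  for (var node of nodeList) {
    var slots = Math.min(maxStake, node.stake) / minStake;
    for (var i = 0; i < slots; i++) selectionArray.push(node);
  }
  cache.array = selectionArray;
  cache.builtAt = now;
  return cache.array;
}
```

Since stakes only change via staking transactions, the cache could equally be invalidated on stake changes rather than on a timer.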

Dissenting Opinions

TBD

Copyright

Copyright and related rights waived via CC0.


If this method is viable, I fully support it.

Is your proposal to integrate this with Shane’s Phased Plan, PUP-17? Or vote on it later and simply tack it on?


Thanks for the proposal @iaa12! I’m excited to get more minds on these issues.

Could you take a listen to Luis’ comments regarding weighted stake for node selection in the last node runner’s call and outline how this implementation addresses those concerns?


It’s tough for me to address those concerns, because I don’t know what implementation they had in mind. What I can say about this implementation though is:

  • It does not require keeping track of state. It is not necessary to keep track of what nodes got assigned to what sessions in the past. The probability of being selected for a session is identical to what it would be now, and the selection method can be identical to how it’s done now.

  • Horizontal scaling is preserved by virtue of capping the maximum stake per node. Staking more than the cap yields no additional benefit, and it becomes more advantageous to stake a new node instead. I heard through the grapevine @Jinx 's proposal had a 150,000 POKT cap (10x). Sounds reasonable to me, but I’m open to suggestions on the economics.

  • It does not fundamentally change the network’s reward model

  • Amount staked by validators will grow considerably, as the top 1000 would most likely be at or near the 150k cap, thus securing the network.

  • No small node runners get kicked off the network

  • Entry barrier remains low for newcomers, ensuring the network remains sufficiently decentralized

  • Great flexibility is created for the entire network to contract and expand with relay demand

  • Holding of POKT earnings is incentivized as adding more POKT to your stake increases your chances of being selected for a session, without increasing infrastructure costs

  • Amount earned per relay remains the same for everyone - equal pay for equal work

  • A mechanism to easily incorporate quality incentives at a later time is created


The intention is to supersede all other proposals, as this would achieve all goals without fundamentally changing the network’s economics.

If we look at PUP-15: GOOD VIBES, there are two parts to it. One is the implementation of a mechanism to drive node consolidation by greatly increasing validator rewards; the second is reducing inflation proportional to the reduction in node count. I think if we swap in SWANS as the consolidation incentive and retain the inflation adjustments based on node count, we’d have a good plan on our hands. So I’d see it more as GOOD SWANS.

I’m open to suggestions on the economics; I mainly wanted to address the tech side, which appeared to be a show stopper as far as moving this type of solution forward. I think we would need a corresponding drop in inflation to complement the drop in nodes, either via GOOD VIBES or WAGMI.


I have created a working sample implementation of the scenario presented in the original post using JSFiddle, available here.

The sample simulates running 100,000 relays through the non-consolidated list of nodes, 100 relays each, in sequential order. The results for the non-consolidated nodes are:

{A: 15000, B: 50000, C: 5000, D: 30000}

Then the consolidated list of nodes is processed through the proposed algorithm, and 100,000 relays are run against the resulting list, 100 relays each, in sequential order. The results for the consolidated nodes are:

{A: 15000, B: 50000, C: 5000, D: 30000}

This demonstrates that the proposed solution is viable: it keeps node runner earnings exactly as they are now, without impacting any other part of the network, while allowing for stake consolidation and infrastructure savings.
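The simulation described above can be sketched roughly as follows (this mirrors the example scenario; the exact JSFiddle code may differ):

```javascript
// Build the slot array from the consolidated nodes (same expansion as the
// proposal), then deal out relays in sequential batches and tally per entity.
var minStake = 15000, maxStake = 60000;
var consolidated = [
  { id: "A", stake: 45000 }, { id: "B", stake: 60000 }, { id: "B", stake: 60000 },
  { id: "B", stake: 30000 }, { id: "C", stake: 15000 },
  { id: "D", stake: 45000 }, { id: "D", stake: 45000 }
];

var selectionArray = [];
for (var node of consolidated) {
  var slots = Math.min(maxStake, node.stake) / minStake;
  for (var i = 0; i < slots; i++) selectionArray.push(node);
}

// 100,000 relays total, 100 per batch, assigned to slots in sequential order.
var totals = {};
var totalRelays = 100000, relaysPerBatch = 100;
for (var b = 0; b < totalRelays / relaysPerBatch; b++) {
  var chosen = selectionArray[b % selectionArray.length];
  totals[chosen.id] = (totals[chosen.id] || 0) + relaysPerBatch;
}
console.log(totals); // { A: 15000, B: 50000, C: 5000, D: 30000 }
```

Each of the 20 slots receives exactly 1/20th of the relays, which is why the totals match the pre-consolidation distribution.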

I have created another sample implementation which introduces rudimentary quality incentives. If a node’s response time is below the network average, it gets a 30% increase in selection probability; if it is above the network average, it gets a 30% penalty.

Assuming nodes B are underperforming and node C is overperforming, rewards reflect the quality incentive:

Without incentive: {A: 15000, B: 50000, C: 5000, D: 30000}
With quality incentive: {A: 18000, B: 42000, C: 7800, D: 32200}

JSFiddle Demo available here
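One simple way to fold quality into the slot calculation is to scale each node's slot count by a multiplier. This is an illustrative sketch only, not the logic of the linked fiddle, so its totals differ from the figures above:

```javascript
// Illustrative sketch: scale slot counts by a quality multiplier
// (1.3 for below-average response time, 0.7 for above-average).
var minStake = 15000, maxStake = 60000;

function qualitySlots(node) {
  var baseSlots = Math.min(maxStake, node.stake) / minStake;
  // Round to whole slots, but never drop a staked node to zero.
  return Math.max(1, Math.round(baseSlots * node.quality));
}

var nodes = [
  { id: "A", stake: 45000, quality: 1.0 },
  { id: "B", stake: 60000, quality: 0.7 },  // underperforming
  { id: "B", stake: 60000, quality: 0.7 },
  { id: "B", stake: 30000, quality: 0.7 },
  { id: "C", stake: 15000, quality: 1.3 },  // overperforming
  { id: "D", stake: 45000, quality: 1.0 },
  { id: "D", stake: 45000, quality: 1.0 }
];

var selectionArray = [];
for (var node of nodes) {
  for (var i = 0; i < qualitySlots(node); i++) selectionArray.push(node.id);
}
```

Note that integer rounding swallows small adjustments (C's single base slot gets no visible boost here), which is a reason to adjust selection probability directly rather than whole slot counts.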

Feedback welcome.

Thank you @iaa12 for your proposal!

I believe that, from an economic standpoint, this proposal achieves the same goal as PIP-22 - Weighted staking, minus the potential for scaled slashing discussed in this reply: PIP-22 - Weighted staking - #16 by luyzdeleon, since this proposal focuses on selection rather than rewards.

However, my biggest dissenting opinion about this proposal would be its surface of impact on the current implementation, opening the surface for potential bugs and unforeseen issues at the technical level. This change would imply the following impact to the current implementation:

  1. The whole Dispatch and Relay mechanisms would have to be modified, and the QA regression scenarios would need to be augmented significantly to account for the different categories of node stake distributions and re-benchmarked, introducing potential unknown unknowns to these critical functions of the software.

  2. Claim and Proof transaction validation would have to be revised and re-benchmarked (potentially impacting block production times, which impacts session transitions and other block-based functionality).

  3. The SessionCache used for performance in both mechanisms mentioned above would have to be revised.

I believe PIP-22 to be safer to implement because it only introduces changes to the Service Rewards and Slashing functionalities which are centralized to only 3 functions in the codebase, with no impact to how applications interact with Pocket Core or how transaction validation occurs, while achieving the same economic incentive mechanism.


@luyzdeleon

Thank you so much for taking the time to review and provide feedback. I really appreciate it.


I would like to go ahead and withdraw this proposal. While I continue to believe it is the best and “right” way to implement stake weighting, @luyzdeleon 's feedback along with the newly released V1 Roadmap makes it clear that a less resource intensive solution that allows the development team to focus on delivering V1 is the appropriate way forward at this time.

As such I endorse PIP-22 as an acceptable stake weighting solution should the DAO wish to venture down that path.

Thank you to everyone who took time out of their day to read and understand this proposal, and for everyone’s feedback.
