PUP-25: Non-Linear Stake Weighting For PIP-22

That is true, but that’s why we ran our analysis on the most stable network (currently Polygon). Also, we did not analyze the raw number of relays directly, but each node’s share of the total relays each day (relays worked by a node divided by the total relays in the network).

Finally, that assumption is not ours; it is the one behind PIP-22’s linear stake weighting. Our document proves that it does not hold.

The assumption of PIP-22 is that the opportunity should not be impacted, but PIP-22 has no ability to control the other random variables that impact total rewards. A quick overview of what actually influences rewards is here:

It is also important to note that relay rewards depend directly on the individual session an application has been paired with, for each unique session.

All of those variables and functions are defined explicitly in the document. If PIP-22 assumed that rewards are linearly proportional to the number of sessions, that was an incorrect assumption; the two are related, but not linearly.

And even then, this is an oversimplification, since it cannot be assumed that the relays arriving for a given chain ID are evenly distributed across sessions.

I think we agree here. We are trying to prove exactly this. I don’t see how what you are saying conflicts with our document.

The document you shared is quite accurate; however, it makes the assumption that the cherry picker already knows each node’s response time. The probability of being allocated relays in a session is not fixed; it evolves over the course of the session. The effect of this measuring period is strong when the number of relays to serve is small.
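
To illustrate the point, here is a toy model (my own sketch, not the actual cherry picker implementation) in which a picker routes each relay to the node with the best observed mean latency, probing unmeasured nodes first:

```python
import random

# Toy sketch, NOT the real cherry picker: route each relay to the node with
# the best observed mean latency, after probing every node at least once.
random.seed(0)
TRUE_LATENCY = [0.05, 0.10, 0.20, 0.40, 0.80]  # 5 paired nodes (hypothetical)

def run_session(n_relays: int) -> list[float]:
    observed = [[] for _ in TRUE_LATENCY]
    counts = [0] * len(TRUE_LATENCY)
    for _ in range(n_relays):
        unmeasured = [i for i, obs in enumerate(observed) if not obs]
        if unmeasured:
            i = unmeasured[0]  # measuring period: probe an unknown node
        else:
            means = [sum(obs) / len(obs) for obs in observed]
            i = means.index(min(means))  # exploit the best observed node
        observed[i].append(random.gauss(TRUE_LATENCY[i], 0.01))
        counts[i] += 1
    return [c / n_relays for c in counts]

for n in (10, 10_000):
    print(f"{n:>6} relays -> largest node share: {max(run_session(n)):.2f}")
```

With only 10 relays, half of them are spent probing, so even the fastest node can capture at most about 60% of the session; with 10,000 relays the probe phase is negligible and the allocation is almost fully concentrated.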

We decided to use only hard historical data in our analysis because, when we used a simulation, we received negative feedback saying that it “was just a simulation”. We included all the sources of randomness that you mention in our previous analysis.

I think what I might be looking for is, instead of breaking down Polygon by day, breaking it down by proportion per session. Overall, I think the basis of your analysis is sound; I just think you have to narrow the bins to strip out some of the other variance and really home in on whether this variation is due to PIP-22.

Narrowing it down by session might not be the most straightforward task, but I’d be happy to help tag-team that effort.

Does this proposal include compensation for everyone who gave up 75% of their rewards for 21 days in order to comply with the original goal of PIP-22?
Does this proposal include compensation for everyone who will need to lose 100% of their rewards for 21 days in order to restake back at 15k and get equal treatment under the new exponent?

PIP-22 is not broken.
It does not favor the 60k bucket over others.
It works exactly as it was intended.

If PIP-24 (Transfer Stake) is approved, this will not be an issue. We hope that it will be approved.

I never said that PIP-22 is the problem; PUP-21 is the problem.

We do not agree. We look forward to seeing evidence that supports these statements.

The code is the evidence.

I’m out all this week; I’ll take a closer look on Monday. I did a quick scan of the PUP and couldn’t find the actual value you are proposing to set the exponent to (which may be due to my trying to read this on a small cellphone screen). I’m not exactly sure what to do with a parameter update proposal that doesn’t propose a new value. Perhaps put a stake in the ground, even if you haven’t settled on the final number, so that new vs. current can be evaluated. Later, the proposed new value can be modified based on the ensuing discussion.

Just a couple of notes until I can look closer next week. One, re fairness: last week I posted on Telegram 48-hour data for four different domains showing that, within their own span of 15, 30, 45 and 60k nodes, there is no bias of relays toward or away from one bin versus another. I know that doesn’t quite hit the concern you raise, but I do think it is materially important to the discussion.

Second, the movement to 4x consolidation over the summer was dominated by major token holders using the occasion of unstaking to move from underperforming providers directly to 60k nodes on better-performing providers. Any analysis of shifts of pie slice between June and now must take this into account.

I personally don’t think the exponent should be used to counter inflation just because better-QoS nodes are in the top bin. The purpose of the exponent was to disincentivize or incentivize consolidation (when set below or above 1, respectively). Setting it to counter inflation is counterintuitive and a never-ending battle, as nodes will continue to shift. IMO the exponent should only be used to reduce or increase the on-chain node count.
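
For intuition, here is the relative multiplier of a 60k node versus a 15k one, (60k / 15k)^exp, for a few exponent values (the global normalization by SSWM is omitted):

```python
# Relative reward multiplier of a 60k-stake node vs. a 15k-stake node under
# PIP-22 stake weighting: (60k / 15k) ** exp. Global normalization omitted.
for exp in (0.5, 0.7, 1.0, 1.2):
    print(f"exp = {exp}: 60k node earns {(60_000 / 15_000) ** exp:.2f}x a 15k node")
```

Below 1 the multiplier grows less than proportionally with stake, so consolidating reduces yield per POKT staked; at 1 it is exactly proportional; above 1 it is more than proportional.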

If higher-QoS nodes are in the top bin, then this will be picked up in WAGMI/FREN by looking at the trailing average.

A like-for-like node (same QoS) will earn the same POKT per POKT staked regardless of the bin. I struggle to see where the 15% you claim comes from, other than QoS differences between bins.
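
A minimal numeric sketch of that claim (the relay count and per-relay reward below are hypothetical), assuming equal QoS means equal relays per node and an exponent of 1:

```python
# With exp = 1, reward per POKT staked is identical across bins for nodes
# serving the same number of relays. All numbers below are hypothetical.
RELAYS_PER_NODE = 10_000   # assumed equal across bins (same QoS)
REWARD_PER_RELAY = 0.01    # hypothetical effective POKT per relay

for stake in (15_000, 30_000, 45_000, 60_000):
    weight = stake / 15_000  # bin weight with exp = 1
    reward = RELAYS_PER_NODE * REWARD_PER_RELAY * weight
    print(f"{stake:>6}-POKT node: {reward:6.1f} POKT, "
          f"{reward / stake:.6f} POKT per POKT staked")
```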

Exactly this. No one is “forced” to consolidate. They are incentivized to consolidate, to be in a higher bin for the same hardware cost. The use of the exponent as outlined in the proposal was specifically for targeting key node counts on the network to support growth, not for offsetting the very incentive the proposal created.

Hi Mark,

Well, this is strange, as I copied the PUP-21 format, which you authored:

New Value: [15000000000, variable see below, 60000000000, 1]

PUP-21 did not define a fixed value; it proposed a way to set the values (which was not clearly spelled out). We are proposing an algorithm to set the values in section 5 of our document.

The fairness issue is not due to relay bias across different bins. The problem is the multiplier value of the first bin. The root of the problem is probably QoS-related, but we have no conclusive proof of this, so we are not claiming anything in that regard.

It is not important why or how the nodes compounded. The linear model is not accurate when it tries to compensate for the increase in the number of relays by means of the increase in the number of sessions.

Hi Andy,

PUP-21 clearly stated:

Motivation 1: Do no harm. To that end we have specified that the foundation adjust ValidatorStakeWeightMultiplier as needed during the transition period to keep the transition from being inflationary or deflationary (that is to abide by WAGMI principles).

The spirit of that motivation applies to the ServicerStakeFloorMultiplierExponent in the same way. If we tune the ServicerStakeFloorMultiplier to keep inflation constant, then we should be able to use all the mechanisms of PIP-22 to meet this goal.

I think that an incentive means you receive a reward if you perform an action (consolidation, in this case). It does not mean you are penalized if you do not perform that action.
Setting the exponent above 1 leads to the same long-term goal: once the whole network is at the top bin, the multiplier for the top bin is 1^X = 1.

The 15% (which is an optimistic value) comes from the flawed setting of the ServicerStakeFloorMultiplier, which expects the increase in sessions to be equal to the increase in relays (see section 4.5), but this is not true (see section 4.2; the result of equation 12 is not 1).
The problem is related to how the Cherry Picker (CP) is affected by QoS, since lower-QoS nodes seem to have a more linear relationship with the number of sessions than high-QoS nodes do (figure 6, table 5, section 4.3 and appendix table 22).
There is no bias in the PIP-22 mechanic itself; your node will receive the same number of relays regardless of the stake amount. The bias arises (probably) from how QoS affects the CP in an increased-number-of-sessions scenario (due to compounding).

Hi Jinx,

As I said to Andy, the current linear setting of PIP-22 not only incentivizes nodes to compound, it penalizes the nodes that do not compound. The problem is this last part.
I think that it is not possible to keep inflation constant and target a specific node count with PIP-22. The model was flawed because it expects a linear relation between the number of sessions per node and the number of relays per node.

The only way of targeting specific node counts while keeping the system fair is by means of the StakeMinimum.

It’s interesting to see this right after @steve said in yesterday’s office hours call that 15K nodes are actually earning a net premium versus their 60K counterparts due to an increase in overall session odds. Maybe he should weigh in here as well.

At the moment, the only thing I’m 100% sure about is that I’m 100% unsure what’s going on with regard to the results of PIP-22. So, we’re running the test I mentioned on the node runner call to hopefully get clean, controlled, unbiased data to draw a final conclusion. For transparency, I’ve shared with @RawthiL the details of our test approach, along with the addresses of the 250 nodes we’re using in the test, so he and his team can monitor and draw their own conclusions. I plan to also share with the Thunder team to get multiple viewpoints and opinions to keep this as unbiased as possible. I’ve asked them not to share anything publicly until the test is done. This is not intended to hide anything but rather to minimize further speculation. In hindsight, I probably should have refrained from speculating myself on the call because, again, I’m sure I don’t know what’s going on at this point.

Regardless of the test outcome, I do feel everyone is aligned. We all want to be confident we understand how PIP-22 is working and trust that it’s working the way it should.

Got it. That’s what I was missing. I’ll take a look on Monday.

We, as C0D3R, fully support this proposal. I didn’t do the math, but the proposed parameter values don’t raise any alarms, either.

  • Small nodes and large nodes do the exact same work. Large nodes getting up to 4x the rewards, and getting them out of small nodes’ pockets (in the form of a constantly increasing ServicerStakeWeightMultiplier), is not fair.
  • Large nodes contribute to the inflation without contributing to the network’s servicing capacity. The purpose of Pocket is not minting tokens. Pocket has a mission, possibly a world-changing one. Smaller nodes contribute to this mission more (up to 4x more!) than the mega nodes. Having more nodes in the network is NOT a bad thing, provided they can be run cost-effectively. It gives more capacity, more variety (in terms of supported chains and locations), and more resiliency. Let the node runners figure out how to run them efficiently and cheaply.
  • A well-functioning, robust network needs to be diversified. Today, the only economically sensible option is the 60k minimum stake. Why can’t we reward smaller nodes adequately so that we can have more diversity? 15k nodes have their advantages, such as a lower barrier to entry to the network, diversifying assets among node runners, and being easier to buy and sell OTC, which helps with liquidity. Curbing variety is ultimately not good for the network.

Kind regards,
C0D3R team.

I did do the math. And I suspect you have also.
This proposal (if passed) destroys PUP-21’s entire purpose and severely punishes anyone who paid the price to cooperate with the goal of reducing network operating cost and node bloat. It guarantees the permanent adoption of the 15k node stake. And I suspect you know that as well. Very sad to see you jump on the spin doctor’s “unfairness” narrative. Those of us who paid the price to help the network are the only ones being treated unfairly here.
If you truly don’t see that, then let’s chat, man.

If you look back to the June discussion of PIP-22, you will see that the first version of PIP-22 had no exponent or concept of nonlinear weighting. It was suggested at the time that the adjustment factor we now call “ServicerStakeWeightMultiplier” (SSWM) could be set to one value now to incentivize consolidation and adjusted to a different value in the future to incentivize deconsolidation when the network was once again in need of expansion.

My first involvement with PIP-22 was to prove that, with linear weighting, the economics always favored maximum possible consolidation, no matter how many or how few relays per day a node serviced and no matter how high or low the cost per node was. I introduced the concept of nonlinear weighting and showed how, unlike with SSWM, the optimal consolidation choice is highly sensitive to the setting of this parameter, making it a useful knob to add to PIP-22 for a future time when expansion of the node count is once again needed.

Now, however, is not such a time. Therefore, we opted not to touch this exponent knob in PUP-21, in order to keep the consolidation question simple for node runners. The exponent is the main complexity added by PIP-22. When it is set to 0.7 or 0.5, for example, the choice to consolidate or not is highly individual, depending on the balance of rewards and costs. So long as the exponent is set to 1, however, all the guesswork of how to optimally deploy one’s POKT under management is removed. No matter whether a node is above or below average in relays per day, and no matter whether one’s nodes cost $150 per month or $15 per month, it always comes out more cost-effective to maximally consolidate identically configured nodes wherever possible. This was demonstrated in June.
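
To make that sensitivity concrete, here is a sketch with purely hypothetical reward and cost figures (the flip, not the specific numbers, is the point):

```python
# Hypothetical numbers showing why exp = 1 always favors consolidation,
# while exp < 1 makes the choice depend on the balance of rewards and costs.
def monthly_net(n_nodes: int, stake: int, exp: float,
                base_reward: float, cost_per_node: float) -> float:
    """Net POKT/month for n identical nodes. base_reward is what a single
    15k-stake node would earn per month (hypothetical)."""
    weight = (stake / 15_000) ** exp
    return n_nodes * (base_reward * weight - cost_per_node)

for exp in (1.0, 0.7):
    for base_reward, cost in ((500, 100), (150, 100)):
        four_15k = monthly_net(4, 15_000, exp, base_reward, cost)
        one_60k = monthly_net(1, 60_000, exp, base_reward, cost)
        best = "consolidate" if one_60k > four_15k else "stay at 15k"
        print(f"exp={exp}, reward={base_reward}, cost={cost}: "
              f"4x15k nets {four_15k:.0f}, 1x60k nets {one_60k:.0f} -> {best}")
```

With exp = 1 the consolidated node always comes out ahead by exactly the three saved per-node costs; with exp = 0.7 the outcome flips with the reward-to-cost ratio.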

Summarizing the above in a single sentence: the original draft of PIP-22 was linear weighting; we added a nonlinear knob for a future time when we may need it to expand node count, but opted not to use it for now. How, then, can you possibly claim that it was not the intended goal of PIP-22 to maximally incentivize node runners who could consolidate to do so?

Reading the dialogue, I do not think the words “force” and “incentivize” mean what you think they mean. How can you claim PIP-22/PUP-21 forces consolidation when there are currently 20k nodes staked at 15k? How can you claim that creating an economic structure that favors consolidating over not consolidating is “unfair” and “forcing” rather than “incentivizing”, when that is the very meaning of incentivizing? (I’m only addressing this overreach in your case building; I know I haven’t addressed your 15% concern yet – that will be a separate discussion, which I hope to get to tomorrow.)

No! This is highly irresponsible. SSWM can be set “algorithmically” by PNF to meet WAGMI targets precisely because it only affects aggregate rewards and has zero bearing on a node runner’s internal decision making about how to optimally deploy POKT under management.

The optimal deployment strategy, on the other hand, is highly sensitive to the value of the exponent. Therefore, algorithmically setting the exponent would cause node runners to be constantly jerked around trying to keep up with an ever-changing “sweet spot” for configuring their nodes, according to the whims of the algorithm updates.

SSWM is just an extension of RTTM, which was already an algorithmically adjusted PNF parameter. The only reasons SSWM exists, instead of just setting the one parameter RTTM, are (1) that WAGMI/FREN seeks to adjust RTTM on a ~1-month time scale, whereas finer tuning is needed during the transition period of PIP-22, and (2) the convenience of not having to reach back into WAGMI/PUP-13 to redefine the WAGMI procedure for calculating RTTM.

Before PIP-22:
reward_n = RTTM * relay_n

After PIP-22:
reward_n = (RTTM / SSWM) * (bin_n ^ exp) * relay_n
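
A minimal sketch of the two formulas above (parameter values are hypothetical; bin_n is assumed to be 1 for a 15k-stake node up to 4 for a 60k-stake node):

```python
# Sketch of the per-node reward formulas quoted above; all values hypothetical.
RTTM = 0.0015  # RelaysToTokensMultiplier (hypothetical value)
SSWM = 2.2     # ServicerStakeWeightMultiplier (hypothetical value)
EXP = 1.0      # ServicerStakeFloorMultiplierExponent

def reward_pre_pip22(relays: int) -> float:
    return RTTM * relays

def reward_post_pip22(relays: int, bin_weight: int) -> float:
    # bin_weight: assumed 1 for a 15k-stake node, up to 4 for a 60k-stake node
    return (RTTM / SSWM) * (bin_weight ** EXP) * relays

# A 60k node earns 4 / SSWM times its pre-PIP-22 reward for the same relays:
print(reward_post_pip22(10_000, 4) / reward_pre_pip22(10_000))  # -> 4 / 2.2
```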

To think you can algorithmically adjust the exponent parameter, which gets uniquely applied to each node’s bin value, the way PNF algorithmically adjusts the global RTTM and SSWM values, is playing with fire.
