PUP-25: Non-Linear Stake Weighting For PIP-22

Agreed. The overarching guidance of PUP-21 was to adjust SSWM to meet WAGMI emission targets, and the suggested methodology should be tweaked to account for the effect you mention. When this second-order tweak to the SSWM-setting methodology is applied, the result will be to hit WAGMI emission targets and deliver exactly the right amount of base rewards for 15k nodes as compared to pre-consolidation numbers. I don’t think PNF even needs a PEP in order to make the needed tweak, since PUP-21’s overarching guidance to set SSWM to meet WAGMI targets trumps the suggested methodology. But even if the methodology is to be updated in a PEP, all that is needed is the tweak in how SSWM is set. There is no need or justification to change the exponent at this time, except to accomplish the totally separate goal of favoring 15k nodes over consolidated nodes and to begin incentivizing nodes that already experienced a 21-day downtime to deconsolidate and undergo yet another 21-day downtime to keep up with the radically altered reward structure caused by changing the exponent to 0.7.
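For readers following along, the mechanics under discussion can be sketched roughly as follows. This is an illustrative toy, not the on-chain formula: the function names, the 4-bin cap, and all the numbers are assumptions made for the example.

```python
# Illustrative sketch (not the actual on-chain formula) of setting SSWM so
# that total emissions hit a WAGMI-style target after PIP-22 consolidation.
# All names and numbers here are hypothetical.

def stake_weight(stake: int, exponent: float = 1.0, floor: int = 15_000) -> float:
    """Per-node reward weight: number of 15k bins (capped at 4), raised to
    the PIP-22 exponent."""
    bins = min(stake // floor, 4)
    return bins ** exponent

def set_sswm(node_stakes, target_daily_emission, relays_per_node, reward_per_relay):
    """Choose SSWM (a normalizing divisor) so that total weighted rewards
    sum to the emission target."""
    unscaled = sum(stake_weight(s) * relays_per_node * reward_per_relay
                   for s in node_stakes)
    return unscaled / target_daily_emission

# Example: 10 consolidated 60k nodes and 40 single 15k nodes.
stakes = [60_000] * 10 + [15_000] * 40
sswm = set_sswm(stakes, target_daily_emission=1_000_000,
                relays_per_node=10_000, reward_per_relay=2.0)
```

Under this sketch, the "second-order tweak" amounts to replacing the uniform `relays_per_node` with per-bin relay volumes, since bins with different average QoS serve different shares of traffic.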

Separately, can you please confirm, or prove otherwise, my assessment that the data over the last week indicates the second-order effect you identify is within a ±5% bound? By my calculation it sits at ±3%, with the main uncertainty being the applicability of the completely separate effect that Andy identified, where his provided script does not exclude unresponsive nodes (i.e., nodes that are likely excluded from the session selection array and therefore ought to be excluded from factoring into the bin average).


Am I having memory issues?

I recall that the original referenced repository contained an analysis which concluded that there was insufficient QOS information to draw any conclusion on a per provider basis.

Can’t seem to locate that now.

Will need PoktScan to answer what was and wasn’t in the repository; I don’t recall. I’m not so sure about there being an insufficiency of information. I would guess it was more a matter of imprudence - not insufficiency of data - to publicly report provider-specific QoS information.

PoktScan graciously provided me some anonymized data so that I could do roll-up QoS analysis. Here is what I have found for median latency on the polygon chain:

Latency (ms)    Asia Pacific    Europe    N. America
End of June          579           268        397
2nd wk Sep           413           211        295
Delta (ms)          (166)          (57)      (102)
Delta (%)          -28.6%        -21.1%     -25.7%

Methodology: (1) For each date and each AWS gateway (not all gateway data was provided), obtain the volume-weighted median latency across 14 anonymized providers. (2) Average this result across all available AWS gateways in a region (3 in AP, 6 in EU, 4 in NA). (3) Average this result across the last three days of June, and again for the last three days of provided data, ending Sep 12.
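The roll-up just described can be sketched as follows. The data layout (per-gateway, per-day lists of provider median latency and relay volume) is an assumption for illustration; the actual pipeline is not public.

```python
# Sketch of the three-step roll-up described above. The input layout is
# assumed: {gateway: {day: [(latency_ms, relay_volume), ...]}}.

def weighted_median(values, weights):
    """Median of `values` where each value counts `weights[i]` times."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= half:
            return v
    return pairs[-1][0]

def region_latency(gateway_days):
    """Step 1: volume-weighted median per gateway/day.
    Step 2: mean across the region's gateways for each day.
    Step 3: mean across the days under study."""
    day_medians = {}
    for gw, days in gateway_days.items():
        for day, samples in days.items():
            lat = weighted_median([l for l, _ in samples],
                                  [v for _, v in samples])
            day_medians.setdefault(day, []).append(lat)
    per_day = [sum(v) / len(v) for v in day_medians.values()]
    return sum(per_day) / len(per_day)
```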

I don’t know about the other providers, but I was still changing domains and stakes until September 23rd. Since the data is “anonymized” I have no way to tell whether it has meaning or not.

Since this proposal claims to

  • Enable a stake compounding strategy that is beneficial for the quality of service of the network.

I think that getting verifiable, relevant and sufficient QoS data sets would be a priority. I spend a decent amount of time tracking the various providers without the benefit of “private” access to gateway logs. It’s not easy, and I question how accurate the categorization of the 14 anonymous providers is.


I think that you are not understanding the problem. Please read the report; the problem is not in the QoS of the different bins, nor in the return of POKT per POKT invested.
The problem is that the model used to justify the linear weighting is wrong. The linear stake weight model cannot meet the fairness and inflation objectives at the same time. Right now it is only fixing inflation, at the cost of unfairness.

We have seen a major change in the network in the last 15 days. The bins’ QoS is stabilizing, making the original algorithm accurate in its goal of keeping inflation constant. This change has nothing to do with PIP-22.
This improvement in QoS dispersion among bins neither changes nor invalidates our findings. Our data is from before that time on purpose, to avoid large changes in QoS from affecting the calculations.

I don’t know which original repository you mention. Our first report was conclusive, and we showed a ~30% difference between the expected relays and the simulated relays. That document was disregarded for being “just a simulation”. That is why we are doing it now using only network data.
In our last report we don’t make specific claims about QoS, because that would have required us to model and support such claims (as any claim should be). However, it can be seen in the report that a difference exists between low- and high-QoS providers, specifically in how accurate a linear model based on the increment of relays is. This difference does not tell us that the low-QoS nodes can be linearly modeled; it just tells us that a linear model fits better for them than for the high-QoS nodes.
The values of the linear determination coefficients (Tables 5 and 22) for each group of nodes are clear. The increment of sessions is not linearly related to an increment in relays. Low-QoS nodes seem to follow a more linear relationship than high-QoS nodes, but we cannot say that they do. The very existence of this difference proves that linear modeling is not possible, since QoS clearly affects the increment of relays due to the increment of sessions.
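For reference, the "determination coefficient" being discussed is the R² of a least-squares linear fit of served relays against session count. A generic sketch of the metric (not the report's actual code or data):

```python
# Coefficient of determination (R^2) for a least-squares linear fit of
# served relays (ys) against session count (xs). Values near 1.0 support a
# linear model; values well below 1.0 do not.

def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

Perfectly linear data gives R² = 1; a relays/sessions relationship with a strong non-linear component pulls R² below 1, which is the pattern the report attributes to high-QoS nodes.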

We refrain from posting provider-specific data since some providers may want to keep their numbers private. Node runners are generally secretive regarding their operation and this data could be sensitive.

The included providers were selected by their domain name. Your domain names (benvan*.*) disappeared from the network. If you have new ones, they are too recent to be regarded as “stable”.

The access is not private, many node runners have access to this information. We only collect and save the data (not an easy nor free process).

You can find the answer to this question on page 16 of the report:

The QoS of the node runners was obtained using the CP data. The average QoS is obtained as an average of the median response time in the different cherry picker gateways, weighted by the gateway number of relays.

which is almost the same process that @msa6867 described earlier:

The only difference is that we also used an average across all the days under study, weighted by the traffic on each day.

This is a response to @msa6867’s presentation on the node runners call. The presentation touched on some sensitive points of this proposal, some of which are not accurate.

You can hear the presentation here:

and see the slides here:

I will try to keep this as short as possible, since the justifications are all in the document provided with this proposal.


  1. Mark talks about boosted rewards. They do exist, but they are wrongly estimated.

  2. Multiplying the staked POKT by two means that the rewards are divided by two. Wrong; this is the point that we are proving: it all depends on your QoS. The linear relation does not hold.

  3. In slide 13 an example of how the Cherry Picker assigns probabilities is shown. This example only analyzes the CP after it has ranked all the nodes in the session.

Finally, it was insinuated that we had an “agenda”; not sure what he meant by that. We have disclosed our affiliations, and our business is no secret.

What we find misleading is claiming that:

This is not credible, as @msa6867 is the author of PUP-21, which is being replaced if this proposal passes, and he probably also owns nodes which are probably staked at the maximum bin (just a guess, but it goes in line with his arguments).

(Long version now, you can stop reading if you want)

  1. Mark talks about boosted rewards due to the reduction of nodes, and says that this boost disappeared when PIP-22 was activated, due to the base node multiplier.
    This is correct; what we are saying is that the applied correction is wrong.

  2. In the presentation an example is given where doubling the staked POKT results in the served relays being cut in half (regardless of the existence of PIP-22). This is the thesis of the linear model that we talk about in section 4.5. The only difference is that instead of a relation between number of nodes and served relays, he uses a relation between staked POKT and minted POKT.
    We prove that this is wrong.
    This conclusion does not take into account the effects of QoS on the CP. For this model to be accurate the determination coefficients (D) should be near 1.0, but they are far from it (see Table 5 in the main document and Table 22 in the appendix).

  3. In slide 13 an example of how the CP assigns probabilities is shown.
    The conclusions are correct; there is no change in probability. This is not our point.
    The problem here is that the given example is just a snapshot of how the CP works after it has already measured and ranked the nodes. In real life the CP goes through a transitory state where it measures each node. During this process the CP gives each node an equal opportunity to serve some relays. None of the models provided by @msa6867 take this effect into account; our simulations did.
    Although we cannot fully describe the workings of this transitory state of the CP, we can measure its effects. It is during this transitory state that low-QoS nodes receive relays more often (simple math here). Even though this is a short period of the session, its effect is not negligible. More sessions means more transitory states for a low-QoS node, and hence more gains. This can explain the differences observed in the determination coefficients for low- and high-QoS nodes (once again, Tables 5 and 22). Low-QoS nodes’ served relays are more linearly related to the number of sessions (but what we want is to be fair to high-QoS nodes, not low-QoS nodes). Finally, even if this effect were negligible, the difference in the determination coefficients cannot be explained by the model used in the presentation.
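The claimed mechanism can be illustrated with a toy simulation. This is not the actual Cherry Picker code; the phase lengths and weights below are invented for the example.

```python
import random

# Toy model of the claimed effect: each session opens with an "exploration"
# phase where all nodes are served equally (the CP is still measuring them),
# followed by an "exploitation" phase that strongly favors high-QoS nodes.
# All parameters (phase length, QoS weights) are hypothetical.

def simulate(sessions, relays_per_session, explore_frac, qos_weights, seed=0):
    rng = random.Random(seed)
    n = len(qos_weights)
    served = [0] * n
    for _ in range(sessions):
        explore = int(relays_per_session * explore_frac)
        for _ in range(explore):
            served[rng.randrange(n)] += 1                       # equal chance
        for _ in range(relays_per_session - explore):
            served[rng.choices(range(n), qos_weights)[0]] += 1  # QoS-ranked
    return served

# Node 0 is high QoS, node 1 is low QoS. The low-QoS node's total comes
# almost entirely from exploration windows, so it scales with the number of
# sessions rather than with its QoS weight.
served = simulate(sessions=100, relays_per_session=50,
                  explore_frac=0.1, qos_weights=[0.95, 0.05])
```

In this toy, the low-QoS node earns noticeably more than its QoS weight alone would predict, which is the qualitative effect the report attributes to the transitory state.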

I’m too stupid to follow all of this and, to be frank, I don’t have the time to follow all this. All I can say is our economic and rewards model smells, and it smells very bad. These are the side effects of passing PIP-22. When it comes to the point where our whole community barely understands what’s going on, and fundamental software (indexers and statistics tools) takes a lot of time to rewrite/adjust, we should consider taking this more seriously. This is tech debt we’re paying, right off the bat.

Thank you Poktscan team in general for digging deeper into the side effects of PIP-22 and taking the time to provide data to back up your claims. Whether it’s actually valid or not, to reiterate, this whole ordeal points to a deeper root problem.

I can’t confidently support this proposal because I have no idea what’s going on, but at the same time, I don’t want to discredit all the hard work y’all have done to prove this out. That leaves me in a dilemma - and something to consider, as I’m sure I’m not alone here in this. Perhaps, this is a strong indicator we should try to fix the problem at its core - stake-weighted sessions or something even better.


I think you are missing the point. Quite a number of times, verbally and in disclaimers at the bottom of the slides, I emphasized that what I presented was not yet taking into account the PUP-25 concern re different QoS averages in different bins. Perhaps you and your team understand all the nuances involved, but the average node runner has been left with confusion and a completely false narrative as a result of the oft-repeated mantra of “unfairness” and 15k nodes being “penalized”. Perhaps the sowing of this confusion was unintentional. Or perhaps it was calculated. I do not know. It was my desire to clear up the misperceptions that exist in the community PRIOR to even being able to begin holding a level-playing-field discussion re PUP-25.

True. And the point is??? My perception is that most of the participants on the call have no clue what the technical issue is re different QoS in different bins, and slide 13 was trying to give a simplified pictorial to help those on the call understand how different QoS in different bins can lead to the mint rate being off when calculating SSWM from avg bin size. And you are going to nitpick that I did not crowd every last nuance onto the slide and render it unreadable in the process?

I’d have to go back and listen. I’m pretty sure I was talking re myself, that I did not have an agenda or anything to gain one way or the other; I’m pretty sure I didn’t state that your team or service did; and I will continue to refrain from saying that; each evaluator must make that determination for themselves. However, I am more than glad to go on record to say that, IMO, Cod3r’s sugary comment a few days ago in favor of this proposal was completely self-serving, as they are by far the biggest beneficiary of the POKT reward redistribution that this proposal is seeking to undertake. I believe that @BenVan has already pointed this out in so many words.

Unbelievable! Are you really accusing me of having a bias based on ego to preserve values proposed in PUP-21?? It would have been impossible for me to continue contributing to this ecosystem in the face of some of the community backlash I have received without a thorough dying to ego. The moment I feel any parameter setting in PUP-21 needs changing based on system circumstances, I will be among the first to propose it. And I have been very candid about the need to update the method to calculate SSWM. I have been in discussions with @JackALaing , @Andy-Liquify , @Cryptocorn and @KaydeauxPokt since August 31 on this topic - monitoring, assessing and considering best course of corrective action. But dropping exponent to 0.7??? That, IMO, is not a bona fide “solution” but a complete killing off of consolidation and a return of the system to all 15k nodes. Again, as @BenVan has already pointed out.

If “ego” had really been a factor, you may as well have claimed that since I was the one who all but insisted that the exponent get added to PIP-22 (to have a knob in the future to incentivize de-consolidation), I would be eager for any chance to change its value to exp<1 to justify that the added complexity was a worthwhile add.

I have been completely transparent about my holdings in other places. I have one node staked with a custodian at max bin, but my SLA is pure rev share, so it makes absolutely no difference to me, reward wise, whether that 60k is staked as one 60k node or four 15k nodes. In addition I have some pokt staked with two fractional pool providers, one of which runs 15k nodes almost exclusively and one of which has nodes at all four bin levels. Thus I have absolutely nothing personal to gain from keeping exp=1; rather, a slight edge, if anything in lowering the exponent. But I don’t advocate lowering the exponent, because we are still in a season where we do not need to encourage the growth of node count.

I would be absolutely thrilled if we could get stake-weighted sessions back on the table. I think dev-wise it is completely doable, and worth the spend, given a year or so before v1. I believe we can do it without touching either the session-selection code or the cherry-picker code.

I completely agree with this :point_up: The amount of time and effort having to go into accounting for all the changes is hurting the community and project IMO. If the community can’t understand basic rewards, and only technical insiders truly understand the nuances, this creates a natural blocker for more folks getting involved in POKT and in node running.

My reservations regarding weighted-stake mechanisms (whether PIP-22 or something like Good Vibes) were that they were going to make POKT significantly more complex and make the economics unapproachable. That has indeed happened, and as @poktblade mentioned, the technical debt is already significant, and there is still a lot of work to do to iron this out.

Sooo… Instead of further complicating things by neutralizing PIP-22 and doing weighted stake via weighted session selection (as both @poktblade and @msa6867 have suggested), why not just up the minimum stake to 30k and get our economics back to where everyone can track and understand them?

Wouldn’t this require unstaking?

Yes, but the whole network would have to progressively unstake, just like when all nodes were at 15k. Upping the min affects everyone (except for the 481 nodes that are at 30k… they don’t have to unstake). The network is basically in the same position we were in prior to PIP-22, where upping the min to 30k would affect all nodes equally. If nodes today staked at 30k… there would still be fewer than 22k nodes, which means we still won’t have significant chain bloat (like we did with 45k+ nodes in the summer), and we’d be back to classic POKT economics.

If team technical efforts were instead put toward the lite client (which most of the network is running on already), then those 22k nodes could theoretically have the resource footprint of a few thousand nodes. PIP-22 was originally proposed as a way to reduce the network’s cost, but from my understanding the lite client can do more on that front.

I’m NOT convinced weighted-stake thus far has been a net positive for POKT, because of the divide it is creating. From my understanding, most nodes are on a lite client anyways. Though PIP-22 has had good effects in addressing chain bloat, it has not produced the desired economic effect on the token… which was the driving reason to embrace node consolidation. PUP-25 is now trying to patch 2nd-order issues that came with PIP-22, which would complicate our economics even more, by huge proportions.

What is the reason we should NOT pursue upping the min to 30k, since it will affect almost all nodes evenly??

What are your thoughts on straight up changing min stake to 30k with no change to PIP-22 parameters? No one at 30, 45 or 60k would need to unstake, and those at 15k would at most have to unstake half (or add new stake) to get up to 30, so between summer and now, raising minstake would subject no group to two episodes of unstaking. (The much tighter clustering of node stakes causes any “second order effect” to all but disappear.)

Or why not just move it to 60k?

From a system perspective it makes sense, but at what cost? That is what I do not know. Who gets driven from the ecosystem who cannot or will not consolidate? If it is just laggard misconfigured nodes that have been all but forgotten, then great… but what if we end up throwing out quality diversity that is desirable for network health and robustness. Those are the questions I’d like to see answered before passing a motion to raise minstake. I would not presume that the groups most affected by a decision to raise minstake are well represented or vocal in the various social media channels, so we as a community ought to be proactive in doing discovery in this area if we are going to seriously explore that avenue.

If we do this, then we’re going to see mass unstaking again, de-consolidating the network (except for those folks trying to be validators), at which point I believe we might need to increase the validator percentage of the relay-pie again, thereby creating varying incentives at the edges (15k and 60k+).

I can’t pretend I know what the solution is here, but it feels like we released (or canary-released) Stake-Weighting, Sustainable Stairway, FREN, and LeanPOKT all so close to each other that we’re not able to take into account the effects of each change in isolation.

However, with that being said, changing the minimum stake to 30k, or 45k, or 60k, seems like we’re just kicking the can down the road, and penalizing non-whales or long-term holders, which would have two effects:

  • Bad PR for the project
  • Force those who still are interested to unstake and move their smaller stakes to the few providers that support 0-node-minimum staking with revshare (e.g., poktpool, sendnodes, tPOKT, c0d3r), which would further centralize the network amongst larger providers.

I reiterate that I don’t have a solution here, but I am just expressing a few personal views regarding some of the potential solutions mentioned here.

I also do agree with @poktblade that our economics are hard to follow. I hold advanced degrees in physics and even my head is spinning with some of the stuff we’re doing here. While I appreciate the work that @Andy-Liquify, @msa6867, and @adam have done over the course of the year to get inflation and consolidation sorted out, I do think we need to figure out something more tangible, seeing as two of the larger node-providers here (POKTScan, C0D3R) are in support of this proposal.


Hmmm. It’s a nice soundbite, but is it true? Sustainable Stairway is completely orthogonal to the rest and hits the demand side, not the supply side. While it is rather unfortunate that FREN hit at the same time as stake-weighting, it is rather trivial to apply an appropriate discount to adjust for FREN lowering the RTTM. Please refer back to slides 6-8 of the slide deck posted above for how to adjust for various factors (including RTTM) that have changed since June. And see slides 9-10 to see actuals for September for comparison.
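The "appropriate discount" amounts to a simple rescaling by the ratio of RTTM values. A minimal sketch; the numbers below are hypothetical, not the actual FREN parameters:

```python
# Normalize an observed reward for a change in RelaysToTokensMultiplier
# (RTTM), so pre- and post-FREN numbers can be compared directly.
# The RTTM values used here are made up for illustration.

def normalize_rewards(observed_reward, rttm_then, rttm_now):
    """Rescale a current reward to what it would have been at the old RTTM."""
    return observed_reward * (rttm_then / rttm_now)

# e.g. if FREN halved RTTM, today's 40 POKT/day corresponds to 80 POKT/day
# in pre-FREN terms.
pre_fren_equivalent = normalize_rewards(40.0, rttm_then=2.0, rttm_now=1.0)
```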

But point taken re LC and stake weight hitting at the same time.

Both LC and PIP-22 are v0 solutions to provide infra cost relief in the interim between now and the v1 release. Both go away in v1. Note, however, that it was determined quite a while ago by the powers that be (long before Andy introduced PIP-22) that stake-weighted rewards will be a permanent feature of v1 (not in the form of PIP-22, but via a completely different mechanism). Is it really, therefore, out of line or too complicated for node runners to factor in how vertically vs. horizontally they want to deploy their POKT, since they are going to have to do it in v1 anyway?

I realize this is veering off topic from the technical concerns poktscan has raised. But that raises the main comment/question I guess I would have for poktscan: the WAGMI proposal changed the rewards structure by over a factor of 3. FREN changed the reward structure by another factor of 2. LC and PIP-22 both sought to provide a factor of 2 or more in cost savings to the network, and both seem to pretty much be achieving that. By contrast, from July until the present, poktscan has invested how many man-months now contending over what - a potential ±10% effect that may meander a bit as node runners upgrade their QoS but by and large will average over time pretty close to zero? Is all that effort really worth it? At what point is it making a mountain out of a molehill? Further, as @BenVan pointed out a couple of times all the way back in July, it seems the main concern keeps coming back to the cherry picker. Has it ever occurred to poktscan to unleash the tremendous amount of analytical talent you have on optimizing the cherry picker rather than being single-focused on lowering the exponent?

Just wanted to give a shout-out here to @cryptocorn for spearheading FREN…

Hmm, is this true? Should the DAO feel obligated to take an action just to satisfy a large node runner? Even if the action is to skew rewards in their own favor? The problem I have with the Cod3r comment is that it does not even address the technical, second-order, ±10% effect that PUP-25 is all about… by their own admission they “have not done the math.” The quotation you pull from is an excellent argument against raising minstake, but it is almost completely a non sequitur as relates to the current state of the system of PIP-22 with PUP-21 parameters. Let’s take a look:

“A well-functioning, robust network needs to be diversified.” True. As it is today with the current parameter settings; no need to change exponent value in order to achieve what the network already has.

“Today, the only economically sensible option is 60k min stake.” False. A quick look at poktscan explorer will reveal 21k nodes who have found various economically sensible reasons not to stake to 60k

"Why can’t we reward smaller nodes adequately so that we can have more diversity?" False premise. Please refer again to slide 10 to see that smaller nodes are rewarded today pretty much identically to what the reward would currently be (after WAGMI, FREN, etc.) if PIP-22 had never passed. If Cod3r feels rewards are not adequate, their beef is with FREN / emission-control measures, not with PIP-22, and a completely separate proposal should be put forward to change the way RTTM is set so as to raise emissions.

“15k nodes have their advantages (such as lower barrier to entry to the network, diversifying assets among node runners, easier time to sell and buy them over OTC, which helps with liquidity). Curbing variety is ultimately not good for the network.” True, true, and true. And all completely non-sequitur as relates to PIP-22/PUP-21 (but are good discussion points re any discussion to raise minStake.)

Thanks for the quick response. Will answer in turn.

  • You’re right, it is a nice soundbite, and you’re right that Sustainable Stairway is orthogonal. I typed this up very late last night, so my faculties were not 100% their sharpest.

  • Re @Cryptocorn - Thanks for giving credit where credit is due.

  • Should the DAO feel obligated to satisfy a large node runner? No, absolutely not. HOWEVER, I believe the majority of DAO votes are node runners for either themselves or for businesses they built on top of the protocol, so even if I don’t think we should optimize for them, they do have a large portion of the vote (if memory serves me right) and may inevitably skew the DAO to vote one way (or may have already).

As for this soundbite of yours:

“Today, the only economically sensible option is 60k min stake.” False. A quick look at poktscan explorer will reveal 21k nodes who have found various economically sensible reasons not to stake to 60k

I am someone who still has many 15k nodes because the places I have them hosted are not charging me any differently for having, for example, four 15k nodes vs one 60k node, so I never had to unstake and incur a 21 day penalty, so I chose not to. So this assumption of yours isn’t completely true, at least not in my case, at least with respect to 15k vs. 60.

Moving on, I believe you have rightly deduced that the issue may not be with PIP-22/PUP-21, but with the other proposals, and with your final quote about retaining 15k nodes - I agree here wholeheartedly.

I think what I’m trying to get to here, albeit poorly, is what @poktblade and @shane said in plain-english:

I completely agree with this :point_up: The amount of time and effort having to go into accounting for all the changes is hurting the community and project IMO. If the community can’t understand basic rewards, and only technical insiders truly understand the nuances, this creates a natural blocker for more folks getting involved in POKT and in node running.

This, at the end of the day, may be a documentation/information-dissemination issue more than anything else, and what we’re doing here is going back and forth with MSA on one side and POKTScan on the other. This might be a good place for PNF to outsource the creation and documentation of all of these changes, including explainers on how all these changes work, pre-post, and how they work in combination for each other.


I agree that this discussion has dragged on, and it is hard to say that it is really useful for the community.

I was hoping to get hard feedback on the analysis that we provided, but I ended up discussing the same simplified (and flawed) model over and over.

  • No challenging point was raised against the metrics that prove that the fairness issue exists.
  • No new supporting arguments were presented for the linear model - specifically, arguments that can reconcile the current values with the metrics observed in the report.
  • The only point analyzed was the QoS inhomogeneity across the different stake bins (section 4.4). This was not the central issue that we raised, and it was not solved, due to PIP-22 mechanics.

I really was expecting you to read the long version.

You should not claim to be unbiased; we could make the same claim, as our reports were done in good faith. Instead, you should disclose your relation to the past proposals and your investments. The community will then decide whether there is bias or not.

I feel that our discussions are going nowhere; you are focusing on the wrong topic and disregarding most of the analysis that we made. Once again you are labeling the issues we raise as “secondary effects”.

I totally understand your position. Our argument is technical and is a follow-up to previous technical issues that we raised during the PIP-22 voting phase. Sadly, some topics cannot be simplified without making big mistakes (that’s precisely what we think happened with linear weighting).

I’ve never been a fan of PIP-22; my intention with this proposal was to correct the unfairness that we observe without changing the rules once again, changing the status quo as little as possible.

At POKTscan we have received lots of questions around PIP-22 and created lots of metrics to address the new network landscape. It was not easy to keep the numbers true and the community happy, and we are still going through iterations on how to show the numbers to better serve our users. Sadly, I cannot say that after a month of PIP-22 the community really gets even the simple stuff behind it (leaving aside our in-depth analysis). This is not a judgment on the community but on ourselves and the (perhaps unnecessary) complexity we are pushing into the system.

If you ask me, personally, I would have been more comfortable with raising the minimum. Back in PIP-22 discussion times that was my main concern: was PIP-22 a twisted way to up the minimum? They told me that it was not. Sadly, these were all talks outside the forum (a lesson learned here).

I think that the answer is the unpopularity of the measure. But it would be fair in the end. Upping the stake only changes the entry level and then keeps things fair and simple.

We have analyzed the Cherry Picker in depth. You might like the CP or not, and it might not be perfect, but it is working and is not doing things it was not intended to do - at least not in sensible amounts.
We went even further and modeled the whole thing using a statistical model. We backed the model against real-life sessions and found little deviation. I can say that the Cherry Picker is working as it was designed. If we do not unleash our holy wrath upon the CP, it is because we have not found a reason to do so. Maybe it is because we don’t have enough documentation? Can’t say. As @ArtSabintsev pointed out, it is hard to find documentation of the changes and the features. Even PIP-22 / PUP-21 documentation is lacking.

We focus on the PIP-22 because:

  • It changed the rules for those who did not want to enter the consolidation game.
  • It is effectively raising the min-stake slowly, bleeding non-consolidated nodes in the meantime.
  • We warned the community that this could happen during the PIP-22 voting phase. Sadly our voice was not loud enough.
  • We talked to the PIP-22 proponents before the voting phase to warn them about this issue.
  • We accepted the proponent’s concern that our arguments were based on simulations and waited to have data before returning to the subject.
  • Opinions based on data are essential for building a solid project.

What does that accomplish? My entire argument is that having weighted stake with variables that can be tweaked to show preferential treatment towards a weight class creates class wars and makes POKT significantly more complex to understand.

How is nuking 15k nodes going to address either of those concerns? Nuking just one of the node classes to the benefit of the other classes is literally the opposite of what I’m suggesting.

If you want to argue that all nodes should be required to be in the 60k bucket, that in principle is what I’m suggesting… vs having tension between different weight classes. At 60k, PIP-22 would be neutralized anyway, because all nodes would be in the same bucket (which is the way it should be). We can debate what the min stake should be, but 60k seems ridiculous when we can accomplish the intended effect of PIP-22 with 30k and the lite client.

What “can” are we kicking down the road with upping the minimum stake? I’m not tracking your meaning.

Regarding your effects… Bad PR is already upon us, given the tension in the community over which node class should be favored and in what way. Trying to “balance” the classes because of second-order effects of PIP-22 is why we are here today.

The argument of centralization can be made about any change to the network. Changing our economics so that everyone has to rely on the authors to understand what’s happening results in much more “centralizing” in POKT.

BTW, POKTscan and c0d3r are the only network monitoring tools folks have, and they can’t even agree on what the network average per node is. Having an economic model that can’t be easily understood will drive centralization of control to technical insiders who have data, or data skills, that the rest of the ecosystem doesn’t have.



That is where we are today. Long-time POKT participants and core contributors can’t follow our current economics without getting confused by the results… so that will lead to centralization, where those with the resources and teams to figure these nuances out will take the business regardless. That is far more centralizing in my mind than fairly requiring ALL to be in the same economic bucket, where all node runners are on the same team and treated equally.


This is getting pretty thick man.
Every single aspect of your stance has been challenged and successfully debunked.
PIP-22 works exactly as designed and is in no way “unfair”.
No matter how many times you repeat your claim or how many pages you add to your PDF, you have not “proven” a single thing. Your data set is insufficient (in size), inappropriate (in scope), and your interpretation is biased.

There is only one source of unfairness in the network and it is intentional. The Cherry Picker rewards high QOS nodes more than low QOS nodes. The fact that your “metrics” can’t even confirm that simple fact is a further expression of its inadequacy.

What? No “new” arguments? How about the “old” (and still obvious) argument? The code rewards the 15, 30, 45 and 60k bins EXACTLY fairly. Daily and weekly bin analyses by provider show this clearly.

I don’t know if this is ego (having fallen in love with your conclusion) or agenda (PoktScan has almost exclusively 15k nodes), but whatever the reason, this proposal seeks to punish those who have already taken a 21-day income loss in order to support the network, and to force us to take another 21-day loss to get into the 15k bucket so that we can receive fair treatment.

This proposal should be withdrawn.


100% agree with every word said by Benvan above.

The only direction ServicerStakeFloorMultiplierExponent should be changed is to >1, to encourage further consolidation, as was originally intended.

Thank you for your answers. Please don’t read into my asking these questions that I support either of those positions; I used the questions to flesh out the thought process behind your original comment. I am taking very seriously and soberly the concerns you and @poktblade have raised regarding tech debt.

@ArtSabintsev asks very good questions regarding what the next course of action should be. I will give that more thought. I know that, from a tech-debt perspective, lowering the exponent is not the solution. In the current (linear) configuration, the one main consideration regarding whether to consolidate 4 similar nodes into one or keep them separate is the 21-day down time consolidation incurs. Apart from that factor, you might as well consolidate. And as I said above, that relatively low level of complexity (“to consolidate and face 21 days of down time, or not to consolidate and eat the extra infra costs”) is something we might as well get used to, since it is a permanent feature of v1 totally independent of PIP-22. But set the exponent to anything less than 0.9 or 0.95 and the answer becomes much more complex, which is one of the main reasons we avoided it in the first place.
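To make the trade-off concrete, here is a minimal sketch of the bin-weight arithmetic under discussion. The parameter names and the normalization are illustrative assumptions (the network-wide SSWM divisor is omitted, since it cancels when comparing staking strategies); this is not the on-chain implementation, just the `bins ** exponent` shape of the PIP-22 weighting:

```python
# Illustrative sketch of PIP-22 stake-weight math (assumed simplification,
# not the on-chain code). Stakes are binned in multiples of the 15k floor,
# capped at the 60k ceiling, and weighted by bins ** exponent.

FLOOR = 15_000    # ServicerStakeFloorMultiplier, in POKT (assumed value)
CEILING = 60_000  # ServicerStakeWeightCeiling, in POKT (assumed value)

def bin_weight(stake: int, exponent: float) -> float:
    """Reward weight of a single node: (bin count) ** exponent."""
    bins = min(stake, CEILING) // FLOOR  # 1, 2, 3 or 4
    return bins ** exponent

def total_weight(stakes, exponent: float) -> float:
    """Combined reward weight of a set of nodes under one exponent."""
    return sum(bin_weight(s, exponent) for s in stakes)

# Linear (exponent = 1): 4 x 15k nodes and 1 x 60k node earn the same.
assert total_weight([15_000] * 4, 1.0) == total_weight([60_000], 1.0) == 4.0

# Sub-linear (exponent = 0.7): the consolidated node falls behind,
# so deconsolidating back to 15k nodes pays more per POKT staked.
print(total_weight([60_000], 0.7))      # 4 ** 0.7, roughly 2.64
print(total_weight([15_000] * 4, 0.7))  # 4.0
```

With exponent = 1 the consolidation decision reduces to infra costs vs the 21-day down time; with exponent = 0.7 the split strategy out-earns the consolidated one by roughly 50%, which is the reward-structure shift being objected to above.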

In retrospect, I rather regret introducing the notion of nonlinearity in the first place. Poktscan would never have dreamed up, and insisted on, nonlinear weighting as the panacea they are making it out to be if the hook hadn’t been there to begin with.

Poktscan has indicated several times that they plan to update this in an upcoming release to show the actual base reward for the 15k bin rather than the “base rewards times SSWM” that they currently show. Then it will match c0d3r. IMO it was a mind-boggling design choice to report “base rewards multiplied by SSWM”, label it “rewards”, and somehow think that was going to be intuitive or helpful for the community. I try hard to always believe the best, but when I see how much damage this one design choice has caused (so much confusion and fear, node runners looking at their numbers and seeing they were less than the average poktscan showed, etc.), I can’t help but wonder whether the purpose of this bizarre design choice was, in fact, to intentionally sow FUD regarding PIP-22 and give node runners the impression that they were being cheated out of what they were due. In essence it sows the same subconscious message over and over again: “Your rewards ought to have been around 41, and would have been if those PIP-22 folks hadn’t gone and messed up the system. Instead you only get 26.”

Sometimes I wonder: is linear weighting (“I get the same amount of POKT whether I stake my 60k total on 1 node or 4 nodes”) really that complex to get? Or is it just the strain of 2 months of being hammered with incessant messaging that it is broken that is driving the confusion and division in the community?

And replace it with the following:
** The data does in fact match between poktscan and c0d3r. The two have simply chosen different methodologies for reporting it. It is my understanding that in an upcoming release PoktScan intends to update their methodology to match the one currently used by c0d3r, at which point the numbers displayed should be identical. **

Sorry, but I disagree. Poktscan had a fair way of displaying on-chain data, albeit a less intuitive one for some users, and they quickly iterated on it after release and have implemented multiple user suggestions since PIP-22 shipped. Let me be fully clear: the amount of code changes I’d imagine they had to account for in their indexer is not something to be taken lightly, and they made an amazing effort to continue supporting the community by providing on-chain analytics for node runners. UX is difficult; showing the right metrics for a new feature set takes refinement. Give it some time. This is the technical debt I am talking about.

Just false. I don’t see the need to stoop to this level of scrutiny and accusation as a counterpoint to the above proposal, and these types of comments will derail the proposal at hand.