V1 -- How to Enforce Fairness and Decentralization: A first approach

Man, @RawthiL why you always gotta make my head hurt?

Working on understanding the math here, but would like more clarification on this:

My understanding early on was that reward by relay volume was being replaced by the QoS metric. How does your statement relate to that?

1 Like

Yes, but the mechanism is not clear (to me): are they going to increase the RTTM (or UsageToRewardCoefficient)?
If you serve only a single region today, you will work on ~30% of the total traffic, so given the same RTTM after V1 hits, your rewards will drop (outside any QoS measurement changes). It is true that the number of nodes you are competing with will drop too, but does that compensate? I think not, since there is no enforcement of equal node distribution by chain and GeoZone.
Anyway, now I think that ~30% may be a worst-case scenario. It's hard to say how much your rewards will drop, but they surely will.
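
A rough back-of-the-envelope sketch of the argument (all numbers are hypothetical; the only point is that nothing forces a region's node share to match its traffic share):

```python
# Hypothetical illustration of per-node rewards before and after a
# single-GeoZone restriction, assuming the RTTM stays the same.
# All figures below are made up; only the structure of the argument matters.

RTTM = 0.0007                      # hypothetical POKT minted per relay

total_relays = 1_000_000_000       # network-wide relays in some period
total_nodes = 20_000               # nodes competing for them today

# Today: a multi-region node effectively competes for all traffic.
reward_now = total_relays * RTTM / total_nodes

# After V1 (single GeoZone): the node only sees its region's traffic,
# but also only competes with the nodes staked in that region.
region_traffic_share = 0.30        # "~30% of the total traffic"
region_node_share = 0.50           # nothing on-chain forces this to also be 0.30
reward_v1 = (total_relays * region_traffic_share) * RTTM \
            / (total_nodes * region_node_share)

print(f"reward per node today: {reward_now:.1f} POKT")   # 35.0
print(f"reward per node in V1: {reward_v1:.1f} POKT")    # 21.0
```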

2 Likes

Agreed. The last node runners' community call discussed multi-region's negative effect on independent node runners, and in prior node runner calls the negative effect of multi-chain was discussed in depth. Credit to @BenVan for originally arguing this point back in the summer (at least that is where I first heard the economics explained).

V1 is supposed to have this feature where multiple “nodes” use the same blockchain storage. Check out: Persistence - Pocket Network

Completely agree. PIP-22 made POKT significantly more complex, since each relay on the network is no longer worth the same amount of POKT… it is valued according to the stake of whoever served it.

A relay should be worth the same POKT regardless of who served it. That makes the ecosystem easy to understand, easy to develop for, and treats all work the same. Adding undue complexity makes no sense, so I fully agree with @RawthiL's arguments here that V1 already addresses the resource waste of multiple nodes.

By requiring someone to run more nodes instead of beefing up a single node, those with more stake end up serving more relays on the network, which is how it should be. The more POKT you stake, the more work you have to do. Right now, a node staked at 15k gets 1/4 the reward for the same work as a node staked at 60k. RPCs will become a commodity in the web3 space, so POKT should lead the industry by making the cost of an RPC measurable, and a stake-weighted system makes that impossible.
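
To make the 15k vs 60k point concrete, here is a minimal sketch of stake-weighted reward per relay in the spirit of PIP-22 (simplified: the actual bin and exponent parameters are ignored and the weight just scales linearly up to a ceiling; all values are illustrative):

```python
# Simplified PIP-22-style stake weighting: reward per relay scales with the
# stake, capped at a ceiling. Parameter values are illustrative, not on-chain.

MIN_STAKE = 15_000               # POKT
STAKE_CEILING = 60_000           # POKT; no extra weight above this
BASE_REWARD_PER_RELAY = 0.0007   # hypothetical POKT per relay at min stake

def reward_per_relay(stake: int) -> float:
    weight = min(stake, STAKE_CEILING) / MIN_STAKE
    return BASE_REWARD_PER_RELAY * weight

for stake in (15_000, 30_000, 60_000):
    print(f"{stake:>6} POKT staked -> {reward_per_relay(stake):.4f} POKT/relay")
# A 15k node earns 1/4 of what a 60k node earns for serving the *same* relay,
# which is exactly the "relays are not all worth the same" complaint above.
```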

Overall, great work POKTscan for putting all this together in a single place :+1:

3 Likes

Some initial thoughts:

  1. Re stake-weighted rewards, I empathize with the sentiments expressed above, especially the desire @shane mentions to have all relays be worth the same amount of POKT. Seeing that this concept has been part of the v1 spec for longer than I've been involved in the project, I strongly suggest that it be coded as it has been spec'd, and that its usage (or not) be decided by governance action rather than scrubbing it from the code.

  2. That being said, I do not fully understand the v1 reward structure yet, but as I understand it, the idea of directly coupling the work done by a node to the POKT received by that node, the way it is done in v0, is completely thrown out the window in v1 in favor of pooled rewards. Meaning two nodes with identical QoS scores would get an equal share of the rewards available in a pool even if one of the two served 2x or 3x or nx as many relays during the pay-out period as the other (say, as a result of randomness).

  3. Getting rid of stake-weighted rewards and limiting (or not limiting) regions per node and/or chains per node does absolutely nothing toward the overall stated goal of decentralization and giving small node runners the ability to compete with large node runners, so long as other mechanisms remain that allow a chain node backed by deep pockets to gain rewards proportional to POKT staked, whether via Persistence, a v1 version of LeanPocket, or something else. If the goal really is to decentralize and prop up small node runners, all such mechanisms will need to be shut down, e.g., by enforcing at the protocol level a 1:1 (or 1:n) correspondence between a POKT node and a chain node.

  4. Re setting minStake=1000 (as an example) in conjunction with making nodes single-chain/single-region, as a method to keep approximate continuity with v0 in reward per POKT staked: this is entirely doable and a decent idea to be explored further. There is nothing sacrosanct about minStake=15k.

  5. Re chain/GeoZone incentives, another dimension that should probably be added to the discussion is the cost to service. Some chains or regions are more expensive to service than others. Or is this automatically accounted for by the "maximize entropy" methodology? (Great methodology idea, by the way!)

  6. I am not sure yet how I feel about the over-arching idea that decentralization and/or equality between small and large node runners should be enforced, much less be a top-level consideration. At what cost would these goals be achieved? What good is a perfectly decentralized, small-node-runner-dominated network if the whole project dies because it is not cost-effective compared to the competition? The efficiency and economies of scale that large node runners bring to the table may be exactly what is needed to make this project successful. Perhaps spreading service among a dozen top providers, a couple dozen niche players, and the rise of the class of providers mentioned by Vitaly** is all the decentralization that is needed.

  7. I concur with the following quote. For the moment, this would be done off-chain (via instructions given to PNF on parameter settings), but it would be nice to have something like a global RTTM added to the v1 protocol so that this single parameter could be actively managed, with all the per-chain/per-geozone reward settings becoming percentage-based slices of this global reward pie (see the sketch below). We could (1) leave the v1 spec as-is and add this on-chain functionality down the road via a PIP, or (2) make a PR to add it to the initial release of V1… would love to get input, especially from the v1 dev team.

** dApps that run a chain anyway for their own purposes and couple it to a POKT node at negligible extra cost for a side income stream
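
Here is a minimal sketch of what the global-reward-pie idea from point 7 could look like (all names and values are made up for illustration; nothing here is an actual v1 parameter):

```python
# Hypothetical sketch: one actively-managed global reward level, with each
# [chain, geozone] pair assigned a percentage slice of that global pie.

GLOBAL_REWARD_POOL = 1_000_000  # POKT to distribute this period (the one knob)

# Governance-set slices (must sum to 1.0). Tweaking a slice does not require
# touching the global pool, and vice versa.
reward_share = {
    ("eth-mainnet", "us-east"): 0.25,
    ("eth-mainnet", "eu-west"): 0.15,
    ("polygon-archival", "us-east"): 0.35,  # boosted: expensive to service
    ("polygon-archival", "ap-south"): 0.25,
}
assert abs(sum(reward_share.values()) - 1.0) < 1e-9

def pair_rewards(pool: float) -> dict:
    """Split the global pool across [chain, geozone] pairs by their share."""
    return {pair: pool * share for pair, share in reward_share.items()}

print(pair_rewards(GLOBAL_REWARD_POOL))
```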

1 Like

Re stake-weighting: I also do not fully understand the V1 minting mechanism, and I don't know why stake weighting was already there… Maybe someone from PNI can give us some insight later…

I think that matching Pocket nodes 1:1 or 1:n to blockchain nodes (real, physically independent nodes) is not technically possible in a permissionless network such as Pocket. (I would love to be proved wrong, since this is what we need.)

The entropy cannot fix this, as it is the effect of off-chain variables (such as the real-world price of hardware and data). To fix it we would need to add a human (or the DAO) in the loop to identify and set incentives for these special cases.
The entropy will only de-couple the incentive for an equal servicer distribution from any other kind of incentive, such as by-region or by-chain (e.g., archival nodes) specifics.

This is focused on small node runners such as the "dApps" mentioned by Vitaly. Right now these "dApp" node runners earn ~10% of the POKT-per-POKT-invested return of full Pocket node runners. They have zero hardware costs, as they run their node for other reasons (zero sell pressure). We need to make things easy and appealing for them. In return we will get better decentralization figures at (almost) no added cost.

This will not be affected that much; scale will always be an advantage. I don't see these changes as a deadly wound to large providers.

My hope is that we can change the V1 protocol. I think we are still in time to shape it. It won't feel clean to be thinking about patches before V1 is even launched… But yes, we need to wait for the team on this; I think they will drop by here after Eth-Denver.

1 Like

This would have been straightforward to do in v0, as it could have been enforced by the Portal; it will have to be rethought for the v1 architecture. For v0, enforcement would be at the new-stake/change-stake level, where each chain association is done not with a chain number (e.g., "42") but with a unique chain ID (e.g., "42xxxxxxx"). A database keeps track of all chain IDs in use. If a new/change stake command is received with a chain ID that is already in the database, the new/change stake fails. When a POKT node unstakes, its associated chain IDs are removed from the database. The question becomes: how would this concept carry forward into v1?
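
A minimal sketch of that stake-time check (the chain-ID format and the function names are invented for illustration; in v0 this would live in the Portal / stake-handling path):

```python
# Hypothetical enforcement of a 1:1 POKT-node <-> chain-node mapping at
# stake / stake-change time. Chain IDs like "42xxxxxxx" identify a specific
# chain node rather than just a chain number like "42".

node_chains: dict[str, list[str]] = {}   # POKT address -> chain IDs it claims
staked_chain_ids: set[str] = set()       # the "database" of chain IDs in use

def handle_stake(node_address: str, chain_ids: list[str]) -> bool:
    """Reject the (new/change) stake if any claimed chain ID is already taken."""
    if any(cid in staked_chain_ids for cid in chain_ids):
        return False                      # stake/change-stake command fails
    staked_chain_ids.update(chain_ids)
    node_chains[node_address] = chain_ids
    return True

def handle_unstake(node_address: str) -> None:
    """When the POKT node unstakes, free its associated chain IDs."""
    staked_chain_ids.difference_update(node_chains.pop(node_address, []))

print(handle_stake("node-A", ["42xxxxxxa"]))  # True  -> ID now registered
print(handle_stake("node-B", ["42xxxxxxa"]))  # False -> ID already claimed
handle_unstake("node-A")
print(handle_stake("node-B", ["42xxxxxxa"]))  # True  -> ID was freed
```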

I'm not sure. Is it not possible that an entropy metric could catch and incentivize balancing for different costs? Imagine chain A and chain B have the same volume of traffic, but chain B is twice as expensive to operate. Uniform rewards would lead providers to naturally prefer servicing chain A over B, leading to non-maximized entropy. Without any knowledge of the root cause of the mismatch, but only responding to there being a mismatch, rewards for chain B would be boosted to achieve maximum entropy. Am I missing something?

What does "this" refer to in that sentence? The entire research thread? Or are you narrowly responding to something I said in the text you quote?

I concur with respect to coding a single global reward level with per-chain/zone values that denote a percentage of the global value. I am not sure I concur with respect to making a code-level decision to eliminate stake-weighted rewards without first getting DAO governance input. Let's see if we can get dev team input after Eth-Denver.

1 Like

There is no way to know whether the node runner is actually spinning up a new chain node or only redirecting traffic to a shared node in the back-end. You will see lots of chain codes (42xxxxxxx, 42xxxxxxy, 42xxxxxxz, … 42xxxxxxn), but all of them will point to the same infrastructure. I don't see how this solves the problem.

Yes, this is possible, but you need the external information you mentioned: you need to know which one is "twice as expensive to operate" (or any other particularity). Surely this can be added before the entropy maximization to optimize the weighted system.

When I referred to "de-coupling", I meant that there are two things that must be balanced:

  1. The cost of running a given [chain, location] pair
  2. Service availability and decentralization, which must be equal for all chains/regions

The first problem is measurable in terms of the hardware needed and results in a fixed value. The DAO would be able to discuss this and set the correct incentive.

The second one is more relative and temporary, as setting an incentive changes the network distribution and then the incentive must be adjusted. The entropy avoids this last discussion.
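
To make the de-coupling concrete, here is a rough sketch of the entropy side only; the cost side would be layered on separately, as described above. The node counts and the specific multiplier rule are mine, not part of the proposal:

```python
import math

# Hypothetical node counts per [chain, geozone] pair.
nodes = {
    ("eth", "us"): 8000,
    ("eth", "eu"): 3000,
    ("polygon-archival", "us"): 500,
    ("polygon-archival", "ap"): 100,
}

def distribution(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def entropy(p):
    return -sum(x * math.log(x) for x in p.values() if x > 0)

p = distribution(nodes)
max_entropy = math.log(len(nodes))   # reached when all pairs are equally served
print(f"entropy: {entropy(p):.3f} / max {max_entropy:.3f}")

# One of many possible ways to turn the imbalance into incentives (would need
# damping in practice): boost under-served pairs relative to uniform.
uniform = 1 / len(nodes)
reward_multiplier = {k: uniform / p_k for k, p_k in p.items()}
print(reward_multiplier)
# As runners chase the boosted pairs, the distribution flattens, entropy rises,
# and the multipliers drift back toward 1 without per-case DAO discussion.
```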

I was referring to the research as a whole, plus my personal opinion on setting staking to single-chain and single-GeoZone.
This would level the return of POKT per POKT invested (before costs and scale) between "dApp" runners and "Pocket-only" node runners (who currently stake 15 chains and ~5 regions).

1 Like

You are right. Back to the drawing board.

1 Like

Entering this thread with humility. Not my comfort zone, so please bear with me if I overstep.

How do you empirically arrive at what's over-provisioned, balanced, and under-provisioned?

I think I have asked you this question on a separate thread.

I know that a Pocket equivalent doesn’t exist yet. Is there anything else in this space or maybe elsewhere that can provide some benchmarks?

I have been thinking about the method to arrive at $POKT's demand-emissions equilibrium, with respect to our working group and my upcoming research blog, and when trying to determine what demand is I get stuck at the following:

a) What is the optimum number of nodes needed to meet the quality and decentralisation standards?
b) What are those standards?
c) How did we arrive at those standards?

I feel finding the number of validators needed is a relatively easier question because there is some benchmarking available for general security standards.

Also, another thought-

Has there been any conversation about the DAO dynamically covering (through funding or subsidies) the gap between the predefined minimum standards of decentralisation for the network and the actual level in the network, whenever the actual falls below the minimum standard?

The DAO’s share of emissions could increase under those special conditions.

I have no idea how technically possible or impossible this is to execute but from a network design and efficiency (and therefore quality) perspective, this might make some sense.

Because then we don't have to put decentralisation/small node runners at centre stage and inadvertently push to the backstage other metrics (or priorities) that may be equally or even more important to the end user.

Decentralisation is a metric that we want to support. It has costs and tradeoffs. First we need to know what decentralisation means to us, the market, and end users, which is what I think we are trying to achieve through this research post.

Lastly and related- this place is inundated with node runners (love you all). Therefore it’s natural to have sentiments and opinions sway in one direction.

I would humbly urge us to keep the market realities (or the space), the protocol, and the end user in mind in such discussions, because there could be natural conflicts and tradeoffs.

Sorry for any ignorant comments.

Thanks for reading.

3 Likes

Thanks for joining Caesar,

Here I simply use the ratio of relays to staked nodes: Relays / Nodes.
This is under/over/balanced provisioning in terms of traffic only. It is probably the simplest metric that we would want to balance in the network: we want each chain and region to be equally served.
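
For example (hypothetical numbers; the ±20% band is arbitrary, just for classification):

```python
# Provisioning in terms of traffic only: relays served vs. nodes staked,
# per chain (the same works per GeoZone). All numbers are hypothetical.

relays = {"eth": 400_000_000, "polygon": 300_000_000, "harmony": 10_000_000}
nodes  = {"eth": 9_000,       "polygon": 4_000,       "harmony": 2_000}

network_ratio = sum(relays.values()) / sum(nodes.values())

for chain in relays:
    ratio = relays[chain] / nodes[chain]
    if ratio > 1.2 * network_ratio:       # arbitrary +/-20% band
        status = "under-provisioned"      # too few nodes for the traffic
    elif ratio < 0.8 * network_ratio:
        status = "over-provisioned"       # too many nodes for the traffic
    else:
        status = "balanced"
    print(f"{chain}: {ratio:,.0f} relays/node -> {status}")
```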

These are open questions, I think. As I always say, I don't think we can control real decentralization, as scale will always favor large node-running entities; the only thing we can do is remove any extra advantage from the protocol. This is what we are trying to achieve.
Regarding the standards, it's an interesting point. We don't actually know; we have some ideas, like response times below a given threshold, success rates above another, etc. App users are the ones who could tell us better. Anyway, whatever those standards are, if they can be measured (like QoS), they could be baked into a metric that reflects how well the standards are met among all chains and normalized afterwards. This would work like compensating for the cost of a blockchain/location.

I think not, but also the DAO should not be interfering in node running, IMO; it should only create the conditions that allow fair competition and promote decentralization. Also, the amount of decentralization that the DAO could provide to the network would be really low, as it would only count as a single independent node runner.

I think that these two concepts do not collide. Here we are proposing incentives (of sorts) for small node runners to favor decentralization. This also means more nodes on chains that have less traffic (this is actually a problem raised by an app runner, @0xMo0nR3kt3r, for Polygon archival).
The other metric that we observe for app service quality is QoS. This won't change regardless of the incentives for staking nodes on a certain chain or region. So, more nodes means higher decentralization (hopefully) but the same QoS enforcement.
Regarding other metrics that we are not currently observing, I cannot say. If those metrics are measurable on-chain, we can probably optimize for them, but first we need to know them.

I think that this proposal is not very friendly to large node runners, who are the most vocal ones in the forums. I’m kinda surprised that this thread has remained calm so far hahaha

I don't know exactly why you say this; do you think this proposal is counter-market for some reason?
Maybe it is counter-centralization, which one could argue brings higher network costs, but it is also pro "dApp" node runners, which will bring nodes into the system at almost no added cost. I would not say this will impact the market in any meaningful way.

1 Like

Sorry I missed the following, thanks

I actually didn't mean the DAO running nodes. I meant the DAO subsidising a certain set of small node runners (picked based on some criteria) to keep afloat those who would otherwise have to shut shop, so that the minimum level of decentralisation (which is yet to be defined) is maintained in the network.

That would allow consolidation (in favour of large node runners), which could drive efficiencies, while always maintaining the minimum, pre-determined level of decentralisation in the network.

Please ignore if this abstract idea doesn’t make any sense.

Not at all, I am not in an attack mode haha!!

You somewhat answered it yourself here as far as considering the likes/dislikes of end users is concerned-

Not everything is positive-sum. For example, high emissions could work for node runners but are considered a "cost to the protocol". App burn could work for the protocol but could also mean dilution of profit for the gateway.

I am just giving examples to make a point, those may be out of scope on this thread.

I am also sensitive to the "over-fixation" on decentralisation by decentralisation maxis in this space. Therefore I like to use the word "optimum" or even "minimum" and then hopefully work towards it if that is possible.

1 Like

If it cannot be enforced, it can be disincentivized. I'm not sure yet how to do this in v1 until I understand the architecture better, but in v0, allowing unlimited (or reasonably high) stake-weighted servicer selection (e.g., PIP-23) would have eliminated any need or incentive to develop LeanPocket or other n:m architectures, since chain-node utilization becomes the limiting factor (hence no reason or benefit to front-end a chain node with more than one POKT node). It would have made for a much cleaner architecture and much greater transparency as to the actual state of network provisioning. That ship has sailed for v0, but this ought to be looked into for v1.

1 Like

As you say, with a high enough stake weighting on node selection you could achieve similar behavior to today without the need for so many Pocket nodes.
However, without a mechanism that enables stake delegation for servicers, users who want to join a node-runner service must delegate their coins to the node runners (currently you can use non-custodial staking).
Also, even in the ideal case where each node runner has only one node, the need to populate sessions will push node runners to split their nodes to occupy more slots in a session. I mean (some random numbers): if a session needs at least 10 nodes but only 6 node runners exist, then any node runner will split their stake to claim one of the 4 vacant positions. Hence we return to the original problem: no 1:1 Pocket node to blockchain node.
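
A tiny numeric sketch of the splitting incentive (random numbers as above, plus my own simplifying assumption that every staked node gets a session slot while supply ≤ slots):

```python
# Session with 10 slots and 6 node runners. While the total number of staked
# nodes is <= the slot count, every node gets picked, so occupying more slots
# directly means serving a larger share of the session's relays.
# (Simplifying assumption of mine, not part of the v0/v1 spec.)

SESSION_SLOTS = 10
runners = {f"runner-{i}": 1 for i in range(6)}   # one POKT node each

def work_share(runners: dict) -> dict:
    total_nodes = sum(runners.values())
    assert total_nodes <= SESSION_SLOTS          # all nodes fit in the session
    return {r: n / total_nodes for r, n in runners.items()}

print("before split:", work_share(runners)["runner-0"])   # 1/6 ~= 0.17

# runner-0 splits the same stake across 3 smaller nodes to grab vacant slots.
runners["runner-0"] = 3
print("after split :", work_share(runners)["runner-0"])   # 3/8 ~= 0.38
```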

I think that if the V1 protocol can withstand the number of nodes, then they would work as a fractional stake weight on selection probability (as they do now).

1 Like

Main post updated. Corrected grammar and writing mistakes. No new content.

Thanks @zaatar !!

1 Like

Now that EthDenver has passed and the writing of the initial post has been improved, it would be great to hear what the V1 devs think of this. @Olshansky @deblasis ?

1 Like

Seconding the desire to get some v1 dev thoughts. I’d hate to see this thread fade into the background. This is an important topic.

1 Like

@RawthiL I apologize for the silence.

I’ve read and thought about your post, but haven’t had a chance to really sit and think deeply about it, modelling out different solutions and alternatives.

In fact, I wanted to point out that we did bring this up and created an issue for it (https://github.com/pokt-network/pocket/issues/557) while discussing it with @bryanchriswhite in one of our PRs.

In order to prioritize other work, I don't think I will have a good answer to this in Q2 and will revisit it in Q3. However, it's worth noting that we're building V1 in a way that makes this change "trivial" from a code perspective. I'm also not married to either approach but simply want to model out the economics in both cases (similar to what you have above). I know this is work you're already thinking about or doing, and I would really appreciate it if you could incorporate both approaches.

tl;dr It’s on my backlog, will revisit in Q3, would appreciate if you consider both approaches in tokenomic modelling you’re already doing.

2 Likes

Hi @Olshansky thanks for commenting.

This PR came after this post. It addresses part of the stated problem, but from the angle of the number of IPs returned per node. I don't know how much decentralization it can enforce; I only see it as a check to restrict multi-region, and again, that does not mean decentralization.

I know it is trivial, from a code perspective, to change from multi-region / multi-chain to single-region / single-chain or any other combination. In fact, you changed from multi-region / multi-chain to single-region / multi-chain without any in-depth study.
If an answer to this will only come in Q3, we should at least keep the status quo and stick with multi-region / multi-chain. Under no circumstance is the current single-region / multi-chain setup logical (as we explain above).

We plan on doing so.


Since PRs seem to be the way of influencing V1 development, should I create a PR to change the single-region / multi-chain strategy?
As we are not going to get an answer until Q3, and we still have no justification for the single-region / multi-chain scenario, the most logical course of action would be to set the docs back to multi-region / multi-chain.

1 Like

Sounds good.

Regarding the PR, let's just leave it until Q3. This is an important decision but a very small technical change in the scope of V1 development, and it is not impacting implementation that much.

Let's change everything together at once when resolving this becomes a priority.

2 Likes

What is stopping independent node runners from using other GeoZones? Only proposals that restrict it. There are incentives for implementing features, and only a proposal can ruin those incentives. If it is possible for everyone to implement GeoZones, why take that away?

Do you think it is possible that restricting GeoZones could destabilize mainnet in certain regions? Does restricting GeoZones help the node runner community? Who exactly does it help to restrict the GeoZones that are available? I'm not understanding this, because anyone can implement GeoZones.

1 Like