Weight session selection by staked amount

Problem:

  • When the price of $POKT dips below 20 cents, the unit economics become challenging for many node-runners.

  • There are many levers to fix this; one widely discussed option is to increase the minimum stake for validators, but I haven’t heard a realistic path to that end.

Proposal:

  • Allow nodes to stake between 15k and 150k POKT per node (at the node-runner’s discretion), where each additional 15k threshold earns +1 weight for session-selection.

    • a node with 45k POKT staked would be 3x more likely to be selected for a session than a node with 15k staked, but would still earn the same on a per-relay basis (a toy sketch of this weighting appears below).
  • Allow node-runners to transfer their stake from 1 node to another without a 3-week penalty.

This allows node-runners to scale their servers on their own schedule, and lets the free market decide the optimal POKT-per-node (based on performance & cost).

It doesn’t change the barriers to entry, but lets big players introduce optimizations that could get us closer to the efficiency of centralized RPC services.
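For illustration only, here is a toy sketch of how the extra weight might feed into session selection. The helper names (stakeWeight, pickServicer) and the uniform random draw are my own assumptions for the example, not pocket-core’s actual selection code:

package main

import (
    "fmt"
    "math/rand"
)

const minStake = 15_000 // proposed per-node minimum stake, in POKT

// stakeWeight gives +1 selection weight per full 15k staked, e.g. 45k -> 3.
func stakeWeight(stake uint64) uint64 {
    return stake / minStake
}

// pickServicer draws one node index with probability proportional to its weight.
func pickServicer(stakes []uint64, r *rand.Rand) int {
    total := uint64(0)
    for _, s := range stakes {
        total += stakeWeight(s)
    }
    draw := r.Uint64() % total
    for i, s := range stakes {
        w := stakeWeight(s)
        if draw < w {
            return i
        }
        draw -= w
    }
    return len(stakes) - 1
}

func main() {
    stakes := []uint64{15_000, 45_000, 150_000}
    r := rand.New(rand.NewSource(1))
    counts := make([]int, len(stakes))
    for i := 0; i < 100_000; i++ {
        counts[pickServicer(stakes, r)]++
    }
    fmt.Println(counts) // roughly 1 : 3 : 10, matching the stake weights
}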

Philosophy:

  • Trust the incentives. As long as we incentivize performant relays, we should give the competing node-runners as much freedom as possible and trust them to self-optimize.

If I could suggest an alternative method: allow multiple stakes to point to the same node. This would automatically achieve the same goal without requiring a per-stake weighting mechanism, nor would it require stake transfers.

But it would be worth asking whether this is already more or less taking place by stacking nodes on the same piece of hardware via Docker/Kubernetes. It would also be worth challenging the assumption that at 20 cents the economics become challenging for node runners: at current earnings of ~40 POKT per day, that’s ~$240/node/month. An OVH bare-metal server with a 48-thread Intel CPU, 96 GB RAM and 8x 2TB NVMe rents for ~$500/month (edit per the conversation below; it used to be 3 NVMe, $350), so if you can conservatively stack 4 nodes on there, the profit margin is considerable. And you can probably get more than 4 given the amount of time nodes spend being idle.
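Spelling out that arithmetic with the post’s own figures (all of which move with the market, so treat it as a rough sketch):

40 POKT/day * 30 days * $0.20 ≈ $240 per node per month
4 nodes * $240 ≈ $960/month in rewards, vs. ~$500/month for the server ≈ ~$460/month margin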

I think this idea of unsustainable infrastructure costs comes from pools and their underlying node providers that charge a lot per node. A LOT. I’m not saying it’s not justified; I’m sure there’s overhead. But we should be cautious about making network changes that accommodate inefficiency. If running at very large scale creates management overhead such that it becomes unprofitable, then maybe smaller node-running enterprises are the correct answer. Decentralization is, after all, the primary goal of the network.

Our job is to provide our customers with high-quality, reliable, innovative service at the best price possible, and to do that we must continuously seek to make our network better and more efficient. Our current rewards scheme does not promote efficiency; it promotes waste, overspending on infrastructure, and complacency. If anything, I personally think rewards need to be further reduced from our current target of 50% to 20%, at a minimum. I’d support 10%.

That’s not correct.

Pocket nodes don’t produce the rewards on their own; the blockchain nodes produce them (Harmony, Polygon, Gnosis, Binance, ETH, Fantom).

And you will need 3 machines like the one you proposed to host them: $350/month * 3 → $1,050/month.
If you are unlucky and have just 1 POKT node/validator, you are …

NodePilot seems to indicate it can successfully run 5 chain nodes and 5 validators on a single NVMe drive, and up to 12 with multiple drives, via Docker. Obviously it will depend on available system resources, based on the per-node requirements in the table.

Hardware Recommendations - Node Pilot (decentralizedauthority.com)

This is also wrong :D.
You can’t run 5 chain nodes on one NVMe drive.
If you try to put the Harmony, Polygon, ETH, Binance, and Gnosis chains on 1 NVMe drive,
all 5 chains will fail to sync or stay synced :wink:

Blockchains are IOPS-intensive.
You probably need 2 NVMe Gen4 drives in RAID 0 for Harmony or Polygon to produce good relays (and avoid errors like provider request timeouts, being out of sync, etc.).

You can visit the NP Discord channel and ask :wink:

OK, so you add more NVMe drives; you can get 8x 1.9TB NVMe drives for an additional $150. The point remains that rewards are sufficient to cover infrastructure costs, and then some. I updated the original post.

I will say, there is a huge void of information as far as hardware requirements, best practices, expected throughput rates, earning potential, unanswered questions, and so on. I plan on getting some nodes up and running shortly, and I will fully document the process and the subsequent learnings and metrics from running those nodes for the benefit of the entire community.

I do have 2 POKT nodes running, plus Harmony, Polygon, ETH, Gnosis, Binance, and DFK. I have 4 servers (each with 32 vCPU, 128 GB RAM, and 2x 3.84 TB NVMe Gen4 datacenter SSDs in RAID 0), and I had to dump Fuse and ionex because the hardware was insufficient (they consumed IOPS from my Gnosis and Binance blockchains, which resulted in fewer relays for Gnosis and Binance). Plus a Fantom blockchain that a friend lets me leech from :smiley: .

And I use Hetzner servers that are cheap (€100 each, * 4 → €400), which in the current state of POKT is almost break-even. If I had only 1 POKT node I would be losing money. There are people I know in the USA who pay 1,000 dollars per machine.

Of course we have a bear market for crypto, so the price of POKT is also affected by that. But the Pocket team must do the necessary things to start bringing new applications to the network.

Again, running 2-3 nodes roughly breaks even, so no profit (ROI is out of the discussion xD).

Just a heads up :slight_smile:


I think the idea of making/losing money needs to be reframed. There’s really no revenue currently, so you’re paying for the servers regardless via dilution of your stake. POKT going from $1 to 20 cents is partially node runners paying for the infrastructure. There is no breaking even, no profit, no ROI; as a node runner you’re paying to run the network, 100%, by diluting your own stake with your own rewards. Any money you “make” via POKT earnings is illusory until the app side starts ramping up.

If I can earn POKT at market rate, gain experience, do something I love to do anyway and help the network in the process, sounds like a win to me.

Thank you for the info by the way, very helpful.


I think you’ve made this argument in another thread too, and the short answer IMO is no.

Each Pocket node has certain hardware requirements. (Exact numbers vary; the latest Node Pilot recommendation is 20 GB of RAM.) Stacking nodes on the same underlying hardware still requires that each node is allocated approx 20 GB of RAM. In other words, to run 10 nodes stacked on the same hardware, you need approx 200 GB of RAM. Yes, it’s possible to over-provision, but not by a meaningful amount.

By allowing multiple stakes to point to the same node, using the example above, you could run the equivalent of 10 nodes with 20 GB of RAM, if I’ve understood the concept correctly.

Hope that clarifies it a little


This is correct: 150k POKT staked in a single node.


I am suggesting that these stakes should point at the same node… to clarify:

  1. Let each node perform up to 10x as many relays. This will move the hardware bottleneck to the network I/O (where it should be), drastically reduce the cost of CPU/RAM/disk, and eliminate the absurdity of stacking multiple instances of the same blockchain on a single machine.

  2. Allow node runners to freely experiment to balance hardware costs against performance at different stakes-per-node while maintaining the 3-week unstake time.

I’ve been thinking about this since Luis piped up on the node runners call yesterday about weighting sessions being complicated and causing state bloat. But aren’t we overcomplicating it here?

Why weight session chance and not just scale rewards? Instead of adjusting the session chance, keep session selection completely random and just scale the rewards: floor your stake to 15k bins and weight the rewards based on the floored value.

reward = NUM_RELAYS * RelaysToTokensMultiplier * (FLOOR/MINSTAKE)

Where MINSTAKE is 15k and FLOOR is the stake floored to the nearest multiple of MINSTAKE, i.e. FLOOR(stake, MINSTAKE). This would remove all the complexity, and it would not overload highly staked nodes with excess sessions (which would degrade the end-user experience). It is literally a few lines of code changed in RewardForRelays() as far as I can see (pocket-core/reward.go at 8ad860a86be1cb9b891c1b6f244f3511d56a9949 · pokt-network/pocket-core · GitHub)

Something like:

validator := k.GetValidator(ctx, address)
stake := validator.GetTokens()
// floor the stake to the nearest 15k bin, then derive the weight
flooredStake := stake.Sub(stake.Mod(minStake))
weight := flooredStake.Div(minStake)
// scale the usual per-relay reward by the stake weight
coins := k.RelaysToTokensMultiplier(ctx).Mul(relays).Mul(weight)
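For anyone who wants to play with the arithmetic outside the chain code, here is a self-contained toy version of the same idea, using plain integers in place of the keeper and big-int types (the function and parameter names here are illustrative, not pocket-core’s):

package main

import "fmt"

const minStake = 15_000 // POKT

// rewardForRelays scales the per-relay reward by the floored stake weight,
// mirroring the sketch above: weight = FLOOR(stake, minStake) / minStake.
func rewardForRelays(relays, relaysToTokensMultiplier, stake uint64) uint64 {
    flooredStake := stake - stake%minStake
    weight := flooredStake / minStake
    return relays * relaysToTokensMultiplier * weight
}

func main() {
    // A node with 47k staked is floored to 45k, i.e. weight 3,
    // so it earns 3x what a 15k node earns for the same relays.
    fmt.Println(rewardForRelays(1_000, 1, 47_000)) // 3000
    fmt.Println(rewardForRelays(1_000, 1, 15_000)) // 1000
}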


This seems like an elegant solution. I suppose you could argue it’s unfair that some will be paid more than others to perform the same relay work, but I think that’s a compromise well worth making for the advantages stake weighting provides over other proposals.


I like that @Andy-Liquify’s solution would allow for beefy nodes to still be servicers and validators.

Question Andy… would there be a way to cap the RelaysToTokensMultiplier weighting? Say the staked amount only counts up to 75k (5x the standard 15k for a node); then that prevents going overboard on node power.

Also… there would be no way to really see the average reward per node, since all the explorers are just reading the on-chain rewards :sweat_smile:


Yep, just change weight to

weight := MIN(flooredStake.Div(minStake), SOME_MAX_MULTIPLIER_VAR)

in this case SOME_MAX_MULTIPLIER_VAR would be set to 5
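In the same toy terms as above (plain integers, with maxMultiplier standing in for SOME_MAX_MULTIPLIER_VAR), the cap is just a comparison:

package main

import "fmt"

// cappedWeight floors the stake to minStake bins and caps the resulting
// multiplier, so stake above maxMultiplier*minStake earns no extra weight.
func cappedWeight(stake, minStake, maxMultiplier uint64) uint64 {
    weight := (stake - stake%minStake) / minStake
    if weight > maxMultiplier {
        return maxMultiplier
    }
    return weight
}

func main() {
    fmt.Println(cappedWeight(150_000, 15_000, 5)) // 5, not 10: the cap kicks in
    fmt.Println(cappedWeight(45_000, 15_000, 5))  // 3, below the cap
}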


Yeah, that’s a fair point, but there are probably some clever people indexing the data who can think of a fix for that! :wink:

But you could also argue that the average number of relays is what matters more at this point, rather than POKT earned.


I’ve thought of scaling rewards before. I’m no mathematician, but does this preserve the same rewards distribution we had before, given that sessions are pseudorandom?

I wrote this outline back in 5/01-5/08, and it’s interesting to see how we circled back on all the points.

This will require the same release cycle that the lite client also goes through, btw (maybe even more rigorous, because it’s consensus-changing).


Yes, if it is pseudorandom it will average out to the same distribution over time :slight_smile:.

Of course, everything being released into production needs thorough testing! But testing 5 lines of code (inside a single unit) should be orders of magnitude less effort than a brand-new client.

I believe this solution does create an incentive for stake consolidation across nodes, which in turn diminishes infrastructure costs. However, there are a few underlying considerations that I want to highlight as we keep seeking creative solutions such as this one:

  1. The implementation of a ceiling should never carry state bloat (keeping track of average stake, keeping track of stake age, keeping track of more data on the state). This solution doesn’t, so I believe it is on the right track.

  2. One incentive this creates is that nodes will want to edit their stake every session, which means there will be (at the current node count) 48k transactions potentially fighting for block space to edit stake, alongside claims and proofs, creating block space bloat that we don’t want to see. A viable option to deal with that is for this change to come accompanied by a “reward percentage” field in the Node state data, which would indicate a split between how much of the relay reward goes to the Node account balance and how much goes directly into the Node stake amount, creating an “auto-compound” feature. That should be enough to counter-balance the aforementioned incentive (a sketch of the split is below).

  3. I would also recommend exploring upping the transaction fees for claims and proofs, as well as stake transactions, since more nodes will now be fighting for that block space.

Whenever we explore a solution, we must always take into consideration the side-effects, and even if more mechanisms can be created to combat these side effects, the increase in complexity must also be a factor. With that said, I believe this solution is worth exploring.
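A toy sketch of that split, with rewardPercentage as a hypothetical per-node field (0-100) that does not exist in the current state:

package main

import "fmt"

// splitReward divides a relay reward between the node's spendable balance and
// its stake: rewardPercentage percent goes to the balance, and the remainder
// auto-compounds into the stake.
func splitReward(reward, rewardPercentage uint64) (toBalance, toStake uint64) {
    toBalance = reward * rewardPercentage / 100
    toStake = reward - toBalance
    return
}

func main() {
    toBalance, toStake := splitReward(1_000, 25)
    fmt.Println(toBalance, toStake) // 250 to the account balance, 750 compounded into stake
}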
