Weight session selection by staked amount

I do have 2 POKT nodes running, plus Harmony, Polygon, Eth, Gnosis, Binance, and DFK. I have 4 servers (each with 32 vCPU, 128 GB RAM, and 2x 3.84 TB NVMe Gen4 datacenter SSDs in RAID 0), and I had to dump Fuse and Ionex because the hardware was insufficient (they consumed IOPS from my Gnosis and Binance blockchains, which resulted in fewer relays for Gnosis and Binance). And a Fantom blockchain that a friend lets me leech from :smiley: .

And I use Hetzner servers that are cheap (€100 each), so 4 × €100 → €400, which in the current state of POKT is almost break-even. If I had only 1 POKT node I would be losing money. There are people I know in the USA who pay 1,000 dollars per machine.

Of course we're in a crypto bear market, so the price of POKT is affected by that too. But the Pocket team must do the necessary things to start bringing new applications to the network.

Again, running 2-3 nodes roughly breaks even, so no profit (ROI is out of the discussion xD).

Just a heads up :slight_smile:


I think the idea of making/losing money needs to be reframed. There's really no revenue currently, so you're paying for the servers regardless, via dilution of your stake. POKT going from $1 to 20 cents is partially node runners paying for the infrastructure. There is no breaking even, no profit, no ROI; as a node runner you're paying to run the network, 100%, by diluting your own stake with your own rewards. Any money you "make" via POKT earnings is illusory until the app side starts ramping up.

If I can earn POKT at market rate, gain experience, do something I love to do anyway and help the network in the process, sounds like a win to me.

Thank you for the info by the way, very helpful.


I think you’ve made this argument in another thread too, and the short answer IMO is no.

Each Pocket node has certain hardware requirements (exact numbers vary; the latest Node Pilot recommendation is 20 GB of RAM). Stacking nodes on the same underlying hardware still requires that each node is allocated approximately 20 GB of RAM. In other words, to run 10 nodes stacked on the same hardware, you need approximately 200 GB of RAM. Yes, it's possible to overprovision, but not by a meaningful amount.

By allowing multiple stakes to point to the same node, using the example above, you could serve the equivalent of 10 nodes with 20 GB of RAM, if I've understood the concept correctly.

Hope that clarifies it a little


This is correct: 150k POKT staked in a single node.


I am suggesting that these stakes should point at the same node… to clarify:

  1. Let each node perform up to 10x as many relays. This will move the hardware bottleneck to the network I/O (where it should be), drastically reduce the cost of CPU/RAM/disk, and eliminate the absurdity of stacking multiple instances of the same blockchain on a single machine.

  2. Allow node runners to freely experiment to balance hardware costs against performance at different stakes-per-node while maintaining the 3-week unstake time.

I’ve been thinking about this since Luis piped up on the node runners call yesterday about weighted sessions being complicated and causing state bloat. But aren’t we overcomplicating it here?

Why weight session chance at all and not just scale rewards? Instead of adjusting the session chance, keep session selection completely random and just scale the rewards: floor your stake to 15k bins and weight the rewards based on the floored value.

reward = NUM_RELAYS * RelaysToTokensMultiplier * (FLOOR/MINSTAKE)

Where MINSTAKE is 15k and FLOOR is FLOOR(stake, MINSTAKE). This would remove all the complexity and avoid overloading highly staked nodes with excess sessions (which would degrade the end-user experience), and it is literally a few lines of code changed in RewardForRelays() as far as I can see (pocket-core/reward.go at 8ad860a86be1cb9b891c1b6f244f3511d56a9949 · pokt-network/pocket-core · GitHub)

Something like:

// illustrative sketch only; exact keeper and BigInt method names in pocket-core may differ
validator := k.GetValidator(ctx, address) // look up the servicer's validator record
stake := validator.GetTokens() // its current staked amount
flooredStake := stake.Sub(stake.Mod(minStake)) // floor the stake to the nearest 15k bin
weight := flooredStake.Div(minStake) // integer multiplier: flooredStake / minStake
coins := k.RelaysToTokensMultiplier(ctx).Mul(relays).Mul(weight) // scaled relay reward
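For example (numbers purely illustrative): with minStake = 15,000 POKT, a node staked at 47,000 POKT has flooredStake = 45,000 and weight = 3, so it earns 3x the per-relay reward of a minimally staked node; the 2,000 POKT above the bin boundary earns nothing extra until the stake is topped up to 60,000.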


This seems like an elegant solution. I suppose you could argue it’s unfair that some will be paid more than others to perform the same relay work, but I think that’s a compromise well worth making for the advantages stake weighting provides over other proposals.


I like that @Andy-Liquify’s solution would allow for beefy nodes to still be servicers and validators.

Question Andy… would there be a way to cap the RelayToTokensMultiplier weighting? Say the variable only counts stake up to 75k (5x the standard 15k for a node), so that prevents going overboard on node power.

Also… there would be no way to really see the average reward of nodes since all explorers are just reading the chain rewards :sweat_smile:


Yep, just change the weight too:

weight := MIN(flooredStake.Div(minStake), SOME_MAX_MULTIPLIER_VAR)

In this case SOME_MAX_MULTIPLIER_VAR would be set to 5.
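A standalone illustration of that cap (a sketch only; the on-chain version would use pocket-core's big-integer types and a governance parameter, and these names are hypothetical):

package main

import "fmt"

// stakeWeight floors the stake to minStake-sized bins and clamps the resulting
// multiplier at a governance-set maximum (e.g. 5, corresponding to a 75k stake).
func stakeWeight(stake, minStake, maxMultiplier int64) int64 {
	weight := stake / minStake // integer division == flooring to 15k bins
	if weight > maxMultiplier {
		weight = maxMultiplier
	}
	return weight
}

func main() {
	for _, stake := range []int64{15_000, 47_000, 75_000, 150_000} {
		fmt.Printf("stake %7d -> weight %dx\n", stake, stakeWeight(stake, 15_000, 5))
	}
}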


Yeah, that’s a fair point, but probably some clever people indexing the data can think of a fix for that! :wink:

But you could also argue that the average number of relays is what matters more at this point, rather than POKT earned.


I’ve thought of scaling rewards before. I’m no mathematician, but does this preserve the same reward distribution we had before, given that sessions are pseudorandom?

I wrote this outline back in 5/01 -5/8 and it’s interesting to see how we circled back on all points

This will require the same release cycle that the lite client also goes through btw (maybe even more rigorous, because it’s consensus-changing).


Yes if it is pseudorandom it will average out to the same distribution over time :slight_smile: .

Of course everything being released into production needs thorough testing! But testing 5 lines of code (inside a single unit) should be orders of magnitude less effort than a brand-new client.
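Since sessions remain uniformly (pseudo)random across eligible nodes, scaling each reward by the floored stake gives every node the same expected share of rewards it would get under stake-weighted session selection. A toy Monte Carlo sketch of that equivalence (purely illustrative, not pocket-core code; the stakes and counts are made up):

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	stakes := []int{15_000, 30_000, 45_000, 150_000} // hypothetical node stakes
	const minStake = 15_000
	const sessions = 1_000_000

	selectionWeighted := make([]float64, len(stakes)) // (a) selection chance proportional to stake, flat reward
	rewardScaled := make([]float64, len(stakes))      // (b) uniform selection, reward scaled by floor(stake/minStake)

	totalStake := 0
	for _, s := range stakes {
		totalStake += s
	}

	for i := 0; i < sessions; i++ {
		// (a) pick a node with probability proportional to its stake
		r := rand.Intn(totalStake)
		for j, s := range stakes {
			if r < s {
				selectionWeighted[j]++
				break
			}
			r -= s
		}
		// (b) pick a node uniformly, scale its reward by the floored stake multiple
		j := rand.Intn(len(stakes))
		rewardScaled[j] += float64(stakes[j] / minStake)
	}

	// normalize each scheme to reward shares so the distributions are comparable
	share := func(v []float64) []float64 {
		total := 0.0
		for _, x := range v {
			total += x
		}
		out := make([]float64, len(v))
		for i, x := range v {
			out[i] = x / total
		}
		return out
	}

	a, b := share(selectionWeighted), share(rewardScaled)
	for j, s := range stakes {
		fmt.Printf("stake %7d  share under (a) %.3f  share under (b) %.3f\n", s, a[j], b[j])
	}
}

Both columns converge on the stake-proportional split (here 0.0625 / 0.125 / 0.1875 / 0.625), which is the sense in which the two approaches average out to the same distribution over time.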

I believe this solution does create an incentive for stake consolidation of nodes, which in turn reduces infrastructure costs. However, there are a few underlying considerations that I want to highlight as we keep seeking creative solutions such as this one:

  1. The implementation of a ceiling should never carry state bloat (keeping track of average stake, keeping track of stake age, keeping track of more data on the state), which this solution doesn’t, so I believe this is on the right track.

  2. One incentive this creates is that nodes will want to edit their stake every session, which means there would be (at the current node count) up to 48k transactions potentially fighting for block space to edit stake, alongside claims and proofs, creating block space bloat that we don’t want to see. A viable way to deal with that is for this change to come accompanied by a “reward percentage” field in the Node state data, which would indicate a split between how much of the relay reward goes to the Node account balance and how much goes directly into the Node stake amount, creating an “auto-compound” feature (see the sketch after this list); that should be enough to counterbalance the aforementioned incentive.

  3. I would also recommend exploring raising the transaction fees for claims and proofs, as well as stake transactions, since more nodes will now be fighting for that block space.
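To make point 2 concrete, here is a standalone sketch of how such a split could work (the “reward percentage” field and the splitReward helper are hypothetical, not existing pocket-core code): instead of paying the whole relay reward to the node’s spendable balance, the protocol routes a configured share of it straight into the stake, so compounding requires no extra transaction.

package main

import (
	"fmt"
	"math/big"
)

// splitReward divides a relay reward (in uPOKT) according to a node-configured
// percentage (0-100) that auto-compounds directly into the node's stake.
func splitReward(reward *big.Int, stakePercent int64) (toStake, toBalance *big.Int) {
	toStake = new(big.Int).Mul(reward, big.NewInt(stakePercent))
	toStake.Div(toStake, big.NewInt(100))
	toBalance = new(big.Int).Sub(reward, toStake)
	return toStake, toBalance
}

func main() {
	reward := big.NewInt(1_000_000) // purely illustrative relay reward
	toStake, toBalance := splitReward(reward, 80)
	fmt.Printf("auto-compounded into stake: %s, paid to balance: %s\n", toStake, toBalance)
}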

Whenever we explore a solution, we must always take the side effects into consideration, and even if more mechanisms can be created to combat those side effects, the increase in complexity must also be a factor. With that said, I believe this solution is worth exploring.


I think the auto-compounding feature is probably a great solution from a node-runner-experience point of view.

However, in the spirit of exploring this solution while minimizing side-effects, I think we could leave 15k thresholds in place to more closely mimic the current incentive structure:

15,000 stake => 1x multiplier
29,999 stake => 1x multiplier
30,000 stake => 2x multiplier
44,999 stake => 2x multiplier
45,000 stake => 3x multiplier
… etc


I get where you are coming from, but even if we do this there’s no systemic defense stopping node runners from just auto-compounding every session, causing the block bloat I mentioned in my previous reply, so we would be counting on goodwill and on everyone running the correct implementation. We have seen in the past non-malicious actors trying to over-optimize and causing strain to the network in the process, so I just want to make sure we make the correct trade-offs.


What currently stops node runners from doing this?

I don’t think this would create any new incentives for that behavior, would it?

Maybe I don’t understand what you mean by “auto-compound every session”

Currently node runners have no incentive to “Edit Stake” because right now they have an equal chance of earning the same reward by staking as close to the minimum as possible; if we implement @Andy-Liquify’s solution, people will have an incentive to “Edit Stake” as soon as they receive their relay reward, causing transaction bloat in the blocks.

I’m suggesting we keep the 15k thresholds to alleviate this concern.

Right now, the network has a 15k threshold before compounding makes sense. We could leave that in place by forcing multipliers to be integers.

What I’m trying to say is that even if we put the 15k thresholds in place, nothing protects the blockchain from receiving “Edit Stake” transactions as nodes fill up each 15k threshold, which means that’s not a sufficient measure to protect the blockchain from bloating with transactions. If we modify the reward lifecycle as I mentioned in my reply to @Andy-Liquify, making this auto-compound part of the protocol, we can avoid the risk of this incentive bloating the blockchain at a systemic level.