Weight session selection by staked amount

I like that @Andy-Liquify’s solution would allow for beefy nodes to still be servicers and validators.

Question Andy… would there be a way to cap the RelayToTokensMultiplier? Say the variable only goes up to 75k (5x the standard 15k for a node); that would prevent going overboard on node power.

Also… there would be no way to really see the average reward of nodes since all explorers are just reading the chain rewards :sweat_smile:

2 Likes

yep just change the weight to:

weight := MIN(flooredStake.Div(minstake), SOME_MAX_MULTIPLIER_VAR)

in this case SOME_MAX_MULTIPLIER_VAR would be set to 5
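For illustration, here’s a minimal runnable Go sketch of that capped weighting using plain integers rather than the SDK types; computeWeight, minStake and maxMultiplier are placeholder names for this sketch, not the actual Pocket Core identifiers:

package main

import "fmt"

// computeWeight floors the stake into minStake-sized bins and caps the
// resulting multiplier. All names here are illustrative placeholders,
// not actual Pocket Core identifiers.
func computeWeight(stake, minStake, maxMultiplier int64) int64 {
    weight := stake / minStake // integer division floors to whole bins
    if weight > maxMultiplier {
        weight = maxMultiplier
    }
    return weight
}

func main() {
    const minStake = 15_000 // POKT
    const maxMultiplier = 5 // i.e. SOME_MAX_MULTIPLIER_VAR
    fmt.Println(computeWeight(29_999, minStake, maxMultiplier))  // 1
    fmt.Println(computeWeight(75_000, minStake, maxMultiplier))  // 5
    fmt.Println(computeWeight(150_000, minStake, maxMultiplier)) // still 5, capped
}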

1 Like

Yeah, that’s a fair point, but there are probably some clever people indexing the data who can think of a fix for that! :wink:

But you could also argue that the average number of relays is what matters more at this point, rather than POKT earned.

1 Like

I’ve thought of scaling rewards before. I’m no mathematician, but does this preserve the same reward distribution we had before, given that sessions are pseudorandom?

I wrote this outline back in 5/01-5/08, and it’s interesting to see how we circled back on all the points.

This will require the same release cycle that the lite client goes through, btw (maybe even more rigorous, because it’s consensus-changing).

1 Like

Yes, if it is pseudorandom it will average out to the same distribution over time :slight_smile:

Of course, everything being released into production needs thorough testing! But testing 5 lines of code (inside a single unit) should be orders of magnitude less effort than a brand new client.

I believe this solution does create an incentive for stake consolidation of nodes, which in turn diminishes infrastructure costs. However, there are a few underlying considerations that I want to highlight as we keep seeking creative solutions such as this one:

  1. The implementation of a ceiling should never carry state bloat (keeping track of average stake, keeping track of stake age, keeping track of more data on the state), which this solution doesn’t, so I believe it is on the right track.

  2. One incentive this creates is that nodes will want to edit their stake every session, which means there could be (at the current node count) 48k transactions fighting for block space to edit stake, alongside claims and proofs, creating block space bloat that we don’t want to see. A viable way to deal with that is for this change to come accompanied by a “reward percentage” field in the Node state data, indicating a split between how much of the relay reward goes to the Node Account balance and how much goes directly into the Node stake amount. That creates an “auto-compound” feature (see the sketch after this list), which should be enough to counterbalance the aforementioned incentive.

  3. I would also recommend exploring upping the transaction fees for claims and proofs, as well as stake transactions, since more nodes will now be fighting for that block space.
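To make point 2 concrete, here is a minimal Go sketch of the kind of reward split a per-node “reward percentage” field could drive. The function name, the rewardStakePct field and the 0-100 range are assumptions for illustration, not the actual protocol field:

package main

import "fmt"

// splitRelayReward divides a relay reward between the node's spendable
// account balance and its staked amount, driven by a hypothetical
// per-node "reward percentage" field (0-100) kept in the node's state.
func splitRelayReward(reward, rewardStakePct uint64) (toBalance, toStake uint64) {
    if rewardStakePct > 100 {
        rewardStakePct = 100
    }
    toStake = reward * rewardStakePct / 100 // auto-compounded into the stake
    toBalance = reward - toStake            // paid out to the account balance
    return toBalance, toStake
}

func main() {
    // Example: a 1,000 uPOKT relay reward with 25% auto-compounding.
    bal, staked := splitRelayReward(1_000, 25)
    fmt.Println(bal, staked) // 750 250
}

A node runner who wants pure auto-compound would set the percentage to 100, while one sweeping everything to a cold wallet would leave it at 0, with no extra transactions needed in either case.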

Whenever we explore a solution, we must always take into consideration the side-effects, and even if more mechanisms can be created to combat these side effects, the increase in complexity must also be a factor. With that said, I believe this solution is worth exploring.

3 Likes

I think the auto-compounding feature is probably a great solution from a node-runner-experience point of view.

However, in the spirit of exploring this solution while minimizing side-effects, I think we could leave the 15k thresholds in place to more closely mimic the current incentive structure (a quick check of the arithmetic follows the list):

15,000 stake => 1x multiplier
29,999 stake => 1x multiplier
30,000 stake => 2x multiplier
44,999 stake => 2x multiplier
45,000 stake => 3x multiplier
… etc
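Just to show the arithmetic, a quick Go check that floored (integer) division by 15,000 reproduces the thresholds above; this is only a sketch of the binning, not protocol code:

package main

import "fmt"

func main() {
    // Sanity check: floored (integer) division by 15,000 reproduces the
    // multiplier list above.
    for _, stake := range []int64{15_000, 29_999, 30_000, 44_999, 45_000} {
        fmt.Printf("%d stake => %dx multiplier\n", stake, stake/15_000)
    }
}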

1 Like

I get where you are coming from, but even if we do this there’s no systemic defense stopping node runners from just auto-compounding every session, causing the block bloat I mentioned in my previous reply, so we would be counting on good will and on everyone running the correct implementation. We have seen non-malicious actors in the past trying to over-optimize and causing strain to the network while at it, so I just wanna make sure that we make the correct trade-offs.

1 Like

What currently stops node runners from doing this?

I don’t think this would create any new incentives for that behavior, would it?

Maybe I don’t understand what you mean by “auto-compound every session”

Currently node runners have no incentive to “Edit Stake” because right now they have an equal chance of earning the same reward by staking as close to the minimum as possible. If we implement @Andy-Liquify’s solution, people will have an incentive to “Edit Stake” as soon as they receive their relay reward, causing transaction bloat in the blocks.

I’m suggesting we keep the 15k thresholds to alleviate this concern.

Right now, the network has a 15k threshold before compounding makes sense. We could leave that in place by forcing multipliers to be integers.

What I’m trying to say is that even if we put the 15k thresholds in place, nothing stops the blockchain from receiving “Edit Stake” transactions from nodes that haven’t yet reached the next 15k threshold, which means it isn’t a sufficient measure to protect the blockchain from transaction bloat. If we modify the reward lifecycle as I mentioned in my reply to @Andy-Liquify, making this auto-compound part of the protocol, we can avoid the risk of this incentive bloating the blockchain at a systemic level.

Will they though? My suggestion is to floor to 15k windows. This means they will only benefit from each extra 15k they stake.

1 Like

I’m trying to understand what this changes about a node runner’s incentive to do this. Here’s my current mental model; please correct it:

It’s currently possible for nodes to spam the blockchain with “edit stake” transactions (to change service-url, relay chain ids etc…).

There just isn’t an incentive for nodes to do it.

This introduces no new incentives to send an “edit stake” transaction. Once you have 15k POKT accrued, it just changes that transaction from “stake a new node” to “edit the stake of an existing node.”

Caveats: In the initial transition, there will briefly be more edit-stake transactions as node runners consolidate, but that seems like a one-off concern.

Additionally: node-runners might experiment in the future to find the optimal balance, but again it seems small compared to the spam-style risk you’re describing.

Maybe staking transactions should be more expensive, and maybe it is worth considering, but it seems unaffected by this proposal.

Is your point that it’s easier for operators to say:

if (nodeBalance > 0)
  editStake(prevStake + nodeBalance)

than it is for them to say:

if (nodeBalance > 15000 * 10^6)
  editStake(prevStake + nodeBalance)

So we should increase the tx fee of a stake to explicitly discourage lazy operators?

edit ^^ I keep nit-picking my code lol, I think the idea is apparent so I’ll stop now.

Correct, this suggestion won’t add anything additional to the state data (just a few additional lines in the function I mentioned).

This is why I suggested binning to 15k, so users will only receive increased rewards at each additional 15k staked (up to a defined maximum).

So if there are 15k steps… wouldn’t it be easier for node runners to just automate staking any incoming rewards, so that once they hit the next 15k step they start receiving the increased rewards immediately?

I think the issue here is that there will be an incentive to automate staking increases, even if the reward is delayed until the next 15k step is reached. If the majority of the network automates edit stake, it will create bloat which the network isn’t prepared for.

From what I’m gathering, the issue is the risk of bloat and finding a protection against it.

1 Like

Great minds think alike! (and so do ours)

It seems like weighting either rewards or session-selection would have a similar effect, with the tradeoffs being:

Reward scaling:
++ Easier to implement
++ no state bloat
– Wilder variations in short-term earnings (but same avg amount over time)

Session selection:
– Harder to implement
– State bloat / complexity
++ No change to rewards-consistency (but it doesn’t matter in the long run)

So I’m on board with the reward scaling (my condolences to node-services who will have to explain even wilder variations in rewards to every new customer).

1 Like

Would they though? I certainly wouldn’t advise people to lock tokens up for 21 days with no benefit.

It makes sense to sweep them to cold wallets (which I know a lot of other providers do instantly after a claim anyway).

But I can see what you’re getting at! It wouldn’t be hard to put barriers in place for this, i.e. a stake request would fail if it doesn’t bump the node up into the next 15k bin.

2 Likes