PIP-31: Unleashing the potential of POKT

I’m curious how you think this would work. Below is a screenshot from GANDALF showing what ETH Archival Tracing (0028) was making when I released GANDALF. Not much has changed from then to today (I just checked).

In a balanced network, there would be 2,589 servers on 0028 and they would be generating .31 POKT per day each (with MaxChains at 15 like today). However, 0028 is overstaked at 6,062 servicers (which is 3,473 more than it should have), which means the average reward per servicer is .13 POKT.

Should 0028 get increased rewards when it is overstaked at 234% (when it should be at 100%)? Because 0028 typically serves heavier calls, how much more should its rewards be increased to account for them?

How About Double

How about doubling 0028 rewards? If rewards are doubled, then 0028 would generate .26 POKT instead of .13 POKT per servicer per day. We are talking about less than $0.08 worth of value a month by “doubling” 0028 rewards. How is that at all worth the DAO’s time and effort, especially since 0028 could be making a whopping .31 POKT per day if c0d3r weren’t staking over 4k of their nodes on 0028, making it over-provisioned in the first place?
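For anyone who wants to check the arithmetic behind those figures, here is a quick Python sketch. The per-node rewards and node counts come from the numbers above; the ~$0.02 POKT price is my own assumption used to land on the sub-$0.08/month figure.

# Back-of-the-envelope check of the 0028 numbers above.
POKT_PRICE_USD = 0.02           # assumed price, not a quoted figure
BALANCED_NODES = 2_589          # nodes 0028 "should" have in a balanced network
ACTUAL_NODES = 6_062            # nodes currently staked on 0028
REWARD_BALANCED = 0.31          # POKT/day per node if the chain were balanced

# The chain's total daily mint is set by its relay volume, so per-node
# rewards dilute as more nodes stake on it.
chain_daily_mint = BALANCED_NODES * REWARD_BALANCED     # ~802 POKT/day
reward_actual = chain_daily_mint / ACTUAL_NODES         # ~0.13 POKT/day

# Doubling the 0028 multiplier doubles the per-node reward...
reward_doubled = 2 * reward_actual                      # ~0.26 POKT/day
# ...which is worth roughly this much extra per node per month:
extra_usd = (reward_doubled - reward_actual) * 30 * POKT_PRICE_USD
print(f"{reward_actual:.2f} -> {reward_doubled:.2f} POKT/day, +${extra_usd:.2f}/month per node")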

Do you see how it would be impossible to even work out the rewards for an archival chain like 0028? c0d3r wants to increase the rewards for 0028 because the calls are heavier, which today would benefit c0d3r (despite c0d3r already overstaking on 0028), while a provider like POKTScan, who is not over-provisioning 0028, would see their rewards across all their chains reduced to pay c0d3r more for the archival chain they are over-provisioning.

Balance First

Again, what is going through all this effort worth if the net effect is less than $0.08 a month? If we first fix MaxChains, then we could talk about archival RTTMs, because a provider like c0d3r would then be heavily incentivized NOT to overstake on a chain like 0028.

If MaxChains were set to 3 and c0d3r was contributing to overstaking on 0028, each of their nodes on 0028 would be making -2.68 POKT per day. That is a substantial reason to move to another chain instead of overstaking.

In reality, because 0028 has so few relays, most likely small providers would take over 0028 and large providers like c0d3r would focus elsewhere. Trying to do RTTM per chain would only create MORE imbalances, because a provider like c0d3r would be rewarded for overstaking, while a provider like POKTScan would have their rewards reduced.

You have to balance the network with GANDALF before doing anything else with chain rewards.

P.S. I’m not suggesting that c0d3r is submitting this proposal to unfairly increase their rewards, I’m just pointing out the reality of what would happen with 0028, as it was the example given. Only love on this end @bulutcambazi :slightly_smiling_face:

2 Likes

Once the feature ships, we don’t need to change anything. It adds flexibility but doesn’t require any action. However, I’d recommend:
– Don’t change anything for any of the existing full-node chains. They remain at the base 1x multiplier.
– Set a 2x multiplier for Archival and Archival Trace nodes. They are indeed at least 2x more costly to run.
– New chains are discussed as they come. RTTM will just be a new part of the proposal.

recognise that this is all as a % of 100.

I understand why you say that. But I disagree with this. I think any chain that is not at the base multiplier (i.e. 1x) should not be more inflationary than 1x chains. If it costs node operators more to run those nodes, and if it costs the network more in terms of tokens minted, those chains should also cost more for portal operators to serve. Otherwise, it would be unfair.

Please don’t consider the motivation of RTTM change just to be fairer to archival / trace nodes. The real motivation is enabling LLM and other generative AI scenarios. So, a new chain should not get allow-listed at the cost of other chains. This is not a zero sum game, or a limited pie to be shared. We are hoping to expand the pie.

Each relay is 100x to 1000x more costly for a node runner to serve. They should be properly rewarded to be able to serve them. And each relay is also much more valuable than a simple blockchain RPC relay. So, the demand side should pay accordingly.

2 Likes

Shane, I’d keep what GANDALF wants to achieve separate from this proposal. This proposal is about:

  • The principle: Equal rewards for equal value. If a chain brings larger value, it should be rewarded as such (and should also cost as much on the demand side in terms of burns). Can we agree that equal pay for equal work is, by itself, a good and fair thing to have?
    • Incidentally, this helps your cause, too. One of the motivations of GANDALF is supporting the little guy, right? With the right per-chain incentives, the little guy can afford such expensive chains more easily with or without GANDALF. (Which is not a problem for a larger node runner, because they can more easily use economies of scale by spreading the cost)
  • The strategic move: Unleash what POKT can do. Today, pretty much the maximum complexity that POKT can run is archival trace nodes. At current inflation rate and current POKT price, anything beyond that doesn’t make much economic sense. If Pocket ever wants to do more than serving blockchain RPC, it will need higher rewards to compensate the higher costs.
  • The tactical move: We will have a new tool for the demand side and for allow-listing new chains. Imagine there is ABC coin, and they want to be supported by Pocket. Now, they could commit 10M POKT to be burned and, in exchange, set the rewards at a 5x multiplier for as long as their commitment lasts. This can become very handy for chains that are currently in development but will come online as the bull market approaches. (A rough sizing sketch follows below.)
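To make the ABC example a bit more tangible, here is a rough Python sizing sketch. Everything in it is an assumption for illustration: the relay volume, the base RTTM value, and the framing that the commitment is drawn down against the extra mint the 5x multiplier creates. None of these numbers are part of the proposal.

# Illustrative only: how long a hypothetical 10M POKT commitment might last
# if it is burned to offset the extra mint created by a 5x multiplier.
COMMITTED_POKT = 10_000_000
MULTIPLIER = 5
BASE_RTTM_POKT = 0.00085        # assumed base RelaysToTokensMultiplier (~850 uPOKT/relay)
RELAYS_PER_DAY = 5_000_000      # assumed relay volume for the new chain

extra_mint_per_relay = BASE_RTTM_POKT * (MULTIPLIER - 1)    # mint above the 1x baseline
extra_mint_per_day = extra_mint_per_relay * RELAYS_PER_DAY  # ~17,000 POKT/day
days_covered = COMMITTED_POKT / extra_mint_per_day          # ~590 days under these assumptions
print(f"10M POKT offsets the 5x boost for ~{days_covered:,.0f} days")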
1 Like

Re Feature 2, I think this can be a useful v0 addition (as opposed to queuing it up for v1), as it solves a fairly prominent pain point re non-custodial. If discussions prove this feature to be less controversial than Feature 1, then, as a v0 consensus-breaking change, you may wish to break this into two separate proposals and give the community the chance to decide on the features separately from each other. (Of course, if both features are approved, they can be developed in tandem into a single build.)

Re Feature 1, following are some thoughts in no particular order.

-I agree that per-chain RTTM and GANDALF are separable issues. Whether or not it may be in the best interest of Pocket Network to reduce MaxChains has no bearing on the utility of per-chain RTTM (addressing this comment more to @shane than the present authors).

-While per-chain RTTM is a utility that I think POKT needs, I am not convinced the utility is needed in v0. Why not wait until v1, where it can be added as part of a holistic approach to mint and burn? By holistic approach I mean, in addition to differentiated handling of chains, also differentiated handling of geozones (some geozones are more expensive to operate in than others) and call types (some calls are more expensive to service than others).

-In your replies you allude to the need to adjust fees on the demand side in tandem with adjusting the RTTM multiplier. This is the correct approach. However, there is nothing in the proposal that I can see that addresses this aspect. This is another reason I lean toward waiting until v1, where the demand and supply sides can be worked out synergistically. Per-chain RTTM implies per-chain burn once burn turns on, if we want to keep mint bounded, as your comments imply.

-Re auth/FeeMultipliers: I think multipliers for transaction fees per message type are completely orthogonal to the discussion at hand and not useful as a vehicle to accomplish your purposes (e.g., 0.01 POKT for one message type vs 0.02 for another).

Re @cryptocorn’s comment “recognize that this is all as a % of 100” - I suggest you go back and re-engage this point; I was not satisfied by your response. We have a long way to go before mint < burn; for the time being they are fairly decoupled, and from a system-management point of view it makes sense to have a target system-average RTTM and a multiplier across chains that ~averages to 1. That being said, from the point of view of maturity, where mint is all “paid for” by clients via burn, I understand your response. Note that it is app burn (whether self-stake being burned or portal stake being burned, which in turn is billed to portal clients) that is the ultimate balancer to mint. The current Gateway fee is just a precursor to this eventual protocol-level fee collection mechanism.
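To illustrate what “a multiplier across chains that ~averages to 1” means in practice, here is a minimal Python sketch. The chain IDs, relay counts, raw multipliers, and the 850 uPOKT system-average RTTM are all made-up inputs for illustration.

import math

TARGET_AVG_RTTM = 0.00085   # assumed system-average RTTM target (POKT per relay)

relays = {"0021": 300e6, "0009": 200e6, "0028": 1e6}        # assumed daily relays per chain
raw_multiplier = {"0021": 1.0, "0009": 1.0, "0028": 2.0}    # e.g. 2x for an archival trace chain

total_relays = sum(relays.values())
# Relay-weighted average of the raw multipliers:
weighted_avg = sum(relays[c] * raw_multiplier[c] for c in relays) / total_relays

# Rescale so the weighted average comes back to 1; aggregate mint then
# stays pinned at TARGET_AVG_RTTM * total_relays.
per_chain_rttm = {c: TARGET_AVG_RTTM * m / weighted_avg for c, m in raw_multiplier.items()}

aggregate_mint = sum(relays[c] * per_chain_rttm[c] for c in relays)
assert math.isclose(aggregate_mint, TARGET_AVG_RTTM * total_relays)

The catch is that the relay mix is only known after the fact, so any normalization of this kind is an estimate based on recent history.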

-Apart from the discussion of implementing in v0 or waiting until v1, I see no reason that approving and implementing the code change to allow per-chain RTTM should depend on first coming up with the best way to determine parameter values. There is precedent for approving a code change first and deferring the mechanism used to alter parameter values away from a default set.

-That being said, I foresee there possibly arising a dynamic pricing feedback mechanism… too little coverage of a chain (or geozone - see previous comment) feeds into increasing the RTTM on a chain; whereas being overprovisioned feeds into decreasing the RTTM on a chain. This can be automated so that there is very little manual work needed to set values.
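Purely as an illustration of that feedback idea (not a proposed mechanism), a toy controller could look like the Python below; the gain, floor, and ceiling are arbitrary knobs I made up.

def adjust_multiplier(current: float, nodes_staked: int, nodes_target: int,
                      gain: float = 0.1, floor: float = 0.5, ceiling: float = 5.0) -> float:
    """Nudge a chain's reward multiplier down when over-provisioned, up when under-provisioned."""
    provisioning_ratio = nodes_staked / nodes_target   # >1 means over-provisioned
    proposed = current * (1 + gain * (1 - provisioning_ratio))
    return min(max(proposed, floor), ceiling)

# Example using the 0028 figures quoted earlier in the thread (overstaked ~234%):
print(adjust_multiplier(current=1.0, nodes_staked=6062, nodes_target=2589))  # ~0.87, drifting down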

2 Likes

If we are talking about rewards per chain, then we have to talk about rewards per chain as a whole. Trying to limit the conversation about rewards per chain to only talking about RTTMs and ignoring MaxChains is like trying to fix a broken engine with new tires.

Farther down in your comment, you again say :point_down:

Again, you believe that this is a means to achieve proper incentive per chain “without GANDALF”. Clearly GANDALF is related, and this is being suggested as an alternative, so trying to separate them is impossible.

I don’t disagree with the per-chain RTTM concept and have supported it in the past. The issue is that you are trying to introduce it into a system that is rigged to only benefit the larger providers. It is akin to trying to establish “equality” in a system whose foundation is rigged for those with infrastructure power.

Seeing that the principle of equality is part of your motives here, I invite you to first join in fixing a more foundational root issue that has crushed the little guy, which is MaxChains. Instead of the DAO figuring out the “right per-chain incentive”, we de-rig the system from requiring someone to run 15 chains to get network-average rewards. Make it so anyone with a chain node can participate in POKT, instead of requiring them to run 14 other chains as well. It is a no-brainer for those who want little guys to re-join the POKT ecosystem.

You are mistaken in believing that RTTM per chain will have a meaningful impact on the little guy. If we look at the data today, by doubling archival chains as you have suggested, a chain like BSCA would generate a whopping $0.26 more per month for each of its 745 servicers. This means that someone is supposed to run a BSCA node for $0.52 per month. It also means that BSCA as a whole generates $391.87, which is spread across a few node runners.

According to the numbers, though, larger enterprises like yourself would benefit the most. By doubling ETH Archival Tracing, it would go from $482.75 to $965.50 per month, and since c0d3r has the majority of ETH Archival Tracing, most of that additional $482.75 would go to you. Those with the largest numbers of nodes, like yourself, would objectively benefit the most… so it is a misunderstanding to suggest this benefits the little guys.
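For transparency, here is the arithmetic I’m using, in Python. The servicer count and dollar totals are the figures quoted above; I’m reading $391.87 as the chain-wide monthly total after doubling.

BSCA_SERVICERS = 745
BSCA_TOTAL_DOUBLED = 391.87                                  # USD/month chain-wide after doubling
per_servicer_doubled = BSCA_TOTAL_DOUBLED / BSCA_SERVICERS   # ~$0.53/month each
extra_per_servicer = per_servicer_doubled / 2                # ~$0.26/month gained by doubling

ETH_TRACE_MONTHLY = 482.75                                   # USD/month chain-wide today
print(f"BSCA: +${extra_per_servicer:.2f}/servicer/month")
print(f"0028: ${ETH_TRACE_MONTHLY:.2f} -> ${2 * ETH_TRACE_MONTHLY:.2f}/month chain-wide")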

Benefit The Little Guy

Instead of increasing the rewards for archival chains by a few cents each month, reducing MaxChains would reduce the cost of infrastructure for small node runners by hundreds or thousands of dollars.

Example: Take a node runner with 10 POKT nodes. What would be better?

  1. Increasing the reward for BSCA from $0.26 to $0.52 per month (while still requiring them to run 14 other chains)
  2. Allowing them to generate full rewards on 1 chain and saving $100s by shutting down infra for 14 chains.

Objectively, #2 is better. If you were to first reduce MaxChains to 1 and then double rewards with dynamic RTTM, the reward bump would be more meaningful: $11.50 to $23.

When it comes to the principle of meaningfully bringing equality to node running, everything objectively starts with MaxChains.

Again, I agree with the notion, but we are again trying to put the cart before the horse. You want to start preparing POKT for “more than serving blockchain RPC”, but that can’t happen until there is balance. You mention using POKT for AI API calls… that is awesome, but someone running the AI would also have to run 14 chain nodes. Nothing makes sense until someone can participate in POKT with 1 data source.

Yes, let’s prepare for AI API, by making it possible to run 1 data source per node (not 15), then we can balance out the rewards per source. I would be happy to work with you to make this happen.

I don’t agree that this is the best tactical move. If you want to add new chains and enable chain adoption, then the best thing to do would be to allow the ABC chain community to join POKT and generate meaningful rewards by serving only ABC requests on POKT. The way you are looking at it creates an incentive for existing providers, like yourself, to run unprofitable chains. In reality, for a new chain like ABC, there are likely folks in that community who already run ABC nodes and would join POKT to further monetize their existing nodes.

When we first started adding new chains, this marketing strategy worked. POKT had multiple marketing opportunities directly with foundations (like AVAX and BSC) to promote their chain launch on POKT and encourage their communities to monetize their nodes on POKT. It worked in the past and could easily work again for new chains if POKT went back to being a way for anyone to generate meaningful rewards from a SINGLE data source.

Ethereum’s mining wasn’t structured so that you had to have 15 video cards to generate average rewards for your hash power… you just needed 1. POKT is fundamentally backwards from every other project in crypto.

The system is currently rigged so that only those with the infra to support 15 chains can generate network average rewards. Trying to address these issues with per chain RTTM doesn’t address any core problems, but instead makes it so that those with the most influence over the DAO would benefit the most from per chain weighting. You still have not addressed how this would NOT lead to bureaucracy mayhem.

1 Like

Thanks for the response, I agree that using the auth/FeeMultipliers for two purposes would be confusing. Thanks for clarifying.

Just to confirm, is this correct?

I’m especially curious about the delegators: can any address be assigned as a delegator?


I think that this is the real deal. Could you share if you have any development on this subject?
While AI-related RPCs are not much different from blockchain RPCs, there are many important details around them.


The Balance

@shane, @Cryptocorn, and everyone else feeding into the subject of balancing:
I don’t believe it is important to discuss that here. This patch introduces tools that are needed, but it does not introduce any parameter change. This means that before and after implementing this change, the rewards per chain will remain exactly the same. We can then start discussions on a per-chain basis and vote on the changes in other threads.

The balance that you want to achieve is not feasible by the PNF; it must be automated. Also, it should (and can) be agnostic of the number of chains per node. We have developed such a method, based on the entropy of the staked nodes by chain; you can read about it in this thread (V1 fairness). The method does exactly that, modifying the RTTMs based on the staked nodes per chain. I encourage you to discuss automated balancing there.
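To give a flavor of what a staking-distribution signal looks like (this is NOT the method from the V1 fairness thread, just a toy illustration with made-up node counts), see the Python below.

import math

nodes_per_chain = {"0021": 9000, "0009": 7000, "0028": 6062, "0047": 500}   # made-up counts

total = sum(nodes_per_chain.values())
shares = {c: n / total for c, n in nodes_per_chain.items()}

# Normalized entropy of the staking distribution: 1.0 = perfectly even, lower = skewed.
entropy = -sum(p * math.log(p) for p in shares.values())
print(f"staking entropy: {entropy / math.log(len(nodes_per_chain)):.3f}")

# One possible per-chain signal: boost under-staked chains, dampen over-staked ones
# (a real mechanism would smooth and cap these values).
avg_share = 1 / len(nodes_per_chain)
print({c: round(avg_share / p, 2) for c, p in shares.items()})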

I’m not trying to censor the discussion, but it is completely irrelevant to this patch IMO. We should keep this thread clean and discuss the functionalities that are being proposed, not the effects of using them in one way or another.

2 Likes

You see value in creating the tool, even if there isn’t a plan on how to use it yet. I don’t agree with that approach.

If you are going to create a tool, then it needs to already have a path to usage. Without laying out how this tool will be used, then it… :point_down:

I also don’t believe this tool is needed in v0 unless there is a plan for using it. I think everyone agrees to having this kind of functionality in v1… the v0 argument hasn’t been made. Unless we are going to be unleashing AI APIs before v1, I don’t see why it should be a v0 priority when there are other more pressing v0 priorities.

If the goal really is to build this tool in v0 (which I’ve heard will come with a development cost) and not have a plan for how it will be used… that makes no sense to me, especially when there are concerns about how the usage will look. I’ve made clear arguments about why I don’t believe it will have the intended effects, and those effects are the motive for building the tool in the first place.

I believe we may just see this differently.

1 Like

Re “Balance”, I agree with @RawthiL that the mechanics of how to accomplish balance can be worked out in another thread, and in particular the v1 Fairness thread alluded to. However, I would think that hashing out the implications of this proposal (coupled with a PUP setting non-default values) on aggregate mint rate, inflation, potential reduction of rewards on some chains while raising rewards on others, etc. does belong squarely with this proposal.

The community has gone out on a huge limb to slash provider rewards to a level such that POKT can claim sub-5% inflation. Taking action that can muddy the waters of this narrative, which everyone has made sacrifices to achieve, may not be the most prudent. This is what I fear would happen if we were to pass this proposal in tandem with any mapping that does not ensure no increase in aggregate mint. On the other hand, a mapping that does ensure no growth in aggregate mint will, in most cases, come at the expense of a decrease of mint on main chains, which is a hard pill to swallow for node runners not focused on archival or specialty chains.

IMO: Regarding use cases involving archival chains etc. that may call for or benefit from a ~2x bump in RTTM compared to the system average, this simply is not worth pursuing in v0, but it absolutely should be part of the v1 plan.

That leaves LLMs. This is an exciting area to pursue. If there is a bona fide opportunity to capture real revenue-generating business in this arena in the pre-v1 time period, then I absolutely think this proposal to enable 10x or 100x or whatever rewards are needed to entice node runners to support such chains should go forward. BUT… it should be presented in collaboration with at least one v0 Gateway provider who will sign on to collecting sufficient revenue and sending sufficient POKT per relay to PNF for burning for such chains so as to fully pay for the service (or, at a minimum, have a mint/burn gap that is no wider than that of other chains, since we are still in a subsidy period). This solves the “balance” problem, allowing aggregate mint to remain bounded to <5% inflation while not decreasing the rewards on other chains.

1 Like

I think that this is the real deal. Could you share if you have any development on this subject?
While AI-related RPCs are not much different from blockchain RPCs, there are many important details around them.

Glad you asked @RawthiL ! Take a look here: Llama Demo over Pocket
This is an LLM implementation that runs on Pocket Network (private testnet) end to end. Imagine we could have this in Pocket Network in just a few weeks. These types of gen-AI queries take up to 15 seconds (even on a high-powered GPU machine), hence they cost WAY more. If we want to enable such advanced scenarios, we need to be flexible with how we reward the nodes running them so they can be feasible.

Also, I want to remind everyone that actually bringing up these scenarios takes time; I would love to be ready in time for when v1 ships.

That being said, I foresee there possibly arising a dynamic pricing feedback mechanism… too little coverage of a chain (or geozone - see previous comment) feeds into increasing the RTTM on a chain; whereas being overprovisioned feeds into decreasing the RTTM on a chain. This can be automated so that there is very little manual work needed to set values.

This is an interesting idea @msa6867 and could be a way to achieve what Gandalf wants to achieve in a different way. But there would still need a way to account for fundamentally different cost (sometimes 1000x) of running chains. In any case, success of one chain should not come at expense of another chain. New chains don’t share a pie, they grow the pie!

The issue is you are trying to introduce it into a system that is rigged to only benefit the larger providers.

@shane I don’t share your adversarial view.

  • It is not rigged; larger node runners are a necessity of the current economic climate. On average, a 60k node makes barely $8 a month. Even if the gateways were free, $8 wouldn’t pay the electricity bill of the computer it runs on (0.4 kW x 24 hrs x 30 days x $0.12 = $35). You need a large number of them to make it worth your while. At that point, you need at least part-time human attention to take care of them so they stay healthy and performant. And let’s not forget, performant gateways (needed for the cherry picker) are many times more costly than your Pocket node. At that point, you are not the little guy anymore.
  • This proposal wouldn’t benefit only the large node runners. In fact, the opposite: a single person could be running a very sophisticated (and highly rewarding) chain and be profitable, if we allow such chains. Imagine it were possible to make 10,000 POKT per day on a single machine, because what it offers is so valuable. New types of more agile node runners could show up; instead of running old-school chains, they would disrupt the big guys with innovative chains.
  • This doesn’t further unbalance chains. On average, ETH Archival Trace makes 0.066 POKT per day. That is $0.0495 per month. 5… cents… per… month… before rewards share! Doubling it is not a decision maker for any node runner. You know what is a decision maker these days, when most node runners are barely (if at all) profitable? Cost. Node runners are over-provisioned on some chains because they simply cannot afford even more chains (in both hardware and personnel costs) to be better balanced. So, I am not proposing this so that large node runners can nefariously make a few more cents per month per node.
  • If Gandalf passes and all the chains are balanced, they will all have the same revenue, so everyone will want to run the cheapest chains. Don’t you want to give some incentive to the unlucky ones running the more expensive chains for the same balanced overall rewards?

This proposal is about enabling some new scenarios and giving some new tools to both the supply side and the demand side. It is not about balancing (although it can help), and the two can live in harmony if both were to pass. So, can we please take Gandalf to its own post?

Regarding the “bureaucracy mayhem”: I don’t see it. A cost factor will be part of a chain proposal, and the onus will be on the proposers to justify any deviation from today’s defaults. It is not like we have an exciting chain event every day anyway. What happens for existing chains can also be a one-time vote.

1 Like

You first mentioned this proposal being for the little guys. They do exist, and there are still some folks that run POKT nodes from their homes, mostly using Node Pilot, though typically well below network average. GANDALF starts making the network conducive to such node setups, and this proposal was presented as a way to help them out as well… hence why I took the time to reason out why that would not be the case.

GANDALF is stalled at the moment because it either takes gateways changing their cherry picker parameters to check how many chains a node is staked with, or it takes a network upgrade where the network checks that the number of chains a node is staked on equals the allowed value before the node is put in a session.

So far PNI and PNF have been absent from the conversation, so there isn’t a clear path forward. Honestly, if you wanted to add MaxChains checking to sessions as part of this proposal, so that MaxChains can be adjusted, then support would make more sense, as it would actually touch on much of POKT’s economic needs IMO. We would start fixing the actual economic engine of POKT.

That 10,000 POKT going to a single machine comes out of the 220k POKT being minted per day via ARR. So with a single machine generating 4.5% of POKT’s total daily reward (which is $2,880 a month), the rewards on all the other RelayChainIDs get dramatically reduced.

Those without the hardware for LLMs (and 14 other chains) would see their rewards dramatically reduced, while the LLM providers capture a huge portion of POKT’s total rewards. Example :point_down:

This Proposal + POKT LLM

Let’s say this proposal passes, and POKT LLM is launched with a 150x RTTM weight since its calls use 150x more resources (15s / 0.1s).

If this POKT LLM gets a modest 2M relays a day, then since it has a 150x weight, it effectively adds 300M relays to the network.

If 300M equivalent relays were added to the network via POKT LLM, then every other RelayChainID’s reward would be reduced by 20%. This is because ARR only allows 220k POKT to be minted per day, so POKT LLM would take 20% of that mint due to its weight.
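Here is the arithmetic behind that 20%, in Python. The 2M relays and 150x weight come from the example above; the ~1.2B relays/day baseline is my assumption to make the share explicit.

ARR_DAILY_MINT = 220_000           # POKT/day cap under ARR
BASELINE_RELAYS = 1_200_000_000    # assumed current network relays/day
LLM_RELAYS = 2_000_000             # POKT LLM relays/day (example above)
LLM_WEIGHT = 150                   # example RTTM weight

effective_llm_relays = LLM_RELAYS * LLM_WEIGHT                        # 300M weighted relays
llm_share = effective_llm_relays / (BASELINE_RELAYS + effective_llm_relays)
print(f"LLM share of the fixed daily mint: {llm_share:.0%}")          # 20%
print(f"LLM mint: {ARR_DAILY_MINT * llm_share:,.0f} POKT/day")        # ~44,000 POKT/day
# Because total mint is capped, every other chain's rewards shrink by that same 20%.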

This means that everyone who does not have the infra to support POKT LLM would immediately have their rewards cut by 20%. Only those with the hardware chops to support POKT LLM would get the $40k being produced each month.

This 20% shift of rewards to LLM providers would only create $1.70 a day in protocol burn revenue (GatewayFeePerRelay). Is $255 of monthly burn worth shifting $40k of rewards to those with LLM capabilities? I’m not sure… as technically POKT LLM should be generating $7,650 if its GatewayFeePerRelay were properly weighted.

To make the whole network benefit from POKT LLM, from my perspective there would need to be a dynamic GatewayFeePerRelay per RelayChainID as part of this proposal. The more folks use the more expensive POKT LLM, the more POKT is minted, without compromising POKT’s desire to be low-inflation and eventually deflationary. I already modeled out an ARR that is not a flat 220k per day but is connected to what is being burned, so technically most of the economic work for this is already done in the Burn And 🥩 Harnessing (BASH) Deflation Economic Model.

Prepare POKT For POKT LLM

I can’t say this enough: I support this kind of move. I’m not entirely sure it works for v0, but I’m more than willing to consider it… so thank you for bringing this forward.

To actually make POKT v0 conducive to this kind of fundamental change, I see this proposal getting us 33% of the way there in its current form. To make this move towards serving POKT LLM, in a manner that does NOT further gut rewards for those without LLM hardware, and that produces fair protocol revenue (burn), I personally believe this proposal needs to address:

  1. MaxChains fix (balance the network without the need for complex DAO levers)
  2. Dynamic GatewayFeePerRelay per chain (so the protocol can charge LLM RelayChainIDs fairly)
  3. Accompanying replacement to ARR to account for chain specific GatewayFeePerRelay per chain (as mentioned before, I’ve already modeled out an ARR alternative that connects inflation to the burn rate, so I would consider this already done).

Launching this in v1 makes TONS of sense because most of this is already being planned or can be incorporated… but I’m open to v0 if it makes sense (hence why I’m putting time into fleshing this out).

Not trying to be adversarial, just being pragmatic about the economic effects of such a move and the wider ramifications :sweat_smile:

Thank you for all your responses :slightly_smiling_face:

What are the hardware requirements for this? What kind of GPUs would it require? This could be a very cool way to get old miners (like myself) putting GPUs to use… so more info on everything that would be required of folks to join POKT LLM would be super helpful for all this decision making.

Big fan of the work overall :slightly_smiling_face:

1 Like

Thanks for taking the time and sharing your thoughts.

  1. MaxChains fix (balance the network without the need for complex DAO levers)

MaxChains is orthogonal to what we are discussing. This proposal is not related to Gandalf. If/when Gandalf passes, we’ll be happy to help design and make the changes. Until then, let’s keep them separate please.

… then every other RelayChainID’s reward would be reduced by 20%
2. Dynamic GatewayFeePerRelay per chain (so the protocol can charge LLM RelayChainIDs fairly)

We want to grow the pie, not find new ways to share it, and in particular we have no intention of making anyone’s piece smaller. Therefore, the intention is NOT to get the rewards at the expense of any other current chains. I think #2 is a good idea, and we can certainly do it in the scope of this project. My assumption, perhaps wrongly, was to do it off-chain. But I think you are right; it is indeed better to do it on-chain.

  3. Accompanying replacement to ARR to account for chain-specific GatewayFeePerRelay per chain (as mentioned before, I’ve already modeled out an ARR alternative that connects inflation to the burn rate, so I would consider this already done).

ARR limits inflation, not the total volume. So the good news is that it won’t need to change at all. Any adjustment to RTTM will simply need to specify a convincing GatewayFeePerRelay as well.

1 Like

If this proposal is focused on supporting novel RelayChainIDs like the POKT LLM (which is what this conversation has shifted towards), then GANDALF wouldn’t apply as directly. If this proposal is used to balance rewards across current chains (which keeps being suggested), then we do need to consider the entire reward-per-chain economic system.

:+1:

GatewayFeePerRelay is currently off-chain and was universally set by ARR. It may not need to be on-chain, but it would need to be in ARR (with a method/system to adjust it) or in an ARR replacement.

1 Like

I think that if this is implemented and a chain’s RTTM is changed, then the GFPR for that chain should be changed proportionally. The DAO should not subsidize the gateways’ operations by charging 1x for a chain that is paid 5x. I think that this is not an issue, as it can be simply resolved by setting an increased burn to match the increased mint.
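A small sketch of that proportionality, in Python; the base RTTM and GFPR values are placeholders, not current parameter values.

BASE_RTTM = 0.00085   # assumed base RelaysToTokensMultiplier (POKT per relay)
BASE_GFPR = 0.00002   # assumed base GatewayFeePerRelay (POKT per relay)

def per_chain_params(multiplier: float) -> tuple[float, float]:
    """Scale mint and burn per relay by the same chain multiplier."""
    return BASE_RTTM * multiplier, BASE_GFPR * multiplier

for m in (1, 2, 5):
    rttm, gfpr = per_chain_params(m)
    print(m, rttm, gfpr, round(rttm / gfpr, 1))   # the mint/burn ratio stays constant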

Regarding LLMs (or any new chain for that matter), the process is the same: set up the chain, vote on the new chain’s price (if needed), set GFPR accordingly, and that’s it. The only difference from the current whitelisting of chains is that a price should be agreed on (if needed).

And this is why this matters. There is A LOT to discuss around this, but without this upgrade we cannot move forward into thinking about this.


I think that we are over-thinking a simple and needed change; other protocols (like LAVA) already have a per-chain multiplier. It is not a wild idea, and the subject of setting different prices per chain has been discussed multiple times. Now that we have a PR that gives us this feature, we should be focused on reviewing it, making sure it is safe and that it runs on the testnet. All the discussion of probable scenarios that could arise from having this feature is not pertinent to the feature itself. We can just decide that this PR is applied without changing anything (all RTTMs stay the same) and then sit down and discuss.

2 Likes

Thanks for putting up an easy to understand example. Two thoughts:

  1. What about adding an additional constant that addresses under/over-provisioning of chains? So: RTTM * RTTMM (e.g. ‘2’ for archival) * CP (Chain Provision; for 0028 it would be something like 0.3)

So you could automatically adjust rewards to both fairly compensate more computationally expensive work AND incentivize efficient chain provisioning? (A small sketch of this combined formula follows this list.)

  2. As a layman - adding ‘AI’, in any capacity, to Pocket has to be a good thing and a major marketing tool. If Pocket can in theory service LLMs and then do a press release that we now service AI, even if the true situation is more complicated, I think this helps with both retail and institutional investor interest and can be considered as much a marketing expense as a building or tokenomics expense.
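Here is the small Python sketch of the combined formula from point 1 mentioned above; the base RTTM value is an assumption, and the 0.3 provisioning factor is the 0028 example from earlier in the thread.

BASE_RTTM = 0.00085   # assumed base RelaysToTokensMultiplier (POKT per relay)

def effective_rttm(rttmm: float, chain_provision: float) -> float:
    """Combine a per-chain cost multiplier (RTTMM) with a provisioning factor (CP)."""
    return BASE_RTTM * rttmm * chain_provision

# 0028: archival (RTTMM = 2) but heavily over-provisioned (CP ~ 0.3)
print(effective_rttm(rttmm=2.0, chain_provision=0.3))   # 0.00051 POKT per relay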
1 Like

Governance Transactions Formatting

With the new RTTMmap parameter, it’s not clear to me how transactions would be submitted to update the parameter value.

For example, the way that the SupportedBlockchains parameter is updated is as follows:

pocket gov change_param a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4 mainnet pocketcore/SupportedBlockchains '["0001","0003","0004","0005","0006","0009","000A","000B","000C","000F","0010","0012","0021","0022","0024","0025","0026","0027","0028","0040","0044","0046","0047","0048","0049","0050","0051","0052","0053","0054","0056","0057","0058","0059","0060","0061","0063","0065","0066","0067","0068","0069","0070","0071","0072","0074","0075","0076","0077","0078","0079","0080","0081","03DF"]'

The new RTTMmap parameter should work similarly, in that PNF only has to submit one transaction in order to modify an array of values.

Could you indicate how such a governance transaction would be formatted for the RTTMmap?

Has your local devnet testing included submitting governance transactions to update the parameter?

ARR Considerations

The per-chain RelaysToTokensMultiplier (RTTM) significantly complicates emission control policies, like the currently active ARR.

ARR operates on the assumption that there is a single RTTM that generates a given mint for a given relay count. Since we know that the same multiplier was applied to all relays, we can make weekly adjustments that will average out to the target daily minting. See this spreadsheet to see this in action.

Introducing chain-specific multipliers means that we can no longer assume that every relay will be multiplied by the same value. We can’t predict how many relays each chain will process either, since it’s an emergent product of demand. Since each chain may have different multipliers, and we can’t predict relays-per-chain, this seems to break the existing methodology for emission controls.
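To make the concern concrete, here is a small Python illustration; the chain IDs, relay forecasts, and override value are invented for the example.

TARGET_DAILY_MINT = 220_000                      # POKT/day (ARR's target)

relays_forecast = {"0021": 300e6, "0009": 200e6, "0028": 1e6}   # assumed relays/day
rttm_map = {"0028": 0.0017}                      # hypothetical per-chain override (POKT/relay)

# Mint already committed by chains carrying an override:
mapped_mint = sum(relays_forecast[c] * rttm_map[c] for c in rttm_map)

# ARR can only steer the remaining chains through the single global RTTM:
unmapped_relays = sum(r for c, r in relays_forecast.items() if c not in rttm_map)
global_rttm = (TARGET_DAILY_MINT - mapped_mint) / unmapped_relays
print(f"required global RTTM: {global_rttm:.6f} POKT/relay")
# ...and this only hits the target if next week's per-chain relay mix matches the forecast.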

I’m basing this analysis on this logic outlined in the proposal:

When a node mints relay rewards, it looks up the pos/RelaysToTokensMultiplierMap, and if the relayed chain is defined there, it adopts a custom multiplier for that chain to calculate the amount of relay rewards; otherwise it adopts the default multiplier of pos/RelaysToTokensMultiplier.

ARR modifies the RTTM, not the RTTMmap, and since the RTTMmap overrides the RTTM if a value is set, ARR is no longer controlling emissions for those chains.
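For readers skimming, the quoted logic is a map lookup with a fallback. Restated in Python (the actual implementation is the Go change in the PR, not this), using the parameter values from the node-params example later in this thread:

def relay_reward_upokt(chain_id: str, relays: int,
                       rttm_map: dict[str, int], default_rttm: int) -> int:
    """Use the chain's mapped multiplier if present, else the global RelaysToTokensMultiplier."""
    return relays * rttm_map.get(chain_id, default_rttm)

rttm_map = {"0001": 12345, "0028": 3141592, "03DF": 42}
print(relay_reward_upokt("0028", 10, rttm_map, default_rttm=8461))   # mapped chain -> 31,415,920 uPOKT
print(relay_reward_upokt("0009", 10, rttm_map, default_rttm=8461))   # falls back -> 84,610 uPOKT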

That said, this proposal does not suggest any parameter values for the new RTTMmap, so it will not break ARR until a PUP is approved to modify the multiplier for a specific chain. We could therefore think of this PIP as a decision to grant ourselves the option to activate this feature, rather than an implicit approval of any RTTMmap values, as msa alludes to:

This PIP can proceed without answering the ARR question if we’re considering it under this framing. However, any subsequent PUP to set a RTTMmap value should be coupled with a plan to adapt or replace ARR.

The proposal authors have suggested this could be addressed by increasing the GatewayFeePerRelay for specific chains in the same proportion. However, it should be noted that GatewayFeePerRelay is not yet offsetting today’s mint rate (i.e. burn ≠ mint), so scaling the burn in proportion is not a solution by itself. This would have to be part of a broader strategy to set mint = burn, which is not viable yet at current daily relay counts.

Implementation Details and Moving to a Vote

Before moving to a vote, the Implementation section should be filled out as follows, to follow the standard for previous protocol upgrades (note: I’m assuming we’ll adopt RC-0.11.0 as the version number for this potential upgrade):

  • If this proposal is approved, and after testing has taken place for BETA-0.11.0 on Testnet, RC-0.11.0 will be published for node runner adoption. Anyone can monitor node runner adoption of RC-0.11.0 using the Servicers and Validators version pie charts displayed here or the equivalent “Staked Tokens by Node Version” chart displayed here.
  • Once ≥67% of validator power has updated to this version, an upgrade height will be selected by the Foundation Directors based on the preferences expressed by node runners (e.g. through Discord polls) and they will pass this height to the chain through the pocket gov upgrade transaction.
  • The upgrade height chosen by the Foundation Directors will be communicated in advance to ensure node runners are prepared to react if necessary. A time will be targeted that maximizes the availability of node runners, accounting for all time zones and working hours.
  • Once the network is upgraded, each of the new features will be enabled by the Foundation submitting the pocket gov enable txs and working with the protocol developers to ensure there are no issues.
  • For avoidance of doubt, this PIP does not approve any values for the new RelaysToTokensMultiplierMap parameter, which means the global RelaysToTokensMultiplier will continue to apply to all chains until a subsequent PUP is approved modifying the new RelaysToTokensMultiplierMap parameter.
2 Likes

Thank you for your questions and comments!

Could you indicate how such a governance transaction would be formatted for the RTTMmap?

Here’s an example command to submit a tx.

$ export POKT=<path to pocket>
$ export DAO=<DAO address>
$ export NETWORK=mainnet
$ $POKT gov change_param $DAO $NETWORK pos/RelaysToTokensMultiplierMap '{"0001":"12345","03DF":"42","0028":"3141592"}' 10000

Once it’s accepted by the network, the new parameter looks like this.

$ $POKT query node-params
2023/09/19 11:30:20 Initializing Pocket Datadir
2023/09/19 11:30:20 datadir = /home/john/.pocket
http://localhost:8082/v1/query/nodeparams
{
    "dao_allocation": "10",
    "downtime_jail_duration": "3600000000000",
    "max_evidence_age": "120000000000",
    "max_jailed_blocks": "37960",
    "max_validators": "5",
    "maximum_chains": "20",
    "min_signed_per_window": "0.600000000000000000",
    "proposer_allocation": "1",
    "relays_to_tokens_multiplier": "8461",
    "relays_to_tokens_multiplier_map": {
        "0001": "12345",
        "0028": "3141592",
        "03DF": "42"
    },
    "servicer_stake_floor_multipler": "15000000000",
    "servicer_stake_floor_multiplier_exponent": "1.000000000000000000",
    "servicer_stake_weight_ceiling": "15000000000",
    "servicer_stake_weight_multipler": "1.000000000000000000",
    "session_block_frequency": "4",
    "signed_blocks_window": "10",
    "slash_fraction_double_sign": "0.000001000000000000",
    "slash_fraction_downtime": "0.000001000000000000",
    "stake_denom": "upokt",
    "stake_minimum": "15000000000",
    "unstaking_time": "300000000000"
}

Has your local devnet testing included submitting governance transactions to update the parameter?

Yes, we conducted E2E testing on our devnet that includes:

  • submit gov transactions
  • keep sending relays throughout a test (before/after gov transactions) and wait until relay rewards are minted
  • verify the amount of rewards

Before moving to a vote, the Implementation section should be filled out as follows, to follow the standard for previous protocol upgrades (note: I’m assuming we’ll adopt RC-0.11.0 as the version number for this potential upgrade):

We’ll update the post.

2 Likes

This was a helpful example, thanks. It seems easy enough to format and submit changes to multiple chains at once.

1 Like

This proposal is now available for voting at Snapshot

1 Like

In the future, I’d like to ask that proposals be labeled in factual terms, instead of subjective framing. The DAO voting page shouldn’t be used to frame the proposal itself IMO.

3 Likes

This current proposal feels like déjà vu – as if we’ve seen this movie before, Stake Weighting 2.0.

I might be a bit late to the party, but I believe this was a golden opportunity to make the changes necessary for GANDALF. I fully support the idea of having higher RTTMs for different chain IDs and regions, something I’ve always advocated for. However, I can only see two viable ways to implement this while preventing abuse. Definitely not in the protocol’s current form.

  1. Independent staking for each chain ID.
    or
  2. One staked Pocket node for one Chain.

I don’t believe the DAO should vote on a feature without first assessing its potential impact on the network. This statement appears counterintuitive. I see the potential here for a provider, like c0d3r or any other, to run archival nodes and stake only trace/archival nodes to maximize rewards. Indies won’t run archival nodes due to resource constraints.

1 Like