PIP-31: Unleashing the potential of POKT

If we are talking about rewards per chain, then we have to talk about rewards per chain as a whole. Trying to limit the conversation about rewards per chain to only talking about RTTMs and ignoring MaxChains is like trying to fix a broken engine with new tires.

Farther down in your comment, you again say :point_down:

Again, you believe that this is a means to achieve proper incentive per chain “without GANDALF”. Clearly GANDALF is related, and this is being suggested as an alternative, so trying to separate them is impossible.

I don’t disagree with the per chain RTTM concept and have supported it in the past. The issue is you are trying to introduce it into a system that is rigged to only benefit the larger providers. It is akin to trying to establish “equality” in a system whose foundation is rigged for those with infrastructure power.

Seeing that the principle of equality is part of your motives here, I invite you to first join in fixing a more foundational root issue that has crushed the little guy, which is MaxChains. Instead of the DAO figuring out the “right per-chain incentive”, we de-rig the system from requiring someone to run 15 chains to get network average. Make it so anyone with a chain node can participate in POKT, instead of requiring them to run 14 other chains as well. It is a no-brainer for those who want little guys to re-join the POKT ecosystem.

You are mistaken in believing that RTTM per chain will have a meaningful impact on the little guy. If we look at the data today, doubling archival chain rewards, as you have suggested, would generate for a chain like BSCA a whopping $0.26 more per month for each of the 745 servicers. This means that someone is supposed to run a BSCA node for $0.52 per month, and that BSCA as a whole generates $391.87 per month, spread across a few node runners.

According to the numbers though, larger enterprises like yourself would benefit the most. Doubling ETH Archival Tracing would take it from $482.75 to $965.50 per month, and since c0d3r has the majority of ETH Archival Tracing, most of that extra $482.75 would go to you. Those with larger amounts of nodes, like yourself, would objectively benefit the most… so it is a misunderstanding to suggest it benefits the little guys.
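
The back-of-envelope figures above can be reproduced with a quick script. All dollar amounts are the ones asserted in this post, not live on-chain data:

```python
# Sanity-check of the per-chain reward figures quoted above.
# Dollar figures come from this post itself, not from chain data.

bsca_total_doubled = 391.87   # claimed monthly USD for BSCA after doubling
bsca_servicers = 745

per_servicer = bsca_total_doubled / bsca_servicers
print(f"BSCA per servicer after doubling: ${per_servicer:.2f}/month")  # ~$0.53

# ETH Archival Tracing: doubling the RTTM doubles the monthly total.
eth_trace_now = 482.75
eth_trace_doubled = eth_trace_now * 2
print(f"ETH Archival Tracing doubled: ${eth_trace_doubled:.2f}/month")  # $965.50
```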

Benefit The Little Guy

Instead of increasing the rewards for archival chains by a few cents each month, reducing MaxChains would reduce the cost of infrastructure for small node runners by hundreds or thousands of dollars.

Example: Take a node runner with 10 POKT nodes. What would be better?

  1. Increasing the monthly reward for BSCA from $0.26 to $0.52 per month (while still requiring them to run 14 other chains)
  2. Allowing them to generate full rewards on 1 chain, saving $100s by shutting down infra for 14 chains.

Objectively #2 is better. If you were to first reduce MaxChains to 1, then double rewards with dynamic RTTM, the reward bump would be more meaningful: $11.50 to $23.

When it comes to the principle of meaningfully bringing equality to node running, everything objectively starts with MaxChains.

Again, I agree with the notion, but we are again trying to put the cart before the horse. You want to start preparing POKT for “more than serving blockchain RPC”, but that can’t happen until there is balance. You mention using POKT for AI API calls… that is awesome, but someone running the AI would also have to run 14 chain nodes. Nothing makes sense until someone can participate in POKT with 1 data source.

Yes, let’s prepare for AI API, by making it possible to run 1 data source per node (not 15), then we can balance out the rewards per source. I would be happy to work with you to make this happen.

I don’t agree this is the best tactical move. If you want to add new chains and enable chain adoption, then the best thing to do would be to allow the ABC chain community to join POKT and generate meaningful rewards by serving only ABC requests on POKT. The way you are looking at it creates incentive for existing providers, like yourself, to run unprofitable chains. In reality though, for a new chain like ABC, there are likely folks in that community that already run ABC nodes and would join POKT to further monetize their existing nodes.

When we first started adding new chains, this marketing strategy worked. POKT had multiple marketing opportunities directly with foundations (like AVAX and BSC) to promote their chain launch on POKT and encourage their community to monetize their nodes on POKT. It worked in the past and could easily work again for new chains if POKT went back to being a way for anyone to generate meaningful rewards from a SINGLE data source.

Ethereum’s mining wasn’t structured so that you had to have 15 video cards to generate average rewards per hash power… you just needed 1. POKT is fundamentally backwards from every other project in crypto.

The system is currently rigged so that only those with the infra to support 15 chains can generate network average rewards. Trying to address these issues with per chain RTTM doesn’t address any core problems, but instead makes it so that those with the most influence over the DAO would benefit the most from per chain weighting. You still have not addressed how this would NOT lead to bureaucracy mayhem.


Thanks for the response, I agree that using the auth/FeeMultipliers for two purposes would be confusing. Thanks for clarifying.

Just to confirm, is this correct?

I’m especially curious about the delegators, can any address be assigned as a delegator?

I think that this is the real deal. Could you share if you have some development on this subject?
While AI-related RPCs are not much different than blockchain RPCs, there are many important details around them.

The Balance

@shane @Cryptocorn and everyone else feeding into the subject of balancing.
I don’t believe that it is important to discuss here. This patch introduces tools that are needed, but it does not introduce any parameter change. This means that before and after implementing this change the rewards per chain will remain exactly the same. We can then start discussions on a per-chain basis and vote on the changes, on other threads.

The balance that you want to achieve is not feasible by the PNF; it must be automated. Also, it should (and can) be agnostic of the number of chains per node. We have developed such a method, based on the entropy of the staked nodes by chain; you can read about it in this thread (V1 fairness). The method does exactly that: it modifies the RTTMs based on the staked nodes per chain. I encourage you to discuss automated balancing there.

I’m not trying to censor the discussion, but it is completely irrelevant to this patch IMO. We should keep this clean and discuss the functionalities that are being proposed, not the effects of using them in one way or another.


You see value in creating the tool, even if there isn’t a plan on how to use it yet. I don’t agree with that approach.

If you are going to create a tool, then it needs to already have a path to usage. Without laying out how this tool will be used, it… :point_down:

I also don’t believe this tool is needed in v0 unless it has a plan for using it. I think everyone agrees to having this kind of functionality in v1… the v0 argument hasn’t been made. Unless we are going to be unleashing AI APIs before v1, I don’t see why it should be a v0 priority when there are other more pressing v0 priorities.

If the goal really is to build this tool in v0 (which I’ve heard will come with a development cost) and not have a plan for how it will be used… that makes no sense to me… especially when there are concerns about how the usage will look. I’ve made clear arguments about why I don’t believe it will have the intended effects, and those effects are the motive for building the tool in the first place.

I believe we may just see this differently.


Re “Balance”, I agree with @RawthiL that the mechanics of how to accomplish balance can be worked out in another thread, and in particular the v1 Fairness thread alluded to. However, I would think that hashing out the implications of this proposal (coupled with a PUP setting non-default values) on aggregate mint rate, inflation, potential reduction of rewards on some chains while raising rewards on others, etc. does belong squarely with this proposal.

The community has gone out on a huge limb to slash provider rewards to a level such that POKT can claim sub 5% inflation. Taking action that could muddy the waters of this narrative, which everyone has made sacrifices to achieve, may not be the most prudent. This is what I fear would happen if we were to pass this proposal in tandem with any mapping that does not ensure no increase of aggregate mint. On the other hand, a mapping that does ensure no growth in aggregate mint will, in most cases, come at the expense of decreased mint on main chains, which is a hard pill to swallow for node runners not focused on archive or specialty chains.
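
The trade-off described above can be made concrete with a toy calculation: any mint-neutral multiplier map that raises some chains must lower others. Relay counts and weights below are made up for illustration:

```python
# Toy sketch: a mint-neutral RTTM map that pays archival chains 2x the
# main chains must scale everyone down. All numbers are invented.

relays = {"eth": 800, "bsc": 150, "archival": 50}    # relative daily relays
weights = {"eth": 1.0, "bsc": 1.0, "archival": 2.0}  # desired relative pay

base_mint = sum(relays.values())                       # everyone at 1x
raw_mint = sum(relays[c] * weights[c] for c in relays)
scale = base_mint / raw_mint                           # enforce mint neutrality

final = {c: round(w * scale, 3) for c, w in weights.items()}
print(final)  # main chains drop below 1x so archival can sit near 2x
```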

IMO: Regarding use cases involving archival chains etc that may call for/benefit from a ~2x bump in RTTM compared to system average, simply is not worth pursuing in v0, but absolutely should be part of the v1 plan.

That leaves LLMs. This is an exciting area to pursue. If there is a bona fide opportunity to capture real revenue-generating business in this arena in the pre-v1 time period, then I absolutely think this proposal to enable 10x or 100x or whatever rewards are needed to entice node runners to support such chains should go forward. BUT… it should be presented in collaboration with at least one v0 Gateway provider who will sign on to collecting sufficient revenue and sending sufficient POKT per relay to PNF for burning for such chains so as to fully pay for the service (or at a minimum have a mint/burn gap no wider than that of other chains, since we are still in a subsidy period). This solves the “balance” problem, allowing aggregate mint to remain bounded to <5% inflation while not decreasing the rewards on other chains.


I think that this is the real deal. Could you share if you have some development on this subject?
While AI-related RPCs are not much different than blockchain RPCs, there are many important details around them.

Glad you asked @RawthiL ! Take a look here: Llama Demo over Pocket
This is an LLM implementation that runs on Pocket Network (private testnet) end to end. Imagine we could have this in Pocket Network in just a few weeks. These gen-AI queries take up to 15 seconds (even on a high-power GPU machine), hence they cost WAY more. If we want to enable such advanced scenarios, we need to be flexible with how we reward the nodes running them so they can be feasible.

Also, I want to remind everyone that actually bringing up these scenarios takes time; I would love to be ready by the time v1 ships.

That being said, I foresee there possibly arising a dynamic pricing feedback mechanism… too little coverage of a chain (or geozone - see previous comment) feeds into increasing the RTTM on a chain; whereas being overprovisioned feeds into decreasing the RTTM on a chain. This can be automated so that there is very little manual work needed to set values.

This is an interesting idea @msa6867 and could be a way to achieve what Gandalf wants to achieve in a different way. But there would still need to be a way to account for the fundamentally different costs (sometimes 1000x) of running chains. In any case, the success of one chain should not come at the expense of another chain. New chains don’t share a pie, they grow the pie!

The issue is you are trying to introduce it into a system that is rigged to only benefit the larger providers.

@shane I don’t share your adversarial view.

  • It is not rigged; larger node runners are a necessity of the current economic climate. On average, a 60k node makes barely $8 a month. Even if the gateways were free, $8 wouldn’t pay the electricity bill of the computer it runs on (0.4kW x 24hrs x 30 days x $0.12 = $35). You need a large number of them to make it worth your while. At which point, you need at least part-time human attention to keep them healthy and performant. And let’s not forget, performant gateways (needed for the cherry picker) are many times more costly than your Pocket node. At which point, you are not the little guy anymore.
  • This proposal wouldn’t benefit only the large node runners. In fact the opposite: a single person could be running a very sophisticated (and highly rewarding) chain, and be profitable, if we allow such chains. Imagine it were possible to make 10000 pokt per day on a single machine, because what it offers is so valuable. A new type of more agile node runner could show up; instead of running old-school chains, they would disrupt the big guys with innovative chains.
  • This doesn’t further unbalance chains. On average, Eth Archival Trace makes 0.066 POKT per day. That is $0.0495 per month. 5… cents… per… month… before rewards share! Doubling it is not a decision maker for any node runner. You know what is a decision maker these days, when most node runners are barely (if at all) profitable? Cost. Node runners are overprovisioned in some chains because they simply cannot afford even more chains (both in hardware costs and personnel costs) so that they are better balanced. So, I am not proposing this so large node runners can nefariously make a few more cents per month per node.
  • If Gandalf passes, and all the chains are balanced, they will all have the same revenue, so everyone will want to run the cheapest chains. Don’t you want to give any incentive for unlucky ones running some of those more expensive chains for the same balanced overall rewards?
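
The arithmetic in the bullets above checks out; the implied POKT price below is an inference from the quoted figures, not a market quote:

```python
# Checking the arithmetic from the bullets above.

# Electricity: a 0.4 kW machine running all month at $0.12/kWh.
monthly_electricity = 0.4 * 24 * 30 * 0.12
print(f"${monthly_electricity:.2f}")  # $34.56, roughly the $35 quoted

# Eth Archival Trace: 0.066 POKT/day said to be worth $0.0495/month,
# which implies a POKT price of about 2.5 cents (inferred, not quoted).
implied_price = 0.0495 / (0.066 * 30)
print(f"${implied_price:.3f} per POKT")  # ~$0.025
```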

This proposal is about enabling some new scenarios, and giving some new tools to both the supply side and the demand side. It is not about balancing (although it can help), and it can live in harmony with Gandalf if both were to pass. So, can we please take Gandalf to its own post?

Regarding the “bureaucracy mayhem”: I don’t see it. A cost factor will be a part of a chain proposal, and the onus will be on the proposers to justify any deviation from today’s defaults. It is not like we have an exciting chain event every day anyways. What happens for existing chains can also be a one-time vote.


You first mentioned this proposal being for the little guys. They do exist, and there are still some folks that run POKT nodes from their homes, mostly using Node Pilot, though typically well below network average. GANDALF starts making the network conducive to such node setups, and this proposal was presented as a way to help them out as well… hence why I took time to reason out how that would not be the case.

GANDALF is stalled at the moment because it either takes gateways changing their cherry picker parameters to check how many chains a node is staked with, or it takes a network upgrade where the network checks whether the number of chains a node is staked on equals the allowed value before the node is put in a session.

So far PNI and PNF have been absent from the conversation, so there isn’t a clear path forward. Honestly, if you wanted to add MaxChains checking to sessions as part of this proposal, so that MaxChains can be adjusted, then supporting it would make more sense, as it would actually touch on much of POKT’s economic needs IMO. We would start fixing the actual economic engine of POKT.

That 10,000 POKT going to a single machine is coming out of the 220k being minted per day via ARR. So by having a single machine generating 4.5% of POKT’s total daily reward (which is $2,880 a month), the rewards on all the other RelayChainIDs get dramatically reduced.

Those without the hardware for LLMs (and 14 other chains) see their rewards dramatically reduced, while the LLM providers generate a huge portion of POKT’s total rewards. Example :point_down:

This Proposal + POKT LLM

Let’s say this proposal passes, and POKT LLM is launched with a 150x RTTM weight since its calls use 150x the resources (15s / 0.1s).

If this POKT LLM gets a low 2M relays a day, since it has a 150x weight, it actually has the effect of 300M relays being added to the network.

If 300M equivalent relays were added to the network via POKT LLM, then every other RelayChainID’s reward would be reduced by 20%. This is because ARR only allows 220k POKT to be minted per day, so POKT LLM would take 20% of that mint due to its weight.

This means that everyone who does not have the infra to support POKT LLM would immediately have their rewards cut by 20%. Only those with the hardware chops to support POKT LLM will get the $40k being produced each month.

This 20% shift of the rewards to LLM providers would only create $1.7 a day in protocol burn revenue (GatewayFeePerRelay). Is $255 of monthly burn worth shifting $40k of rewards to those with LLM capabilities? I’m not sure… as technically POKT LLM should be generating $7,650 if its GatewayFeePerRelay were properly weighted.
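
The mint-share arithmetic in this example can be sketched as follows. The baseline relay count is an assumption inferred from the 20% figure above, not a measured network number:

```python
# Mint-share arithmetic behind the "rewards cut by 20%" example.
# baseline_relays is inferred from the 20% claim, not measured.

daily_mint_pokt = 220_000          # ARR daily mint
llm_relays = 2_000_000             # hypothetical POKT LLM relays/day
weight = 150                       # hypothetical RTTM weight (15s / 0.1s)
baseline_relays = 1_200_000_000    # implied pre-LLM daily relays

equivalent = llm_relays * weight               # 300M relay-equivalents
share = equivalent / (baseline_relays + equivalent)
llm_mint = share * daily_mint_pokt

print(f"LLM share of daily mint: {share:.0%}")   # 20%
print(f"LLM mint: {llm_mint:,.0f} POKT/day")     # 44,000
```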

To make the whole network benefit from POKT LLM, from my perspective, there would need to be a dynamic GatewayFeePerRelay per RelayChainID as part of this proposal. The more folks that use the more expensive POKT LLM, the more POKT is minted, without compromising POKT’s desire to be low-inflation and eventually deflationary. I already modeled out an ARR that is not a flat 220k per day, but connected to what is being burned, so technically most of the economic work for this is already done in Burn And 🥩 Harnessing (BASH) Deflation Economic Model.


I can’t say this enough, I support this kind of move. Not entirely sure if it works for v0, but I’m more than willing to consider it… so thank-you for bringing this forward.

To actually make POKT v0 conducive to this kind of fundamental change, I see this proposal getting us 33% of the way there in its current form. To make this move towards serving POKT LLM in a manner that does NOT further gut rewards for those without LLM hardware, and that produces fair protocol revenue (burn), I personally believe this proposal needs to address:

  1. MaxChains fix (balance the network without the need for complex DAO levers)
  2. Dynamic GatewayFeePerRelay per chain (so the protocol can charge LLM RelayChainIDs fairly)
  3. Accompanying replacement to ARR to account for chain specific GatewayFeePerRelay per chain (as mentioned before, I’ve already modeled out an ARR alternative that connects inflation to the burn rate, so I would consider this already done).

To launch this in v1 makes TONs of sense because most of this is already being planned or can be incorporated… but I’m open to v0 if it makes sense (hence why I’m putting time into fleshing this out).

Not trying to be adversarial, just being pragmatic with the economic effects of such a move and the wider ramifications :sweat_smile:

Thank-you for all your responses :slightly_smiling_face:

What are the hardware requirements for this? What kind of GPUs would this require? This could be a very cool way to get old miners (like myself) putting GPUs to use… so more info on all that would be required of folks to join POKT LLM would be super helpful with all this decision making.

Big fan of the work overall :slightly_smiling_face:


Thanks for taking the time and sharing your thoughts.

  1. MaxChains fix (balance the network without the need for complex DAO levers)

MaxChains is orthogonal to what we discuss. This proposal is not related to Gandalf. If/when Gandalf passes, we’ll be happy to help design and make the changes. Until then, let’s keep them separate please.

… then every other RelayChainID’s reward would be reduced by 20%
2. Dynamic GatewayFeePerRelay per chain (so the protocol can charge LLM RelayChainIDs fairly)

We want to grow the pie, not find new ways to share it, and in particular, we have no intention of making anyone’s piece smaller. Therefore, the intention is NOT to get the rewards at the expense of any other current chains. I think #2 is a good idea, and we can certainly do it in the scope of this project. My assumption, perhaps wrongly, was to do it off-chain. But I think you are right; it is indeed better to do it on-chain.

  3. Accompanying replacement to ARR to account for chain specific GatewayFeePerRelay per chain (as mentioned before, I’ve already modeled out an ARR alternative that connects inflation to the burn rate, so I would consider this already done).

ARR limits inflation, not the total volume. So, the good news is that it won’t need to change at all. Any adjustment to RTTM will simply need to specify a convincing GatewayFeePerRelay as well.


If this proposal is focused on supporting novel RelayChainIDs, like the POKT LLM (which is what this conversation has shifted towards), then GANDALF wouldn’t as directly apply. If you use this proposal to balance rewards across current chains (which keeps being suggested), then we do need to consider the entire per-chain reward economic system.


GatewayFeePerRelay is currently off-chain, and was universally set by ARR. It may not need to be on-chain, but it would need to be in ARR (with a method/system to adjust) or in an ARR replacement.


I think that if this is implemented and a chain’s RTTM is changed, then the GFPR for that chain should be changed proportionally. The DAO should not subsidize the gateways’ operations by charging 1x for a chain that is paid 5x. I think that this is not an issue, as it can be simply resolved by setting the burn increase to match the mint increase.
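
The proportional rule described here could be as simple as scaling a chain’s GFPR by the same ratio applied to its RTTM. Parameter values below are illustrative, not live ones, and the function name is hypothetical:

```python
# Sketch of the proportional rule: charge gateways the same multiple
# that a chain is paid. Values are illustrative, not live parameters.

DEFAULT_RTTM = 8461   # upokt minted per relay
DEFAULT_GFPR = 8461   # assume burn matches mint 1:1 for this sketch

def gfpr_for_chain(chain_rttm: int) -> int:
    """Scale the gateway fee by the same multiple as the chain's RTTM."""
    return round(DEFAULT_GFPR * chain_rttm / DEFAULT_RTTM)

# A chain paid 5x should be charged 5x, not 1x:
print(gfpr_for_chain(5 * DEFAULT_RTTM))  # 42305
```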

Regarding LLMs (or any new chain for that matter), the process is the same: set up the chain, vote on the new chain price (if needed), set the GFPR accordingly, and that’s it. The only difference from the current whitelisting of chains is that a price should be agreed on (if needed).

And this is why this matters. There is A LOT to discuss around this, but without this upgrade we cannot move forward into thinking about this.

I think that we are over-thinking a simple and needed change; other protocols (like LAVA) already have a per-chain multiplier. It is not a wild idea, and the subject of setting different prices per chain has been discussed multiple times. Now that we have a PR that gives us this feature, we should be focused on reviewing it, making sure it is safe and that it runs on the testnet. All the discussion of probable scenarios that can arise from having this feature is not pertinent to the feature itself. We can just decide that this PR is applied without changing anything (all RTTMs are the same) and then sit down and discuss.


Thanks for putting up an easy to understand example. Two thoughts:

  1. What about adding an additional constant that addresses under/over provision of chains? So: RTTM * RTTMM (ex. ‘2’ for archival) * CP (Chain Provision; for 0028 it would be something like 0.3)

So you could automatically adjust rewards to both fairly compensate more computationally expensive work AND incentivize efficient chain provision?

  2. As a layman - Adding ‘AI’, in any capacity, to Pocket has to be a good thing and a major marketing tool. If Pocket can in theory service LLMs and then do a press release that we now service AI, even if the true situation is more complicated, I think this helps with both retail + institutional investor interest and can be considered as much a marketing expense as a building or tokenomics expense.
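
The adjustment suggested above (RTTM * RTTMM * CP) could be sketched as follows. RTTMM and CP are hypothetical factors from this comment, not existing protocol parameters:

```python
# Hypothetical effective multiplier combining the suggested factors;
# RTTMM and CP are illustrative, not existing protocol parameters.

def effective_rttm(base_rttm: float, rttmm: float, cp: float) -> float:
    """Base RTTM x cost multiplier (RTTMM) x chain-provision factor (CP)."""
    return base_rttm * rttmm * cp

# Chain 0028 example from the comment: archival (2x) but overprovisioned (0.3),
# so its effective multiplier lands below the base value.
print(effective_rttm(8461, 2.0, 0.3))
```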

Governance Transactions Formatting

With the new RTTMmap parameter, it’s not clear to me how transactions would be submitted to update the parameter value.

For example, the way that the SupportedBlockchains parameter is updated is as follows:

pocket gov change_param a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4 mainnet pocketcore/SupportedBlockchains '["0001","0003","0004","0005","0006","0009","000A","000B","000C","000F","0010","0012","0021","0022","0024","0025","0026","0027","0028","0040","0044","0046","0047","0048","0049","0050","0051","0052","0053","0054","0056","0057","0058","0059","0060","0061","0063","0065","0066","0067","0068","0069","0070","0071","0072","0074","0075","0076","0077","0078","0079","0080","0081","03DF"]'

The new RTTMmap parameter should work similarly, in that PNF only has to submit one transaction in order to modify an array of values.

Could you indicate how such a governance transaction would be formatted for the RTTMmap?

Has your local devnet testing included submitting governance transactions to update the parameter?

ARR Considerations

The per-chain RelaysToTokensMultiplier (RTTM) significantly complicates emission control policies, like the currently active ARR.

ARR operates on the assumption that there is a single RTTM that generates a given mint for a given relay count. Since we know that the same multiplier was applied to all relays, we can make weekly adjustments that will average out to the target daily minting. See this spreadsheet to see this in action.

Introducing chain-specific multipliers means that we can no longer assume that every relay will be multiplied by the same value. We can’t predict how many relays each chain will process either, since it’s an emergent product of demand. Since each chain may have different multipliers, and we can’t predict relays-per-chain, this seems to break the existing methodology for emission controls.
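
To see why the single-multiplier assumption matters, compare the two mint calculations below. The chain IDs come from examples in this thread, but the relay counts and multipliers are made up:

```python
# Toy illustration of why per-chain multipliers break ARR's single-knob
# control loop. Relay counts and multipliers are made up.

relays = {"0001": 900_000_000, "0028": 50_000_000, "03DF": 2_000_000}

# Single global RTTM: mint is linear in one knob, so ARR can solve for
# the value that hits its daily emission target.
rttm = 8461  # upokt per relay (illustrative)
mint_single = sum(relays.values()) * rttm

# Per-chain overrides: mint now depends on the relay mix per chain,
# which is emergent demand ARR cannot predict a week in advance.
rttm_map = {"0028": 2 * rttm, "03DF": 150 * rttm}
mint_mapped = sum(n * rttm_map.get(c, rttm) for c, n in relays.items())

print(mint_mapped > mint_single)  # True: same traffic, different mint
```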

I’m basing this analysis on this logic outlined in the proposal:

When a node mints relay rewards, it looks up the pos/RelaysToTokensMultiplierMap, and if the relayed chain is defined there, it adopts a custom multiplier for that chain to calculate the amount of relay rewards; otherwise it adopts the default multiplier of pos/RelaysToTokensMultiplier.

ARR modifies the RTTM, not the RTTMmap, and since the RTTMmap overrides the RTTM if a value is set, ARR is no longer controlling emissions for those chains.

That said, this proposal does not suggest any parameter values for the new RTTMmap, so it will not break ARR until a PUP is approved to modify the multiplier for a specific chain. We could therefore think of this PIP as a decision to grant ourselves the option to activate this feature, rather than an implicit approval of any RTTMmap values, as msa alludes to:

This PIP can proceed without answering the ARR question if we’re considering it under this framing. However, any subsequent PUP to set a RTTMmap value should be coupled with a plan to adapt or replace ARR.

The proposal authors have suggested this could be addressed by increasing the GatewayFeePerRelay for specific chains in the same proportion. However, it should be noted that GatewayFeePerRelay is not yet offsetting today’s mint rate (i.e. burn ≠ mint), so scaling the burn in proportion is not a solution by itself. This would have to be part of a broader strategy to set mint = burn, which is not viable yet at current daily relay counts.

Implementation Details and Moving to a Vote

Before moving to a vote, the Implementation section should be filled out as follows, to follow the standard for previous protocol upgrades (note: I’m assuming we’ll adopt RC-0.11.0 as the version number for this potential upgrade):

  • If this proposal is approved, and after testing has taken place for BETA-0.11.0 on Testnet, RC-0.11.0 will be published for node runner adoption. Anyone can monitor node runner adoption of RC-0.11.0 using the Servicers and Validators version pie charts displayed here or the equivalent “Staked Tokens by Node Version” chart displayed here.
  • Once ≥67% of validator power has updated to this version, an upgrade height will be selected by the Foundation Directors based on the preferences expressed by node runners (e.g. through Discord polls) and they will pass this height to the chain through the pocket gov upgrade transaction.
  • The upgrade height chosen by the Foundation Directors will be communicated in advance to ensure node runners are prepared to react if necessary. A time will be targeted that maximizes the availability of node runners, accounting for all time zones and working hours.
  • Once the network is upgraded, each of the new features will be enabled by the Foundation submitting the pocket gov enable txs and working with the protocol developers to ensure there are no issues.
  • For avoidance of doubt, this PIP does not approve any values for the new RelaysToTokensMultiplierMap parameter, which means the global RelaysToTokensMultiplier will continue to apply to all chains until a subsequent PUP is approved modifying the new RelaysToTokensMultiplierMap parameter.

Thank you for your questions and comments!

Could you indicate how such a governance transaction would be formatted for the RTTMmap?

Here’s an example command to submit a tx.

$ export POKT=<path to pocket>
$ export DAO=<DAO address>
$ export NETWORK=mainnet
$ $POKT gov change_param $DAO $NETWORK pos/RelaysToTokensMultiplierMap '{"0001":"12345","03DF":"42","0028":"3141592"}' 10000

Once it’s accepted by the network, the new parameter looks like this.

$ $POKT query node-params
2023/09/19 11:30:20 Initializing Pocket Datadir
2023/09/19 11:30:20 datadir = /home/john/.pocket
{
    "dao_allocation": "10",
    "downtime_jail_duration": "3600000000000",
    "max_evidence_age": "120000000000",
    "max_jailed_blocks": "37960",
    "max_validators": "5",
    "maximum_chains": "20",
    "min_signed_per_window": "0.600000000000000000",
    "proposer_allocation": "1",
    "relays_to_tokens_multiplier": "8461",
    "relays_to_tokens_multiplier_map": {
        "0001": "12345",
        "0028": "3141592",
        "03DF": "42"
    },
    "servicer_stake_floor_multipler": "15000000000",
    "servicer_stake_floor_multiplier_exponent": "1.000000000000000000",
    "servicer_stake_weight_ceiling": "15000000000",
    "servicer_stake_weight_multipler": "1.000000000000000000",
    "session_block_frequency": "4",
    "signed_blocks_window": "10",
    "slash_fraction_double_sign": "0.000001000000000000",
    "slash_fraction_downtime": "0.000001000000000000",
    "stake_denom": "upokt",
    "stake_minimum": "15000000000",
    "unstaking_time": "300000000000"
}

Has your local devnet testing included submitting governance transactions to update the parameter?

Yes, we conducted E2E testing on our devnet that includes:

  • submit gov transactions
  • keep sending relays throughout a test (before/after gov transactions) and wait until relay rewards are minted
  • verify the amount of rewards

Before moving to a vote, the Implementation section should be filled out as follows, to follow the standard for previous protocol upgrades (note: I’m assuming we’ll adopt RC-0.11.0 as the version number for this potential upgrade):

We’ll update the post.


This was a helpful example, thanks. It seems easy enough to format and submit changes to multiple chains at once.


This proposal is now available for voting at Snapshot


In the future, I’d like to ask that proposals be labeled in factual terms, instead of subjective framing. The DAO voting page shouldn’t be used to frame the proposal itself IMO.


This current proposal feels like déjà vu – as if we’ve seen this movie before: Stake Weighting 2.0.

I might be a bit late to the party, but I believe this was a golden opportunity to make necessary changes for GANDALF. I fully support the idea of having higher RTTM for different chain IDs and regions, something I’ve always advocated for. However, I can only see two viable ways to implement this while preventing abuse. Definitely not in the current protocol’s form.

  1. Independent staking for each chain ID.
  2. One staked Pocket node for one Chain.

I don’t believe the DAO should vote on a feature without first assessing its potential impact on the network. This statement appears counterintuitive. I see the potential here for a provider, like c0d3r or any other, to run archival nodes and stake only trace/archival nodes to maximize rewards. Indies won’t run archival nodes due to resource constraints.


This is a very valid and interesting point.

The biggest challenge against moving to one staked POKT node for each chain pre the Shannon upgrade is the high coordination costs involved. It would be great to get more input from the various node runner groups about how willing they would be to implement such a change.


I will note that the migration challenge and high coordination costs will exist regardless of whether it happens pre-Shannon or at the Shannon upgrade. Waiting for Shannon just puts more coordination cost on that upgrade.

The benefit of doing it pre-Shannon, which was laid out in GANDALF, is that it can be done in a progressive way, dropping MaxChains in steps (say start with 5, then 3, then 1). If you wait till Shannon, then it will be an immediate transition (from 15 to 1)… on top of all the other transitions that Shannon is already bringing.

I also believe that balancing the network now has its merits, which I laid out in GANDALF, but I won’t get into that on this thread. I just wanted to point out that pre-Shannon allows progressive changes, possibly reducing coordination cost, instead of stacking all network changes on Shannon.


I would like to remind everyone that solving any of the problems of original stake weighting or supposed network imbalance problems or any of what Gandalf wants to achieve are NOT goals of this proposal. We did mention briefly that there might be benefits for Gandalf, which I believe is true, but honestly, we regret even mentioning it, because it seems like it derailed the conversation.

The goals of this proposal are:

  • For RTTM: The goal is enabling equal pay for equal work. The main motivator is enabling scenarios that won’t otherwise be available until v1/Shannon ships. The biggest example is allowing LLMs (see here for an example that is currently running end to end on Pocket Testnet) and other generative AI endpoints to be served. This will help us get ready for what the future can offer.
    • Ask yourself if you want to enable Pocket Network for offering anything more complex (e.g. LLMs) than what it can do now. Ask yourself if you want to have a chance to participate in AI ecosystem.
    • This proposal brings new capabilities but doesn’t change anything by itself. Any further parameter changes will require DAO action. So, if you like the direction but are worried about the exact implementation, don’t be.
  • For Built-in Rewards Share: Enabling a wider set of non-custodial node running scenarios.
    • Ask yourself if you want to make the network more secure (by allowing more non-custodial nodes), more transparent (rewards rates being in the chain) and more efficient (no more reward sweep needed).

The two parameter changes have vastly different implications and should be separated, not lumped into one vote, full stop.