Increase MaximumChains to expand node servicing capabilities

Attributes

  • Author(s): @beezy
  • Parameter: MaximumChains
  • Current Value: 15
  • New Value: 30

Summary

Currently, the maximum number of chains a node is allowed to stake for servicing is 15. This forces operators to pick and choose the chains they will support, usually based on which ones have been getting the most relays.

In order to help facilitate the team's goal of 100 chains by the end of 2023, each servicer will need to be able to support new chains as they are onboarded. I propose increasing the MaximumChains parameter to 30 (an arbitrary starting point; I want to open discussion on the optimal setting to ensure steady growth and service availability for new chain partnerships). Increasing this will help keep idle nodes busy, move toward the goal of onboarding more chains, and give node runners increased flexibility.

Abstract

The MaximumChains parameter is straightforward: it is defined as the number of chains a node can be configured for in one stake.
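To make the parameter's behavior concrete, here is a minimal sketch (not Pocket Core's actual implementation) of how a per-stake chain limit works: a stake request listing more chains than MaximumChains is rejected, mirroring the "Too Many Chains" error described later in the thread.

```python
MAXIMUM_CHAINS = 15  # current on-chain value; this proposal raises it to 30

def validate_stake(chain_ids):
    """Reject a stake request that lists more distinct chains than MAXIMUM_CHAINS.

    Illustrative only; the real check lives in Pocket Core's staking logic.
    """
    if len(set(chain_ids)) > MAXIMUM_CHAINS:
        raise ValueError("too many chains")
    return True

validate_stake(["0003", "0004", "0005"])  # 3 chains: accepted
```

With the limit at 15, adding a 16th chain (as in the BSC example below) forces the operator to drop one first; raising MAXIMUM_CHAINS to 30 removes that trade-off up to the new cap.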

Motivation

While adding backends supporting Moonriver and Moonbeam earlier this week, I also added BSC backends for our nodes. Today, when trying to stake BSC, I received the error "Too Many Chains" and had to remove Goerli from our stake list in order to add BSC.

With rewards and relays decreasing due to market conditions, WAGMI, and other factors, I chose to stake BSC in place of Goerli because of BSC's ever-increasing relay count in comparison to Goerli's. I also had no choice: supporting a new chain meant dropping another.

Rationale

I chose double the default value as a starting point and to open discussion on what this should be configured to. I think 30 will give people more options for investigating, deploying, and staking new chains on the network for a few months, and it can be revisited as Triforce and general demand continue to grow. This proposal will benefit more engaged runners who are looking to diversify their servicing portfolio, and it aligns incentives more toward providing diverse relay infrastructure rather than stacking Pocket nodes.

There are currently 5,941 staked nodes servicing fewer than 10 chains, 8,949 servicing fewer than 13 chains, and 22,448 staked at the current 15-chain maximum.

Dissenting Opinions

1.) We are trying to reduce overall infrastructure costs for node runners, and this is the opposite of our near-term goals.

  • Yes, this would increase costs for node runners as they scale into new chains and technologies. It would also give them the ability to directly weigh the cost versus benefit of supporting different chains, replacing chains they no longer wish to support on existing hardware. While in the short term it increases node runners' costs (voluntarily, by adding more chains), in the long term it aligns with the team's goals.

2.) This proposal exaggerates the economies of scale that larger node providers enjoy.

  • While this could increase costs for larger node providers, it would be their own decision whether it was economically viable to expand their current relay offerings. This could also give smaller runners a better chance to pick up more relays, stay competitive, and diversify. Providers and runners do not have to stake more than 15 chains; this would simply give them the option to.

Analyst(s)

N/A

Copyright

Copyright and related rights waived via CC0.


I support this proposal


I’d love to hear @luyzdeleon 's opinion on the additional load per servicer, but I support this proposal in theory.


I support this proposal


I support this proposal. Based on the research I've done, it seems like 15 is a soft limit rather than a hard one. I would also add that if the goal is to have 100 whitelisted chains by year-end and to reduce node counts by implementing either PUP-17 or PUP-15, one can argue this proposal is essential.


I support this proposal 100%.


I know this idea is very popular among node runners with 100+ nodes and, if put to a vote, will likely pass for exactly that reason. Nonetheless, I'll take this opportunity to document my opposition and rationale.

The example which the OP cites is exactly how the system is supposed to work.

Large providers (like myself) can pay the cost of a single backend server and throw thousands of front-end nodes onto these small chains. The cost per node is minimal. Smaller runners who still have plenty of available space in their allocation would be able to make a profit off of those chains if the big boys didn't crush the returns.


I support this proposal

Thank you for taking the time to put together this proposal.

I would like to start with the fact that my comments are neither in favor of nor against this proposal; they are merely a recollection of information from my understanding of the current implementation of the network and of how, if this proposal were to pass, it would impact node runners' operations. I would also like to clarify that I analyzed this proposal from the standpoint that only the nodes' maximum chains would be increased, not the applications'.

The first observation is that this proposal would add 28.55133 MB per block to the node's application state, which sums to roughly 2.4 GB of application state every day if every node in the network (37,322 according to POKTscan) decides to stake for the maximum of 30 chains. This would accelerate state growth significantly, which increases costs for node runners. One caveat to this observation is our upcoming Persistence Replacement release, which would alleviate some of this burden, but we would have to measure again to see the impact.
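The arithmetic behind those figures can be reproduced as a back-of-the-envelope estimate. The per-block number is taken from the comment above; the blocks-per-day count (~84, i.e. roughly 17-minute blocks) is an assumption chosen to match the quoted daily total.

```python
# State-growth estimate. The 28.55133 MB/block figure comes from the comment
# above (all 37,322 nodes staked for 30 chains); blocks_per_day is an assumed
# cadence (~17-minute blocks) picked to reproduce the quoted ~2.4 GB/day.

per_block_mb = 28.55133   # application state added per block
blocks_per_day = 84       # assumed average blocks per day

daily_gb = per_block_mb * blocks_per_day / 1000
print(f"~{daily_gb:.1f} GB of extra application state per day")  # prints ~2.4 GB
```

Even under a faster or slower block cadence, the point stands: per-block state growth compounds daily, so doubling the staked-chain count has a direct, recurring storage cost.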

In addition, Pocket Core buckets nodes per chain in order to speed up access during session generation. If we were to double the number of chains, we would double these buckets in memory, which would increase the software's memory consumption as well as processing times for session generation, affecting quality of service for apps and block processing for both validator and service nodes.
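A toy model of that per-chain bucketing (a sketch, not Pocket Core's actual data structures) shows why the footprint scales with chains staked per node: every node is listed under each chain it supports, so total index entries grow linearly with the per-node chain count.

```python
from collections import defaultdict

def build_buckets(nodes):
    """Index nodes by chain, as a sketch of the session-generation lookup.

    nodes maps node address -> list of staked chain IDs. Each node appears
    once per chain it stakes, so entries = nodes x chains-per-node.
    """
    buckets = defaultdict(list)
    for addr, chains in nodes.items():
        for chain in chains:
            buckets[chain].append(addr)
    return buckets

def total_entries(buckets):
    return sum(len(members) for members in buckets.values())

# 100 hypothetical nodes staking 15 chains each vs. 30 chains each:
nodes_15 = {f"node{i}": [f"{c:04d}" for c in range(15)] for i in range(100)}
nodes_30 = {f"node{i}": [f"{c:04d}" for c in range(30)] for i in range(100)}
# Doubling chains per node doubles the in-memory index: 1,500 -> 3,000 entries.
```

This is why doubling MaximumChains roughly doubles the bucket memory even if the node count stays constant.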

The team is trying to create mechanisms to educate the community on the broader impact of parameter changes and to build tools for gathering such information beforehand. I would like to open up the possibility of starting a dialogue in our #core-research channel on Discord to further discuss the impact of this change before making a final decision on this proposal.
