PIP-21: Optimize Rewards per Relay

About the Author

Hi, I’m Mangu. I’ve spent 20 years making software, especially in financial and betting networks: embedded systems, OLTP, entrepreneurship, and general jack-of-all-trades work.


This proposal aims to fix the reward generated per relay so that it is fair in cases where certain RPC invocations are far more expensive than others.


Taking Ethereum as an example, benchmarks have shown that a getBalance RPC invocation is twice as expensive as a getTransactionCount invocation, yet we unfairly reward nodes 1 relay in each case.


Plenty of investors have trusted us by buying our tokens, helping us kickstart the network. We have reached an impressive amount of daily traffic, and the network is growing toward maturity very quickly. Soon we will bloom into the last stage of deployment, when burning POKT from applications will offset the POKT created by serving relays, and a self-regulated equilibrium will give us a huge competitive advantage over centralized competitors.

In that sense, it is important to fine-tune our incentives to distribute rewards evenly and ensure all chains and relays are served.

I have come to realize that running some chains is far more expensive than others (e.g. Ethereum Archival), and that RPC invocations themselves also differ in cost (especially I/O and CPU).

We should constantly measure the cost in terms of storage, CPU, and I/O of hosting and keeping each chain updated, as well as the p95 cost of running every kind of RPC. With that data, we should apply a designated multiplier to award more POKT for expensive-to-serve relays.
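As a sketch of the multiplier idea (method names follow Ethereum JSON-RPC conventions, but the cost figures are illustrative placeholders, not real benchmark results):

```go
package main

import "fmt"

// p95Cost holds the measured p95 cost per call (e.g. CPU-ms); the values
// below are stand-ins for what the benchmark would actually produce.
var p95Cost = map[string]float64{
	"eth_getTransactionCount": 1.0, // baseline, the cheap call
	"eth_getBalance":          2.0, // ~2x as expensive, per the example above
}

// relayMultiplier converts a method's measured cost into a reward multiplier
// relative to the baseline, so a relay that costs twice as much to serve
// earns twice the reward.
func relayMultiplier(method string) float64 {
	base := p95Cost["eth_getTransactionCount"]
	c, ok := p95Cost[method]
	if !ok {
		return 1.0 // unknown methods earn the base reward
	}
	return c / base
}

func main() {
	fmt.Println(relayMultiplier("eth_getBalance")) // 2
}
```

The lookup table itself would be refreshed from the benchmark pipeline rather than hard-coded, but the reward math stays this simple.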

Operationalizing the proposed solution

First of all, I think we should streamline and share different Docker images to run the Pocket node with different chain variations. Ideally we would reach a level of friendliness like installing postgres.app on macOS (a cool wrapper to run PostgreSQL — leave this post and try it if you can; it's easy to install, run, and uninstall). Taking advantage of the trend of collaborative staking to run nodes, the result would be lots of common users downloading and installing our packaged application to synchronize whichever networks are currently most profitable for the resources their computer or server has.

We need to set up a standardized benchmark on our bare-metal hardware to determine the cost of hosting and keeping updated every chain we support (CPU usage, disk usage, GPU usage, RAM usage). We also need something like TPC-H to determine RPC transactions per second for every kind of instruction, both alone and in a mixed workload.

Those benchmarks would run automatically every day, slowly updating an on-chain structure containing rows like “[2 relays] per every [getBalance] RPC invocation on chain [Ethereum Archival]”.
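One way to picture that on-chain structure (all names here are hypothetical; the real parameter would live in the Cosmos state extension proposed below):

```go
package main

import "fmt"

// CostRow is one entry of the proposed on-chain structure:
// "RelaysPerInvoke relays per one invocation of Method on Chain".
type CostRow struct {
	Chain           string
	Method          string
	RelaysPerInvoke uint64
}

// RelayCredit looks up how many relays a single invocation is worth,
// defaulting to 1 for any (chain, method) pair not listed.
func RelayCredit(rows []CostRow, chain, method string) uint64 {
	for _, r := range rows {
		if r.Chain == chain && r.Method == method {
			return r.RelaysPerInvoke
		}
	}
	return 1
}

func main() {
	rows := []CostRow{
		{Chain: "Ethereum Archival", Method: "getBalance", RelaysPerInvoke: 2},
	}
	fmt.Println(RelayCredit(rows, "Ethereum Archival", "getBalance")) // 2
	fmt.Println(RelayCredit(rows, "Ethereum", "getTransactionCount")) // 1
}
```

Defaulting to 1 keeps the change backward-compatible: any relay not covered by a benchmark row behaves exactly as today.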

I propose we implement it as follows:

  1. Create a repeatable, reproducible Docker image to run and sync every chain, along with the benchmarking tool and plan for each of them, then produce the current benchmark report

  2. Create the infrastructure-as-code necessary to run that in the Foundation

  3. Extend the cosmos state to include our new structure

  4. Update those parameters via a PUP every 6 months

I think I can make it work with 1 pokt-core dev, 1 DevOps engineer, and 2 Go devs, in 2 months, for probably 75,000 POKT to pay the team.

Open questions

  1. Since we are decentralized, might gathering the costs through some instrumentation on the validators and nodes be a way to do it?

  2. Would running the benchmark monthly and updating those ratios be good enough to start? What do you think?

  3. Maintaining an economic model that uses the real distribution of RPC instruction types per chain and the cost of running them, similar to whattomine.com, would bring in lots of new nodes and help us prepare our economics for maturity, right?


Expected outcomes

  1. Take advantage of our network's first-to-market position and fine-tune our economics to increase our moat
  2. Make it profitable to serve most of the chains
  3. Streamline the deployment of nodes, greatly reducing the barrier to serving the network
  4. Get to a point where anybody can download pokt.app as easily as people once downloaded uTorrent and served torrents, so our infrastructure cost gets as low as it possibly can

Thanks for sparking the conversation on this one. I love the idea, and it makes sense to me economically to reward based on the amount of work your relay chain has to do. It reminds me of Alchemy’s pricing plans.

Though, this definitely seems like a lot of work, and from what I know so far, the Core team is focused on V1.


Thanks, man — trying to make the project self-sustainable in this recession.


Can you DM me on Telegram, please?