Pre-Proposal: Pokt Info Reimbursement


  • Author(s): Thunderhead
  • Recipient(s): Thunderhead
  • Category: Reimbursement
  • Asking Amount: The POKT equivalent to USD $75,000 to cover development costs and initial maintenance.

Summary

Pokt Info (PI) is a comprehensive analytics tool for node runners in the Pocket Network ecosystem. It allows node runners to take advantage of granular, real-time data to improve the overall QoS of Pocket Network. PI gives all node runners, small and large, the ability to optimize and maximize their individual nodes, in turn benefiting the network.

At Thunderhead we believe that providing intricate, accurate data, software tooling and infrastructure to the community is essential for advancing Pocket’s growth and promoting collaboration. By sharing resources and information, community members can build upon each other’s work and create new tools for the greater good of the network.

In short, PI enables the nodes that make up the network to be:

  • More competitive
  • Able to deliver much better QoS to the network as a whole
  • More efficient, as node operators can easily find improvements

At its core, PI encourages the network to optimize itself, as the transparency that granular data brings is a huge boon to node runners who want to figure out how they can be more efficient.

The PI dashboard lets users and node runners find granular data to optimize their own activities, as well as measure how they are doing compared to the network as a whole. For example, users can see where specific errors are coming from within specific cache sets to test whether competitors are seeing the same errors.

These features allow for an enormously powerful representation of data not found anywhere else. By making this resource and the data within it widely available, community members and researchers can work together to solve some of the complex problems POKT will face in the future.


PI improves the quality of service of the POKT network overall while reducing the monopoly on more detailed, granular data such as errors, allowing new entrants to the network to become efficient faster. PI also illuminates the inputs to the opaque cherry picker, which has historically confused the network’s node runners.

Additionally, PNI was paying ~$50k per month for Datadog to visualize a fraction of PI’s current features. This is $600k per year. Pokt Info is an open-source, community-run alternative at a small fraction of the cost.

As far as users go, the site has 45 monthly active users, demonstrating that this is a popular tool among the small node-runner population.


Many in the community have found it useful as well. Dave from Knowable VC said it "was massively helpful for us when standing up node… good for both pre/during node setup and then afterwards long term to try tweak node performance."

Additional snippets of community members discussing in public channels are included below.

Please see our supplementary PNF impact scorecard as well:

Our incurred costs exceed the amount we are asking for (a year of work for one experienced SWE, plus not-inexpensive infrastructure requirements). We are asking for 60% of our actual costs, as we believe in the long-term value of the project.

Intended Outcomes

Streamlined node troubleshooting, increased clarity around network data, and improved overall QoS for Pocket Network.

Since release, the team has provided support, corrected bugs, and added new features to the product. Thunderhead intends to continue to enhance and support the code beyond this proposal.


We delivered an open-source, comprehensive, and hosted dashboard to the node-running community.


Timeline:

  1. Idea, R&D, development & testing (August 2022 - present)
  2. Beta release (November 28th, 2022)
  3. Feature overhaul (March 17th, 2023)
  4. Community feedback, collaboration, & bug fixes (November 28th, 2022 - present)


Deliverables:

  1. Comprehensive dashboard with pages for rewards, latency, relays, errors, and location information, each filterable by domain, chain, and region. (See Appendix for full feature list)
  2. All services open sourced with documentation detailing setup/install, architecture, and flow
    1. Common
    2. Reward/Nodes/Location info
    3. Errors/Latency cache
    4. Main
  3. Maintained the public hosted instance of PoktInfo and answered all relevant questions/feedback.

Community mentions:

See how various community members have used PoktInfo in the Discord channels below.

Al uses Pokt Info to analyze unusual Arbitrum traffic

Ming recommends that a user discussing Celo latency check it on PoktInfo

Ian observes that it is often difficult to understand why the cherry picker behaves the way it does, and uses PoktInfo to help make this more transparent now that Datadog is no longer around

Shane uses PoktInfo to easily observe and diagnose network-wide issues

Appendix: Full list of features

  1. Comprehensive dashboard with 6 views, each filterable by domain, chain, and region:
    1. Main Page
      1. Total relays (past 24 hours), average latency (past 24 hours), success rate (past 24 hours), total relays by region (past 24 hours), average latency by region (past 24 hours), total relays over time per region, average latency over time by region, total relays per chain (past 24 hours), success rate per chain (past 24 hours), total relays over time per chain (past 24 hours), average latency per chain (past 24 hours), average latency over time.
    2. Rewards
      1. Average relays per node and per chain, total relays per node and per chain, average relays per node per chain by provider.
    3. Location
      1. Count per ISP, count per continent, count per country, count per city, count by IP, geolocation map, node count over time.
      2. Additionally filterable by ISP/continent.
      3. Geomesh compatible
    4. Latency
      1. Average latency per chain, total relays per chain, average latency per region, total relays per region, total relays over time per region, average relays per node over time per region, average latency by region over time, average latency by chain/region heatmap, total relays by chain/region heatmap, average relays by chain/region heatmap.
    5. Errors charts
      1. Out-of-sync error %, error message sum, out-of-sync error average per node, error message average per node, out-of-sync error sum, error message % over time, out-of-sync % by chain (bar chart).
    6. Errors heatmaps
      1. Error message % by domain, error message average per node by domain, error message % by chain, error message average per node by chain.

I’m glad that this tool was around in March '23 when we started our own Pocket node. Some community members in Discord turned us onto PoktInfo and I basically used it every day to solidify our business case, and find out how and where we should plan to run our nodes.

The rewards and relays tabs allowed us to find out roughly how much we could expect to earn per relay, fact-check those assumptions, AND pass that info on to other stakeholders to get approval to go ahead with node development. Way better approach than just trial and error.

And now that we know how to use PoktInfo it’s easy to tweak our nodes and try to stay among the most performant providers out there.

Addison and the other Thunderhead members were also quick and helpful in their Discord and the POKT Discord. So when he asked me to share my experience, I was glad to do it. This proposal sounds like a reasonable ask for a solid tool. Good luck :v:


Hey guys, as always thanks for the quality of your work. Thanks also for completing the impact scorecard which gives everyone an opportunity to discuss the impact behind the proposal without it feeling personal.

When it comes to impact, PNF would like to call attention to some areas where our view diverges from yours.

Utilization: We believe the score of 7 is too high when considering the actual use of PoktInfo across the community. 7 would be a very high score, more akin to universal use. Can you provide more info on the full history of Monthly and Weekly Active Users for PoktInfo, not just the most recent month? As we understand from the info listed in your proposal, it’s conceivable that the 3 members of the TH team using the product regularly equate to 25% of the 12 Weekly Active Users shown. And on a quick review of web/SEO traffic, it seems quite small compared to something like PoktScan, which had over 47,000 views last month.
Rather than speculating too much ourselves, we think it would be good to hear from other node runners regarding their own utilisation of the product and the value it creates for them (and noting that we are grateful to see some testimonials in the proposal already, but at the requested value we’d hope to see strong and wide support).

Ecosystem significance: The score of 7 is also too high. We do not believe this is a tool that many in the ecosystem have been acutely aware of and it is not clear what impact there would be if it was gone tomorrow. It may be a “nice to have” tool for node runners to optimise their setups but to say it is a “game changer” feels like a stretch.

Novelty/Innovation factor: This score of 7 is much, much higher than we would score. There are references to Datadog, and other tools like PoktScan already provide some similar functionality; we also understand others in the community are currently considering creating similar tools. A high novelty and innovation factor would be more akin to creating a new product rather than replicating and improving on the functionality of another. It would set a bad precedent to pay $75k for this tool when it is conceivable that someone else could replace or compete with it for a similar value or less. It’s great you saw an opportunity to create value for the network, but we don’t believe it follows that the DAO should necessarily pay for this tool, and certainly not the amount requested.

In a more general sense, to compare this request to others: PoktScan received an initial $70K grant to create their tool and has subsequently expanded and operated that product without any further funding from the DAO. Given the comparison above in terms of traffic and usage, we would argue that any grant to Thunderhead for PoktInfo should either be much smaller or come much later, when it can be seen to be an essential tool in the ecosystem.

And to provide a further reference point for development work and impact, wPOKT will cost approximately ~$50k to develop and has far greater consequence for the ecosystem given the access it provides to liquidity and tooling across the entire EVM ecosystem.

Taking into account all of the above, PNF would adjust the impact score to 19/50. This is around the midpoint of the stated $10-50K range shown in the scorecard, so a reimbursement in the ballpark of $25K would make more sense to us. Given the focus on delivering v1 and aligning most of our resources behind this, we would not support this proposal at the requested value, as the product is more of a “nice to have”.


I am glad TH came up with such a tool for deeper analysis of chain usage and relays. It has helped SendNodes on several occasions and is a great addition to PoktScan in the Pocket Network ecosystem.

I also believe it’s important that such deliveries get financially supported by Pocket Network, especially while the token price is so low and retaining talent is key to survival.


I think you’ve made some good points, b3n, and as someone fairly new to the POKT community I was glad to see the comparison to other tools like PoktScan and how much funding they have received for their work. The only thing I would say, as it relates to the proposal from TH, is that if you want good tools to keep being made for your ecosystem, sometimes you are paying developers for the attempt, and not just the success cases. That seems to me to lead to an environment where more people are trying to make better products for POKT.

And because you have described things so well, I would tend to agree with your adjusted rankings, except in the case of the novelty/innovation score. Novelty isn’t building a moat around a tech once it’s made; it’s bringing something new to the community that didn’t exist before. I was scrounging through PoktScan trying to get the right data for the calcs I needed, and it wasn’t until I was able to compare PoktScan data to other info on PoktInfo for relays, locations, etc. that I could confidently say my numbers made sense.


Thanks Ben for your response and feedback!

Here is the user chart since the original beta release. We can see there were periods when utilization was actually much higher.

This is not an apples to apples comparison. For example, exchanges and wallets link to poktscan for txhash information. Thus anytime someone sends POKT, they are likely using poktscan to track that information. Poktscan has a much wider audience that encompasses retail, investors, and node runners alike.

PoktInfo’s target demographic is the node runner, of which there are 20-30 with more than 500k staked (unclear how many are actually active). From this usage data we can see that we are capturing a significant amount of the node runner community.

PoktInfo is also not a daily tool. Users will use it when setting up their infrastructure or to quickly debug a problem when it arises. PoktInfo helps node operators promptly understand change when it occurs. Such change does not occur on a daily basis, maybe weekly or bi-weekly, but when it does it is quite strenuous to deal with.

The alternative for this type of information is PNI’s Grafana, which only shows the past 5 minutes, and in a highly difficult-to-interpret manner. There is no other tool that visualizes our level of data. For example, a few weeks ago there was a portal bug causing Avax traffic to be routed to ~4 operators; PoktInfo was the tool used to observe and debug this issue, as another solution doesn’t exist. I believe there may be a difference of perspective here, as nobody at PNF has run nodes at scale before.

The predecessor to this tool was Datadog, which cost PNI $600k a year. This tool is already a massive savings, and has significantly more functionality (errors, rewards, location, etc). As with every innovation, it becomes significantly easier to replicate once it is already completed (Geomesh, LeanPocket…). When we started this tool and released the first version in November, nobody in the ecosystem had access to this data in any capacity.

I do recognize this, but I am not sure if it is possible to compare with something that is not ultimately sustainable. Additionally, Poktscan is closed source and they have toyed with introducing a subscription model in the past.

We appreciate your input and candid opinions. I see your point of view, and we will adjust the scorecard when going to proposal and do our best to find a happy medium.

Thank you Don and Dave for the feedback!


We have turned this into PEP-60!

Please see here: PEP-60: Pokt Info Partial Reimbursement Request
