PEP-61: Pokt Info Partial Reimbursement Request


  • Author(s): Thunderhead
  • Recipient(s): Thunderhead
  • Category: Reimbursement
  • Related: Pre-Proposal: Pokt Info Reimbursement
  • Asking Amount: The POKT equivalent of USD $50,000 (based on the 24-hour VWAP at time of transfer) to cover a portion of development and maintenance costs.

Summary

Pokt.Info (P.I) is a comprehensive analytics tool for node runners in the Pocket Network ecosystem.

It allows node-runners to take advantage of granular, real-time data to improve the overall QoS of Pocket Network. P.I gives all node runners, small and large, the ability to optimize the performance and rewards of their individual nodes, in turn benefiting the network.

At Thunderhead we believe that providing intricate, accurate data, software tooling and infrastructure to the community is essential to advance Pocket’s growth. We believe promoting collaboration, by sharing resources and information, will allow the community to build upon each other’s work and create new tools for the greater good.

In short, P.I enables nodes that make up the network to be:

  • More competitive.
  • Able to deliver much better QoS to the network as a whole.
  • More efficient, as node operators can easily find improvements.

At its core, P.I encourages the network to optimize itself: the transparency that granular data brings is a huge boon to node runners who want to figure out how to be more efficient, and to measure how they are doing compared to the network as a whole.

For example, users can see where specific errors are coming from within specific cache sets to test if competitors are seeing the same errors.

This representation of data is not found anywhere else. By making this resource and the data within it widely available, community members and researchers can work together to solve some of the complex problems POKT will face in the future.

In addition, P.I improves the overall quality of service of the POKT network by reducing the monopoly on more detailed, granular data, such as errors. This allows new entrants to the network to be more efficient, and illuminates for them the inputs to the opaque cherry picker, which has historically confused the network's node runners.

As far as users go, the site has 45 monthly active users, with spikes to ~100, demonstrating that this is a popular tool among the small node-runner population and is set to remain so.


PNI was previously paying $600,000 per year to Datadog to visualize a fraction of P.I's current features. Pokt.Info is an open-source, community-run alternative at a tiny fraction of the cost.

As mentioned, the community confusion on the cherry picker was also a huge incentive, and P.I makes this highly transparent with a thorough data set.

Before P.I, nobody had insight into the error or latency metrics behind how many relays or rewards they were receiving, other than on POKTscan's geo page. However, that page is not as comprehensive (e.g., it does not show changes over time), and it did not exist until several months after P.I's initial release.

The extreme lack of data when operating nodes on the network, combined with the core development team's limited resources for tooling that was not protocol-specific, motivated TH to build P.I.

Examples of datasets available with P.I

Unlike other analytics tools, the cache-set logic P.I employs allows users to select only the specific nodes they want to analyze across a data set.

With P.I, you could, for example, select only the data for five specific competing node providers, and only in Singapore and Germany, letting users set highly specific parameters on how and where the network is functioning.

One can also change how the data is presented using parameters such as block intervals. A block interval of 4 (around one hour at 15-minute block times) gives your graph many data points, which is good for drilling into use-case specifics over smaller time periods, but with spikes this data may be too noisy for longer-term trends.

If one were to select an interval of 96 (around 24 hours), they would have a much smoother representation of the data that makes such trends easier to see. P.I exposes these intervals alongside other, more granular parameters:

  • Overall cache set: e.g. Pokt Network, C0d3r, Blockspaces, Thunderhead.
  • Chain specification: as many or as few chains as the user wants to display results for.
  • Regions: e.g. just central Europe, just east Asia, or both.
  • Block height: e.g. display results only ‘from block 87,234’.

Used in combination with P.I’s built-in graphs, which showcase:

  • Relay Count
  • Latency
  • Success Rate
  • Rewards
  • Locations
  • Errors

These allow for a very powerful representation of the data. (See appendix for the full feature list.)
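To make the block-interval trade-off above concrete, here is a minimal, hypothetical sketch. It is not P.I's actual code: the function name, data shape, and sample values are all invented for illustration, and only the arithmetic (an interval of 4 blocks ≈ 1 hour and 96 blocks ≈ 24 hours at 15-minute block times) comes from this post.

```python
BLOCK_MINUTES = 15  # Pocket's ~15-minute block time, per the discussion above

def bucket_by_interval(samples, interval):
    """Group (height, latency_ms) samples into buckets of `interval` blocks
    and average each bucket. interval=4 is roughly hourly, 96 roughly daily."""
    buckets = {}
    for height, latency in samples:
        buckets.setdefault(height // interval, []).append(latency)
    # Key each bucket by its starting block height
    return {k * interval: sum(v) / len(v) for k, v in sorted(buckets.items())}

# Invented per-block latency samples starting at block 87,234 (two days' worth)
samples = [(h, 100 + (h % 7) * 10) for h in range(87234, 87234 + 192)]

hourly = bucket_by_interval(samples, 4)   # many points: detailed but noisy
daily = bucket_by_interval(samples, 96)   # few points: smoother trend line

print(f"interval 4 ~ {4 * BLOCK_MINUTES} min, interval 96 ~ {96 * BLOCK_MINUTES // 60} h")
```

The smaller interval preserves short-lived spikes; the larger one averages them away, which is exactly why the dashboard lets the user pick the interval per question being asked.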

Pokt.Info Background

The motivation for Pokt.Info, as mentioned, came from the inability of both existing and new Pocket Network node operators to get granular data on node activity across the network as a whole.

The team is very pleased with the features now available within the dashboard. In short, P.I currently gives Pocket Network participants the most detailed information available on node-running activity globally.

TH has worked relentlessly over the past year to bring the dashboard to life. Without Thunderhead's attention, many node runners would have far less information, and would therefore be far less efficient at running good-quality-of-service Pocket nodes.

Solution & Intended Outcomes

Pokt.Info will help streamline node troubleshooting, increase clarity around network data, and improve the overall QoS of the Pocket Network.

Since release, the team has provided support, corrected bugs, and added new features to the product. Thunderhead intends to continue to enhance and support the code beyond this proposal by adding new enhancements, features, and fixes as they arise.

We also considered it vital to open source this ecosystem project to avoid privatization for the benefit of a few, which would harm the network overall. Such contributions, we believe, help pave the way to Pocket becoming a more open space for future contributors.

Ultimately, Pokt.Info has not only made the network more accessible for those taking part in it; it has increased the value offering of the protocol itself for node runners, investors, and PNI, as the QoS of Pocket Network continues to improve.

Publicly available granular data is essential for Pocket to become a top RPC provider.



Thunderhead has been a contributor to the Pocket Ecosystem since early 2021. Since then, we’ve built out staking infrastructure and open-source public goods. We created Thunderstake to provide a white-glove offering with the necessities of the modern digital asset fund, and ThunderPOKT to increase the inclusivity of the ecosystem by allowing any POKT holder to stake and earn protocol rewards.

As far as open-source contributions are concerned, in tandem with PoktFund we designed and implemented LeanPocket, a solution that reduced the infrastructure costs of the network by nearly 99%. We also built pokt.watch, a block explorer; RPCmeter, an RPC latency analytics tool; and now Pokt.Info, an all-in-one dashboard for monitoring the quality of service of one’s infrastructure.

We gave a talk at InfraCon as well.


We delivered an open-source, comprehensive, and hosted dashboard to the node-running community.


  1. Idea, R&D, development & testing: (August 2022 - Present)
  2. Beta release (November 28th, 2022)
  3. Feature overhaul (March 17th, 2023)
  4. Community feedback, collaboration, & bug fixes: (November 28th, 2022 - to date).


  1. Comprehensive dashboard with a page for reward, latency, relay, error, and location information, each filterable by domain, chain, and region. (See Appendix for full feature list)
  2. All services open sourced with documentation detailing setup/install, architecture, and flow
  3. Common
  4. Reward/Nodes/Location info
  5. Errors/Latency cache
  6. Main
  7. Maintained a public hosted instance of Pokt.Info and answered all relevant questions/feedback


The funds sought by this proposal will reimburse Thunderhead for this work and its impact in providing community-driven products that enhance the Pocket Ecosystem.

Our incurred costs exceed the amount we are asking for: a year of work for one experienced SWE plus expensive infrastructure requirements, adding up to roughly $120k. We are asking for less than 50% of our actual costs because we believe in the long-term value of the project.

We would have liked to at least cover our costs, but in light of the current market, TH will accept the POKT equivalent of $50k USD at the time of reimbursement.

Community Feedback

See how various community members have used Pokt.Info in the Discord channels below.

Al uses Pokt Info to analyze unusual Arbitrum Traffic


Ming recommends that a user discussing Celo latency check it on Pokt.Info

Ian observes that it is often difficult to understand why the cherrypicker behaves how it does, and that he uses Pokt.Info to help make this more transparent now that DataDog is no longer around.

Shane uses Pokt.Info to easily observe and diagnose network-wide issues

Dave from Knowable VC used P.I every day when standing up new nodes

Don from sendnodes has been helped by P.I on several occasions in node running endeavours.

We would also like to thank the general community for the unwavering support.

Please see our supplementary PNF impact scorecard as well.

We have taken into account some feedback from the pre-proposal and have revised some scores. However, we have not reduced the score to the level that b3n from PNF requested, as we believe this is a node-runner tool and should not be compared to contributions aimed at wider retail.

We believe that consideration of this project should be based on P.I being a tool for those running the protocol. As a secondary effect, this benefits everyone, from retail to investors and even customers of network gateways. An avid user base of 50 node runners can have huge beneficial knock-on effects at scale, down the complex chain of network users, gateway providers, and investors.

Dissenting Opinions

At pre-proposal, we were surprised to find some pushback to our initial proposal from PNF, and made a point of clearing up some of the concerns. We are happy to answer any feedback.

Appendix: Full list of features

  1. Comprehensive dashboard with 6 views, each filterable by every domain, chain, region,
  2. Main Page
    1. Total relays past 24 hours, average latency past 24 hours, success rate last 24 hours, sum of total relays by region past 24 hours, average latency by region past 24 hours, sum of total relays over time per region, average latency over time by region, sum of total relays per chain last 24 hours, success rate per chain last 24 hours, sum of total relays over time per chain, average latency per chain last 24 hours, average latency over time
  3. Rewards
    1. Average relays per node and per chain, Total relays per node and per chain, Average relays per node per chain by provider
  4. Location
    1. Count per ISP, Count per continent, Count per country, Count per city, Count by IP, Geolocation map, Node count over time.
    2. Additionally filterable by ISP/continent.
    3. Geomesh compatible
  5. Latency
    1. Average latency per chain, Sum of total relays per chain, Average latency per region, Sum of total relays per region, Sum of total relays over time per region, Average relays per node over time per region, Average latency by region over time, Average latency by chain/region heatmap, Total relays by chain/region heatmap, Average relays by chain/region heatmap
  6. Errors charts
    1. Out-of-sync error %, error msg sum, out-of-sync error avg per node, error msg average per node, out-of-sync error sum, error msg % over time, out-of-sync % by chain bar chart.
  7. Errors heatmaps
    1. Error msg % by domain, Error msg average per node by domain, Error msg % by chain, Error msg average per node by chain.

This post was uncategorized and duplicated the PEP code of PEP60: Enabling Responsible Allocation of Budget (ERA Budget). I’ve corrected both.

Good luck!


What does partial mean here? Is someone else subsidizing the rest, i.e pnf? Or your team? Or a second ask in the future?


Partial means the request is not for all of the costs incurred - there will not be a future ask. Between infra and labor, Thunderhead’s cost has been ~$120k. We would have liked to at least cover the costs but we understand the state of the market at the moment.

Thanks Jack. Did not realize 🙂


Since discussion has halted both here and on the proposal for several days, I have pushed the proposal to vote!


Thanks, Addison

I have nothing but praise for all of the work that Thunderhead has done for the community, but speaking from a personal perspective (as we try never to have one “official” view on anything at PNF), I think that this ask is too much given the daily/monthly usage, as well as the fact that, as I understand it, this site won’t work post-v1.

Using the impact scorecard template (screenshot below)

I don’t see how this tool could rate higher than 17, mainly because of how infrequent the use is and also because I haven’t seen any evidence of this tool providing a deep and long-term measurable benefit to the ecosystem beyond being a nice-to-have.

Consequently, I would happily support an ask of $15-25k, but $50k seems way too high for this kind of tool. Other similar ecosystems typically fund grants like this for a maximum of $25k, and usually much closer to $5-10k.


Hi Dermott and DAO Voters,

Thank you for your kind words and continued support, both privately and publicly. We appreciate it, and it does not go unnoticed.

We currently spend $1,200 per month just on infrastructure to support this dashboard. At $25k, the reimbursement would simply cover infra for the next 12 months plus some additional T&E. Unfortunately, those economics do not work for us.

I also would like to remind everyone: PNI was spending $50,000 a month on Datadog for a fraction of this information. Feel free to ping PNI for confirmation. Our goal in building this was to extend PNI’s runway. More than a year ago, we made this effort not to spend a year of life building, testing, and working with data that was inconsistently provided to us, but to save PNI from spending $50,000 a month on a tool that we felt could be built for a fraction of the cost. While we went over budget, the end result is still very much the same: Datadog cost $50,000 a month for the exact same audience that Pokt.Info serves.

This opportunity to save PNI a million dollars over two years was the goal and frankly, we achieved it. Of course, this is the world we live in and going for retroactive funding is always difficult and never a sure thing. We did our best, our heart was in the right place and if you all do not see it that way, this is perfectly fine. The Thunderhead Team has always opted for doing the right thing and trying our best and this is no different.

I understand we will not change your mind, or the minds of others, but just wanted to put this out there.

Thanks again for the continued support and advice.

Up and onward.


Hey folks,

Confirming some of @Sevi’s points here. We did spend a ton of money on DataDog (upwards of $100k+ near the end). We killed that off when we had our large reset back at the beginning of Q1. Pokt.Info has more info than we have ever offered, and just yesterday I asked them to explore the idea of surfacing common method calls and blockchain client diversity. This, in my mind, will give node runners information on how our network is being used. We plan on tweaking the new Portal v2 Quality-of-Service modules to be better at discovering this information and filtering nodes on it, for better QoS for our clients. We will also, eventually, provide this data to our community. Pokt.Info would be a great consumer of data from the gateway, but in the meantime they can probe for some of it themselves, which is why I mentioned it to them.

While some of the information they surface overlaps with POKTscan, I do believe that having a diversity of contributors for all ecosystem projects, especially at the current stage of the network, is a net value add.

I personally find this to be an invaluable resource and have voted in favor of it. These are the types of tools the DAO should be voting for and encouraging the development of.


Just going to point out that I find it pretty weird that I had asked openly for access to the gateway data that allowed TH to build this dashboard, data PNI has gatekept and prevented others from trying to build something with, yet now PNI employees are voting in favor of this under the guise of improving diversity of contributors. Lol.


Hi @ArtSabintsev

Thanks for your input. However, to clarify my post above: one of the main reasons I gave a score of 17 on the impact scorecard was that I was told directly by the infrastructure team at PNI that they do not use Pokt.Info and that it wasn’t a replacement for Datadog. I was told that it replaced a dated version of a dashboard PNI had provided as a public good, but that it did not save them any direct amount of money. This is not to say that Pokt.Info is without value, as it’s clear that you and others in the community see value in it; my pushback concerns the extent of this value.

In any case, your input is valuable, as this is a fully subjective exercise! And we want as many people to provide their take as possible so that the community has sufficient information to make a well-informed decision. Consequently, I would love it if you could please fill in the impact scorecard, as it makes it much easier for us all to talk directly about the measurable impact and how our views differ from each other. And it’s a good habit to instil for future proposals too.

Thanks again to @addison for filling out the impact scorecard as part of their proposal


Very much AGAINST this proposal. This site is barely used and the amount asked is much higher than realistic.
Additionally, as @iannn stated, others had no access to the data needed to build something similar.

This should be downvoted much heavier than it is currently.



Following up here.

So, personally, as an individual, I find this tool to be useful. First, it satisfies one of the tenets I hold dear: having diversity and redundancy amongst tools. While I love POKTscan, and hold that entire team in high regard, if it goes down (for any reason), it would cause many issues for the ecosystem. Having a backup is good practice. I feel the same way about POKTscan’s explorer feature and pokt.watch co-existing.

Second, I, as someone who personally participates in node running, and as someone who maintains the node-running side of the business on behalf of PNI, find it useful to be able to compare node runners against each other using Pokt.Info’s dashboard. I use this on top of other metrics that I get.

Third, and to clear the air here:

  • PNI did not ask for this tool
  • PNI was aware at some point during the development cycle it was being built
  • PNI’s roster of developers, until late last year, were not all 100% aware that privileged access to this data was being doled out to a couple of node runners. Some were aware, but not all; there were many skeletons in the closet, and we found 3 new ones recently that we had to address in the migration. Anyway, we did not have the capacity to add more folks into that pipeline, and we eventually killed off that data source. When our Data Warehouse is up and running, everything will be democratized. @iannn, I hope that answers your concern.

…and for the most contentious point: this did not save PNI money, as we did not request this tool. HOWEVER, it was, in my opinion, adding visibility into a valuable set of data. It duplicated access to some data that may have existed in one or two places, but it also surfaced other data points in a UI.

Is it the prettiest tool? No. Is it useful for everyone? No. Do I like it? Yes, which is why I am voting for it. I understand if the community has a different sentiment.

My hope is that this tool continues to exist, and when PNI opens the floodgates with more data when our Data Warehouse is done, this tool, Poktscan, and others, continue to flourish.

On a totally separate but relevant note, it’s good to have disagreements between core team members of PNF and PNI. It shows that we don’t always align, and that’s healthy for the ecosystem. For example, I thought, and will continue to think that the Reddit marketing reimbursement proposals were a complete waste of time and money. Do I respect someone for doing it? Yes. Do I respect that the DAO voted in favor for it? Yes. Do I agree with it being voted on and passed multiple times? No, not at all.


@Dermot @ArtSabintsev @RayBorg

Hey guys,

Thank you for all the feedback and notes. Having an open debate is why we are all here and is what makes the DAO effective. While I would have preferred to have this debate during the proposal process, I understand the community sentiment.

We have a lot of support, but we also have a lot of folks who just do not see the value, and we get that.

We knew that, with the turn of the market, usage of Pokt.Info would slow down and there was a chance the value would not be seen.

To everyone who submitted feedback: thank you. It will help shape the DAO and future builders.

We are going to pull the proposal from the vote.

Thank you to all of you for the feedback.

Up and onwards