Compensation Structure for DAO Contributors

I want to resurface this idea because I think it is the first clear and cogent deliverable from this conversation. Having the EVA structure in place would allow reimbursements to be segmented into recouping hard costs, paired with additional milestone bonuses, in a way that would both reduce sticker shock (by spreading the ask out over time) and allow many different contributor types to benefit. Teams like @poktblade's and @RawthiL's get the added value of impact measurement, and core contributors like @deblasis and @Olshansky can participate in a meaningful way above and beyond the requirements and compensation of their full-time positions.

I’m going to ping @shane to rough out some ideas, and see about getting a pre-proposal together.


Hi @Jinx, I'm curious about the scope of this post. “Compensation Structure for DAO Contributors” has a much bigger scope and encompasses all kinds of compensation. I believe the community discussions around that have been great; solutioning (getting into specifics) is the next phase.

Your proposed solution above addresses “reimbursements” only (if I understand it correctly), which is a subset of DAO Comp as a whole.

So my question again is, what is the scope and the agenda here?

Are you thinking of breaking this down into smaller subsets (reimbursements in this case) and solving them one by one?


Correct, I started the topic as a larger conversation instead of a pre-proposal because I expected that multiple proposals might be generated from it, and I outlined some of those goals in the OP:

The EVA idea is one of those, and we’ve started work on a draft for it. But there are still a number of open questions. My ideal outcome would be to see some working models created, with 2 or 3 proposals around them to be ratified by the DAO.


Thanks @Jinx and @shane for leading this debate

Thank you also to everyone else for your contributions to this debate.

No different from everyone else, we at PNF have been thinking about this problem for a while too. And we have some suggestions to bring to the mix…

Why valuing public goods is hard

We cannot reduce human ingenuity to an algorithm.

A valuation analysis of a public good is an inherently subjective exercise. Instead of attempting to reduce 100 different variables - from time spent, to location of the team and so on - into a simple equation, we suggest that we focus on improving the language and tools we use when we debate.

As per Aaron Dignan in Brave New Work:

No formula is going to sufficiently capture the complexity of a real workforce. Only transparency, dialogue and judgment can make sense of what is fair…

Impact is the metric that matters

While we want to have lots of people spending lots of time on Pocket related matters, if we want to create a meritocracy, such effort can only be fairly valued on the output it results in. The impact of our collective contributions is all that matters to the future of Pocket. And we should value each other’s contributions accordingly. That doesn’t mean that something very valuable should receive all of the DAO’s treasury, but it does help us understand how to value each contribution relative to each other and what we need to incentivise. Once we understand where the contribution fits in the grand scheme of potential contributions, we can then decide how to calibrate the reward based on how we think about trade-offs, such as needing to incentivise more contributions but also needing not to overpay anyone and keeping enough POKT in reserve to continue funding more impactful contributions over the lifetime of the project.

Did the contribution advance our priorities as an ecosystem? If so, how impactful was it? Did it get us to v1 quicker? Generate 10x more paid demand? Bring in 50 new contributors? Or something smaller, but still impactful?

We have prepared a framework for valuing each other’s contributions using such a perspective on impact. Please see for yourselves here.

PNF Impact scorecard

The scorecard includes the following five factors:

  1. Utilization - how widely used is the contribution? 10/10 likely means that it is the most widely and frequently used product in the whole ecosystem.
  2. Measurable benefit - what is the benefit, and how deep and long-lasting is it? 10/10 would mean the most valuable and long-lasting possible contribution to the ecosystem.
  3. Ecosystem-wide significance - how detrimental would it have been if the contributor didn’t make this contribution? 10/10 would mean a truly game changing contribution that dramatically changes the course of Pocket’s future.
  4. Impact of keeping the contribution free and open - could the contributor have easily kept this contribution closed source, and do they benefit from it in any other way? 10/10 would mean that keeping this contribution private would have had an incredibly detrimental impact on the ecosystem, particularly when there was an alternative paid business model available.
  5. Novelty / innovation factor - it is imperative that we reward creativity and experimentation. 10/10 would be something completely de novo and extremely innovative that involves a lot of risk and ingenuity.
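To make the mechanics concrete, here is a minimal sketch of how five 1-10 factor scores could be combined into a single composite. The equal weighting is an assumption for illustration only; the PNF scorecard itself leaves weighting to each evaluator's judgment.

```python
# Hypothetical sketch, not part of the PNF template: combine the five
# scorecard factors (each scored 1-10) into one composite score.

FACTORS = [
    "utilization",
    "measurable_benefit",
    "ecosystem_significance",
    "open_source_impact",
    "novelty",
]

def composite_score(scores, weights=None):
    """Weighted combination of the five factor scores into a 0-10 composite."""
    if weights is None:
        # Assumed equal weighting; the scorecard does not prescribe weights.
        weights = {f: 1.0 / len(FACTORS) for f in FACTORS}
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[f] * weights[f] for f in FACTORS)

example = {
    "utilization": 8,
    "measurable_benefit": 7,
    "ecosystem_significance": 6,
    "open_source_impact": 9,
    "novelty": 5,
}
print(composite_score(example))  # simple average of the five scores, ~7.0
```

A different evaluator could pass their own weights to reflect the factors they consider most important, which is exactly the kind of divergence the subsequent debate is meant to surface.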

Please bear in mind that this is ultimately a work in progress and the scorecard is merely a tool to help align our thinking. It is not an exact science.

The benefit of constraints

Most proposals should fit within a reasonable range, so we believe we should agree on an appropriate range and reserve debate for genuine exceptions.

We should apply a soft cap to proposals that the DAO should only exceed in exceptional circumstances. For example, we believe that any one complete piece of work shouldn’t receive more than 4% of the treasury / $300k, so that we have enough left in reserve to fund many more game-changing proposals over the life of the DAO. Further, grants at the top end of the range should be reserved only for truly exceptional contributions that meet such an incredibly high bar that they should be a rarity in practice. The more contributions we have, the easier this will be to gauge.

Nothing is a replacement for using your own judgment

The scorecard should never be the sole arbiter of the truth. If it is to be useful, it should empower all of us to use our judgment. This is why you will see questions following the scorecard that ask you to consider other factors such as whether the score you gave in the scorecard appears to result in a reward that is too high / low in comparison to previous rewards, the actual work done, and so on.

While it is largely the role of PNF to maintain a coherent public strategy for managing the DAO’s treasury (something we are working on, and which is why we suggest the relevant soft caps that we have in the scorecard), it is the role of the DAO to determine what impact means for this community and to opine on value creation. Such a subjective exercise benefits from aggregating our collective perspectives.

We hope that many active contributors - including groups like GRIP - will submit their own versions of the scorecard to each debate proposal. And that we can then work together to iron out our differences and align around the fair share of the DAO’s treasury to award for the proposal in question.

While such an exercise will always be inherently subjective, aligning on similar frameworks for understanding value creation should lead to richer debates and enable us to learn from each other while doing so.

Next steps

In an ideal scenario, we can crowdsource improvements to this framework using this powerful hive mind of ours, as well as a growing bank of precedents as we learn from more community contributions. We will always have our differences, but at the very least we should be arguing about the same things and using similar language to do so.

We look forward to hearing everyone’s thoughts and working together to reach alignment on a more productive way forward.

We have shared the scorecard already with the Poktscan (cc @Jorge_POKTscan @michaelaorourke ) and Poktfund (@poktblade @Poktdachi ) teams, and hope that this approach will be used to advance those conversations too.


One more thing! @b3n plans to share more details on the other programs that PNF is launching soon, which should ameliorate many other concerns around the proposal process. Namely, the friction of paying a contributor upfront, the friction of getting paid retroactively for smaller impactful contributions (eg sub $25k) and an RFP process for identified ecosystem needs.


This is a phenomenal stab at organizing and systematizing an approach to a very complex subject. Thanks!


This is helpful. But it raises the question of ‘who scores the scorecard?’

I feel that a useful addition would be a 3-person non-binding committee that offers its scoring, alongside the proposer and any community member who would like to contribute their own version.


Thanks @Dermot (&PNF) for putting this together.

So basically we are taking a more qualitative approach (instead of a quantitative one) to arrive at a valuation model.

Intellectually speaking, I would like to further clarify your following lines:

It depends on what kind of public good we are talking about.

The air we all breathe can be considered a public good and I agree that it is hard to quantitatively value air and assign a number.

But on the other hand, public blockchains and protocols are public goods that can be quantitatively valued, and very conveniently, because the metrics and their definitions are available in the industry today. I refer back to the debate I started on den about Pocket Network’s financials; a follow-up is due there.

The protocol is a public good, the DAO is a non-profit, and the foundation is a non-profit; however, the entities building on them will most likely be for-profit.

We could come up with a fully quantitative valuation model for the for-profit entities building in the ecosystem, but we don’t have the means to audit their inputs, which would most likely be “cost based”. Nothing is on-chain, and it would be hard to standardise and templatise. I guess we could, but that would involve a lot of trust and conflicts.

The challenges above apply to “reimbursement” type compensation.

And therefore I agree with the “impact”-based evaluation model.


@Cryptocorn has already asked the question of who will do the evaluations. I agree with the committee approach. I believe @Cryptocorn and @zaatar suggested PNF + GRIP + DAO member + community member, or something similar.

And I sincerely recommend proactively managing possible centralisation/concentration, abuse of authority, unhealthy coalition formation, etc. through timely rotations and other measures.

More on this here and here. Not trying to be a cynic; this is just risk/conflict/COI management.

So will the committee’s role be to sanction or not sanction the grant? Or to decide whether or not to promote the ask for a DAO vote? Those two are very different in terms of empowerment, and also in other factors such as time and efficiency.

Ok, I arrive at my favourite topic- how will we budget this?

Hypothetically, what if there are 15 requests and each at 4% of the treasury? That’s 60% of the treasury. It’s improbable but not impossible.
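The hypothetical above is simple arithmetic, and it is worth noting that the answer depends on how the 4% cap is interpreted. A toy sketch (treasury normalised to 1; not a real treasury model):

```python
# Illustrative arithmetic only: 15 grants at the 4% soft cap, under two
# interpretations of "4%".

initial = 1.0  # treasury normalised to 1

# (a) each grant is 4% of the *initial* treasury
flat_spend = 15 * 0.04 * initial          # 60% of the treasury spent

# (b) each grant is 4% of whatever *remains* at the time of the grant
remaining = initial
for _ in range(15):
    remaining -= 0.04 * remaining         # equivalent to remaining *= 0.96
compounding_spend = initial - remaining   # ~45.8% spent

print(f"flat: {flat_spend:.0%}, compounding: {compounding_spend:.1%}")
```

Under interpretation (b) the treasury is never fully exhausted, but either way a run of cap-sized grants consumes roughly half the treasury, which supports the call for explicit budgeting.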

Moving forward, we would ideally want more and more high-quality proposals. The responsible way forward is to factor in those low-probability scenarios as well.

And therefore I re-appeal to PNF and to the DAO- we need budgeting and planning.

The DAO treasury is not infinite and therefore will require rationing at times, which will only happen if we do budget planning.

Reiterating that my comments have no connection with the ongoing reimbursement debates.

Thanks to you and @b3n for the above and thanks for reading.


Thanks, @Cryptocorn and @Caesar, for bringing up these great questions

First things first:

We hope that the proposal authors fill the scorecard out in the first instance to justify their request. And then, every community member interested in participating will challenge such requests by completing their versions of the scorecard - it’s only five categories - and asking relevant questions to decide on the weight they will apply to their evaluation.

This type of process should lead to a better debate. Right now, while the community challenges requests, the debates are very abstract and generally quite disconnected from the impact in question, and they resort to ad hominem attacks all too quickly. Additionally, impact - until now - has been bandied about in a very loose fashion by proposal authors. As we are an ecosystem, not a private for-profit entity, we don’t have just one stakeholder (eg shareholders) to serve along one primary metric (eg $ profits). Therefore, having multiple frames for considering impact should help get us closer to the right answer.

As there can never be one “exact” answer, more perspectives will help get us closer to this community’s version of the truth, which is why we do not want to delegate this authority to a smaller subset of the community, and actually desire more participation. This is why we advocate for an open process involving more of the community, with PNF in the role of facilitator providing guidance and direct non-binding recommendations where helpful to do so.

On the other hand, for some of the new smaller grants programs that PNF is planning to implement soon, a representative committee formed of PNF plus some community members will be beneficial to shine some light on the practices of PNF and to add more legitimacy to the process. More info on that to share soon :slight_smile:

Ah yes, you are correct that blockchains can be quantitatively valued from the perspective of an investor seeking a $ return. But whose perspective should we consider for Pocket? And on what basis if we aren’t taking any equity or tokens in return for the $ and time we give to support the public goods in our ecosystem? As the DAO is not a monolith, and our perspectives and incentives are manifold, any public goods valuation will be inherently subjective and not directly quantifiable.

This is a debate that I have had with many people in the community at this stage! With @Jinx and @shane in particular.

I agree with this point for RFPs where work is determined upfront. However, in the case of a retroactive reward that was never fully specced out by the community in advance, we cannot give out rewards based on costs for two reasons:

  1. Paying for hours completed rewards inefficiency. It’s similar to lawyers being incentivised to bill more hours than they need to. Further, DAO members have no ability to challenge whether someone actually did those hours, or whether all of those hours were necessary. It could also lead to strange edge cases whereby a contribution that required a lot of work but delivers a small amount of impact would be paid much more than something that took less time but was far more impactful.

  2. My quote above basically sums this up. If we agree on the right frame for “impact”, along with the revised mission/vision/priorities outputs from DNA, we will have a powerful bat signal to attract contributors near and far to deliver as much value to the ecosystem as possible to drive the priorities we care about. Lots of hours spent on something matters little. It’s all about impact.

And in good news, it seems like we are in agreement!

We agree. We are working on it and will share more thinking on it soon. You will see that there is a question in the scorecard under the heading of “DAO Budget planning” that asks questions / poses considerations such as:

  • How many more similar grants do we expect to fund in the next 2 years?

  • Can the DAO afford to fund similar grants in the future at this level of compensation?

  • If this is a unique, extraordinary one-off, consider a larger grant.

  • If we need to fund many more of these types of grants in the future, ensure that the DAO has sufficient budget going forward to fund these types of contributions.

  • Avoid allowing exceptions to become the rule.

Ultimately, we see it as core to PNF’s mission “to bolster the efforts of the DAO” by stewarding the DAO’s thinking around these points, so it’s a great point. And we are very much in alignment.


This is an interesting idea. Perhaps an example scorecard would be helpful. If you were asked to create a scorecard for your role with the DAO, what would that look like? I would be willing to provide my evaluation to test this concept out. Care to give it a go?

I agree 100%. In the spirit of “leadership starts at the top”, perhaps the DAO core team should lead this idea, with each member creating their own scorecard? Is that a fair ask? The PNF core team is also meant to provide impact, correct?


The scorecard was created to help facilitate conversations around large proposals and retroactive funding requests. It was not designed to be a school report card.

PNF is required to provide a quarterly transparency report and part of this will include reference to our own impact.


Is there an example of a scorecard anywhere? Isn’t the purpose of this forum to facilitate conversations around large proposals and retroactive funding requests? Again, an example would be interesting to see.


This is the template I shared in my previous post - PNF template impact scorecard for DAO grant proposals - Google Sheets

We have workshopped this template with different builder teams such as Poktfund, Thunderhead, Poktscan as well as the protocol team who are working with external contributors via bounties at the moment - and soon other programs such as Sockets and RFPs.

Every builder that has asked for a retroactive reward has been asked to complete this template and share their version with the community for the community to assess themselves. This has yet to happen, but we understand that it will happen once updated versions of currently active proposals are shared with the community in the coming weeks.

Is there a particular funding request that you wish to apply this impact framework to? FWIW, GRIP is not a public good in the sense of LeanPOKT or Geomesh etc, so the template I shared is not directly appropriate, although viewing the cost/benefit of the proposal through the lens of impact is a useful frame, as Ben and I have said in different ways.

I would love to see your application of this framework / challenge to any of its parameters wrt any of the active funding requests.

As per my previous comments, PNF should not be the sole arbiter of the truth wrt impact:


I do like the spirit of this.

I was thinking about GRIP when I first commented. But I was also thinking that PNF (and individual team members) should be asked to provide something similar for the community to discuss. The PNF core team represents the largest allocation of the DAO’s treasury. So it seems to me that if a scorecard is required (again, I like the idea), it should be required for all allocations over X. Meaning it should also be required for the funds used to pay the PNF team.


Just to clarify, the PNF core team isn’t paid from the DAO treasury. PNF has its own treasury from the genesis allocation of the project.

We appreciate the challenge in any case. We want everyone to be mindful of showing their impact. In terms of how we show our impact and do what we do, I refer to the Foundation section of the forum where you can see our roles and responsibilities, roadmap, budget and various transparency updates. We will have more detail to show in terms of our impact to date - as well as how much of PNF’s treasury (not the DAO’s) that we have spent to date - with our first transparency update due at the end of next month. This will include the metrics we track to measure the impact of our work, and is very much open to feedback from the community.

Further, once we define our “DNA” as an ecosystem in the coming weeks, we also plan to embed and adopt a “DAO OS” which will include practical measures so that all of us - including PNF - can publicly live our values and work together for the maximum benefit of the project.

Defining and embedding how we work in public is key to the transparency component of our values as we realise that although many people in the community have expressed the value of our work to us directly, this isn’t visible to everyone, such as perhaps yourself.


Thanks for the clarification, I need to take time to better understand that.

I like the sound of this for sure. I’ll be looking forward to getting to this point.

I also appreciate the work you and the rest of the core PNF team is doing. That’s not what I was challenging. My point was that everyone should be held to the same expectations. Further, those expectations should be clear, in writing, and objective - not subject to the opinions of a few people.

Thanks for addressing this @Dermot. I like the direction and I’ll look forward to seeing this come together.


Hello All,

I’m just now seeing this and have some thoughts.

My thinking is that by focusing scoring on “how we work” rather than just “what we produce”, the results will be better balanced.

This is only possible because PNI/PNF have already outlined clear objectives for a given quarter, so the results everyone is working towards are already bought into.

And this remains possible only so long as clear objectives and key results are published, and bought into, on a regular basis.

If people don’t agree with a given set of OKRs, that’s a different debate, but let’s assume we all agree that the results are good enough at keeping everyone directionally focused.

Here I will focus on the impact factors, because I believe that with well-designed impact factors, most other calculations are taken into account, whether directly or indirectly.

Another benefit of focusing on impact factors is that a tool can be created to have the community quickly weigh in on proposals/contributions. That tool may have its own set of risks and design considerations, so I will leave that out here. But it could be as simple as a scorecard app, or as web3 as a prediction-market-like system.

In given seasons of Pocket’s lifecycle, you can adjust the weights of the impact factors to signal importance and incentivise behaviours.

I think this is extremely valuable. Some factors are more valuable at certain times than others, and quarterly (and very rarely ad hoc) that should be understood and made public.
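The seasonal-weighting idea above can be sketched in a few lines. The factor names and weight vectors here are hypothetical, not an agreed set; the point is only that the same factor scores produce different composites depending on the weights published for the season.

```python
# Sketch of dynamic (seasonal) weighting of impact factors.
# Factor names and weights are illustrative assumptions.

def weighted_score(scores, weights):
    """Weighted average of factor scores under a season's weight vector."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

scores = {"relevancy": 9, "speed": 6, "cost": 4}

# A season prioritising shipping against the current OKRs...
okr_season = {"relevancy": 0.6, "speed": 0.3, "cost": 0.1}
# ...versus a season prioritising frugality.
frugal_season = {"relevancy": 0.3, "speed": 0.2, "cost": 0.5}

print(weighted_score(scores, okr_season))     # 7.6: ranked high this season
print(weighted_score(scores, frugal_season))  # 5.9: ranked lower under frugality
```

Publishing the weight vector each quarter would let contributors see in advance which kinds of impact the DAO is currently prioritising.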

Anyway, here are my proposed impact factors.

They are structured as 3 guiding values, each holding 2 competing forces, so 9 total.

This design is because we have to acknowledge the fundamental tension (which I think the authors of this framework have done a good job of, btw) of a DAO trying to be productive without centralizing.

The tension is between efficiency and resilience, with resilience being a function of diversity and interconnectivity.

Long term we want more people contributing with self sustaining business models, but there are limited resources.

Here are the impacts I think that would be a good starting point.


This is my favorite one because it ties directly into what we need to do (the OKRs).

Let’s use LeanPOKT as an example.

If the objectives were published for that quarter, with the highest priority being:

Cut network infrastructure costs by 80%.

Then the relevancy of this proposal demands not only a high score for potential reward, but also a high score for active vetting of the solution to ship.

Whereas if that same month someone proposed to do a marketing campaign,

although that is important, it is not as relevant to the needs at that time.

Thinking with this guiding value in mind, and evaluating based on it, will I think do wonders for proactively aligning the community.

The more this is weighted the less time will be spent on irrelevant discussions.

The competing forces that sit within it are speed and cost.

These ones are self explanatory.

We want cost to be as low as possible, but the faster we need something done, the more it’s going to cost, in most cases at least.

There are additional considerations of cost as well that are interesting, like QA time from the core team.

By fleshing this out we will understand deeply the hindering costs in terms of time, opportunity, and finite resources.

By increasing the weight of speed, you will incentivise smaller, more iterative contributions.

By increasing the weight of cost, you will incentivise larger, more collaborative/innovative contributions.

Cost could perhaps also be renamed to something like “frugality” or “affordability”.


Diversity does not mean completely different things. Diversity means a healthy distribution of teams working on similar solutions.

For example, we still want more node runners, even though there are already node runners.

The same applies to all tooling and initiatives in the ecosystem.

Including diversity as a weight within the impact factors will open the door for healthy competition within the ecosystem.

By weighting diversity more, you will get more competition and novelty. By weighting it less you will get more focused contributions.

The competing factors are innovation and synergy.

Innovation is explained well in the existing impact factors.

But I believe that needs to be paired with synergy.

The contributions should seek to build with or upon one another in ways that make sense for the compounding value of this networked ecosystem.

Without synergy, you may get a fully fractured community where nobody is incentivized to connect, learn and grow.

Weighting innovation will incentivize autonomous novelty, while weighting synergy more will incentivize interoperability and ecosystem wide UX.

  • Can you tell how fun it would be to make the impact scorecard a dynamic weighting system? It seems really powerful to me.

Accessibility

Accessibility seems like synergy in some ways, but the difference is in onboarding.

A highly accessible contribution results in foolproof UX or grade-A documentation/support resources.

It also results in increased capacity to be improved upon, whereas synergy leans more towards “works well with”.

The competing forces here are efficiency and collaboration gains.

Efficiency gains save time and resources, and tend towards disqualifying potential participants, much like how AI will displace many jobs.

Collaboration gains tend towards creative perspectives, culture formation, and universal basic contributions, such as rallying around a narrative that everyone understands because they were taken along for the whole process.

Weighting efficiency more will get you faster optimizations.
Weighting collaboration more will get you broader interest and potential energy, leading to things like new community members rising to stardom, or broad guerrilla marketing like viral word-of-mouth.


  1. I believe the use of a scorecard/rating system is very important for broadening community contributions.
  2. I believe the use of dynamic weighting and competing forces is important for finding how to walk the line in any given season of the organization.
  3. I think those 9 factors are a great place to start, but could probably be improved upon. Except relevancy; I think that is a must.

I appreciate this well-thought-out contribution. There are a lot of important points to consider here, but a big standout to me is the focus on established OKRs within the quarter.


Thank you.

And yes, I think a system like this will result in:

Discussion clarity, and buy-in for the objectives and key results.
Alignment around the cultural building blocks: the values.

Which can unlock the real potential of a given DAO.

I could talk for days about this. If anyone wants to chat more, just lmk.


Hey @Patrick727, thanks for adding your valuable perspective to the conversation!

First off, as I’ve said before:

So we’re fully in alignment in realising that there is no perfect algorithm and that we need as many perspectives involved as possible.

Following up to some of your specific points:

What do you make of the current version of the impact scorecard? See here and the accompanying explanation above -

This relates nicely to the recent post from @b3n about the new “Eras” structure for the DAO - A new Era in DAO Operations

How do you think this point dovetails with the impact factors of ecosystem significance and novelty/innovation? There could be a nice interplay here.

This also seems like a mash-up between the utility and benefit impact factors, along with a consideration of our DNA.