radCAD Base Model Discussion

2023-03-20 : After some feedback it was decided to change the focus of the model from V0 to V1. The updated model can be found in the 11th post of this thread.

The idea of this thread is to start a discussion on how the initial model of $POKT economics should be built. This model is expected to be created following the Ethereum economic model, using the radCAD library. For anyone who wants to jump into the discussion, I advise taking a look at the cadCAD masterclass to understand the main parts of the model.

The structures to be presented are only a proposal and subject to change through discussion. This initial proposal is based on the calls led by @profish, with the participation of @msa6867 and @Caesar. For those already familiar with the calls (they were recorded, but I'm not sure if the recordings were distributed), you will notice that I made some changes that we can keep discussing here.

The project, if it is appealing to the community, will be open-source and provide an easy-to-use UI for non-technical people.

The Simple Model

I have outlined some parts of the model, but others are still missing, like the metrics.

State Variables

These are the main variables that will be modified through cycling of the model. I moved some to the Parameters section because they seemed to be things that could be modified by stochastic processes (like the avg. DAO spending, or the avg. number of relays per session). I also divided the total staked $POKT into three categories, since each has its own meaning.
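As an illustration only, this split could map onto an initial-state dictionary like the one below; every name and number is a placeholder, not a proposed value (in radCAD this would be passed as `initial_state`):

```python
# Hypothetical initial state; all names and numbers are placeholders.
initial_state = {
    "total_supply": 1_500_000_000.0,  # total $POKT supply (illustrative)
    "dao_treasury": 0.0,              # $POKT locked by the DAO
    "servicer_stake": 0.0,            # total $POKT staked by servicers
    "validator_stake": 0.0,           # total $POKT staked by validators
    "app_stake": 0.0,                 # total $POKT staked by apps
}
```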

System Parameters

These are the simulation parameters. For some of these we want to explore their effects, such as the RTTM or the ABR; others are more fixed (like Block Time). I tried to include here all parameters that should control minting and inflation.
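To make the swept-vs-fixed distinction concrete, here is a hypothetical radCAD-style parameter dictionary (radCAD treats a list of several values as a sweep); all names and numbers are illustrative, not proposed settings:

```python
# Hypothetical parameters; multi-value lists would be swept,
# single-value lists stay fixed. All numbers are illustrative.
params = {
    "RTTM": [0.001, 0.005, 0.01],  # swept: relays-to-tokens multiplier
    "dao_allocation": [0.10],      # fixed: DAO share of minting
    "block_time_s": [900],         # fixed: block time in seconds
    "blocks_per_session": [4],     # fixed: session length in blocks
}
```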


TODO: I have not created any specific metrics; these should be calculated using the state variables and parameters. Metrics are interesting numbers to track that have no role in the evolution of the system, like inflation or node-runner profits.

State Update Logic

This is a basic loop of the state update logic that should provide information on supply growth. I think that we can avoid the Cherry Picker logic at this point and deliver a prototype that has the simplest metrics, such as inflation and (avg.) profitability. Also note that there is no chain or regional logic embedded; this is intended to be a macro view of the Pocket Network, and then we can start building down into specifics.


  1. The simulation should start by setting a granularity that should be able to go as low as a single session. Block granularity makes no sense in Pocket economics.

  2. Set Inputs: Here the main parameters are modified before any update takes place. Things like the RTTM or the $POKT price are updated according to sweeps, and app/node/validator growth is applied.

  3. Relays Processing: This is simply the calculation of a random number of relays to be processed, based on the current parameters. The total minting is calculated using the avg. bin of the network.

  4. Validation Process: Mostly a placeholder; since there is no node-deep analysis, the validation process does nothing but assign minted tokens to validators.

  5. Update Supply: Here the total supply change is calculated: the total $POKT generated, how much is locked by the DAO (10%), how much goes to the total supply (90% = servicers + validators) and how much is burnt (if there is app burning). DAO spending will also be here, as newly created $POKT.

  6. Calculate Metrics: Here all the metrics are produced; the operations here will depend on the metrics that will be defined.

  7. Update Variables: Time step, block number and variables.

  8. Loop…
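As a plain-Python sketch (not radCAD code), one iteration of steps 2-7 could look like this; the 10%/90% split follows the description above, while the distribution, parameter names and formulas are placeholders:

```python
import random

def step(state, params, rng=None):
    """One session of the loop sketched above. Illustrative only."""
    rng = rng or random.Random(42)
    # 3. Relays Processing: draw a random number of relays
    relays = max(rng.gauss(params["avg_relays_per_session"],
                           params["relay_std"]), 0.0)
    minted = relays * params["rttm"]
    # 5. Update Supply: 10% locked by the DAO, 90% to servicers/validators
    dao_cut = 0.10 * minted
    burned = relays * params.get("app_burn_per_relay", 0.0)
    new_state = dict(state)
    new_state["dao_treasury"] += dao_cut
    new_state["total_supply"] += (minted - dao_cut) - burned
    # 7. Update Variables
    new_state["session"] += 1
    return new_state
```

Step 6 (metrics) is intentionally left out here: metrics would be computed from the returned state without feeding back into it.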

The expected behavior of this model is to produce information on the supply given certain conditions of the Pocket Network, such as relay growth, node growth, RTTM planning, $POKT price, etc. The model should be fit to produce the expected inflation, the profitability of different node-runner set-ups, the total network net earnings (minted value - network cost) and so on. These are just a few I can think of.

So, what do we want?

This is a very rough model, nothing is coded, everything is possible and all of the above can be changed.
The idea is to discuss in the open how far the model should go and what it should include before writing any line of code, and to be clear on what we can expect to obtain in a given time frame.

Once we agree on some basic structures that should be included in a first version, we can start to code or estimate how much effort this modeling would cost.

To be clear, this model is not only math and diagrams; the idea is to produce an interactive model for the community, yes, a UI hosted somewhere (hopefully).
So we would need to know which levers you would like to pull, like being able to set a range of variation on the RTTM or the $POKT price over time. Anyone can ask for something, but keep in mind that models that are too flexible end up being too complex, so the "friendly" version should be kept simple.


Thank you @RawthiL for putting this together. I will review and comment/add thoughts later this week

1 Like

Thank you @RawthiL for this first stab at defining a radCAD model for POKT. First some top-level thoughts.

Regarding economics modeling (or any other modeling), my suggested approach is to build only to the level of complexity needed to obtain the desired objectives. There are three (at least) dimensions that determine development effort for POKT tokenomics/economics modeling:

| Development Effort | low | mid | high |
|---|---|---|---|
| model consumer | self: personal research | ***other researchers*** | ***public*** |
| tokenomics vs economics | tokenomics | ***micro-economics*** | macro-economics |
| complexity | static - mean value | static - stochastic | ***dynamic*** |

E.g., if only the developer and other savvy researchers will use the model, then not much effort should be put into UI. If we are only trying to answer tokenomics questions, don't build an economics model to answer them. Most importantly, only use dynamic models to answer questions that can't be answered by static models, and only use stochastic static models to answer questions for which mean-value answers are not good enough.

My understanding of where radCAD fits into the possible modeling space of POKT is shown in bold italics in the above table. Namely, a dynamic, micro-economic model of POKT that straddles the fence between being used by other researchers and being used by the public. By comparison, last July I started development of an Excel-spreadsheet-based model that was a static, mean-value, micro-economic model that straddled the same "other-researcher"/"public" fence. Both approaches are useful and can be used to answer different questions/achieve different objectives. Having multiple independent models also helps in the process of validating models against each other.

I think a good starting point is to gather in one place all the objectives, questions desired to be answered, problems to be solved, etc., that the community may wish to accomplish through tokenomic/economic modeling. This is a brainstorming exercise soliciting input from as many different stakeholders as possible (developers, node runners, gateway providers, Dapps, investors, etc). The next step would be to disposition these against the type of model that would need to be used in order to accomplish the desired goal. Only then can we begin to answer such fundamental questions as what are the parameters vs what are the variables of a dynamic, micro-economics model such as that which would be built using radCAD. Just as an example, if a pressing need for the community is to understand the feedback loop between POKT price and node attrition or node expansion, then node count would need to be a system variable rather than a system parameter. If understanding that feedback loop is not a needed objective, then the model can be greatly simplified by making node count a fixed parameter.

Second big-picture question: should any effort be put into building a v0 radCAD model, or should all effort be focused on developing a v1 model? Note that the state and logic shown below are focused on v0. I foresee building a radCAD model to be a multi-month effort, and by the time it is ready there may only be 6 months or so left of v0.

The main argument in favor of building a v0 model is that it can be validated against current known system behavior and current existing static models (e.g., Adam's spreadsheet model and my spreadsheet model). This can then give confidence in a v1 model, for which no cross-validation mechanism currently exists.

The main argument in favor of NOT building a v0 model and focusing all effort on building a v1 model is to make the v1 model available as early as possible so that it can be used for purposes of planning well in advance of launch of mainnet (e.g., setting app burn parameters, fishermen allocation, etc.). Early existence of a v1 model can also help shape the system tests that may need to be developed to ensure the code behaves according to expectation.

Some specific comments on the writeup follow.


What is your idea for handling such "quasi-variables" (values that vary over time, but not as a result of the cycling of the simulation)? I see two possible approaches: one, these are parameters set to a fixed value, as you indicate, and the simulation is run a sufficient number of independent times with different parameter sets to build up probability statistics. Or two, these parameters are fed into a single simulation exercise via a file containing a set of values which updates to the next value in the series at specified time boundaries (e.g., once per day for the number of relays per session).
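The second approach could be sketched as a parameter that advances through a pre-loaded series at fixed boundaries; the class and field names below are purely illustrative:

```python
class ScheduledParam:
    """A 'quasi-variable': a parameter that steps through a series of
    values at fixed time boundaries instead of evolving with the model."""
    def __init__(self, values, steps_per_update):
        self.values = values
        self.steps_per_update = steps_per_update

    def at(self, step):
        # Clamp to the last value once the series is exhausted
        idx = min(step // self.steps_per_update, len(self.values) - 1)
        return self.values[idx]

# e.g., avg. relays per session, updated once per "day" of 24 steps
avg_relays = ScheduledParam([1.0e6, 1.2e6, 1.5e6], steps_per_update=24)
```

In practice the value series would come from a file, as described above, rather than a hard-coded list.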

Price: I am trying to understand what kind of process could cause price to vary as the simulation cycles. The difference between price as a variable vs parameter is, I believe, the difference between building a micro-economic model and building a macro-economic model (because there are so many factors affecting POKT price besides what happens within the POKT ecosystem). In order to keep the complexity of the model from mushrooming out of control, I think price ends up being one of those stochastic parameters described above.

Servicer Variables: Perhaps this could be tweaked to be Total (and Total Staked) Per Bin and Avg. Jailed Percentage (is this even needed??), and delete "Avg Bin Size". Note that bins are needed only if we build a v0 model (see above discussion). The whole concept of bins goes away in v1. In its place we will need some kind of servicer POKT-stake distribution to be defined (as I don't foresee specifying a single mean value to be sufficient for modeling purposes).

Servicers Parameters: Note that a fourth servicer type is needed, which might be called "dApp", to capture the servicer type that Vitaly advocates, where a dApp is running a chain validator anyway and runs a POKT servicer for that one chain to earn some supplemental income at near-zero additional cost.

Validators Parameters: Avg. Slashing: probably keep this here and delete it from Parameters; make it per validator type (cloud/BM/DIY). Slashing Prob: probably delete this here and keep it as a Parameter; make it per validator type (cloud/BM/DIY).


RTTM: Whether this is a parameter or a variable depends on the duration of simulation needed to answer whatever is the objective of running the simulation. If less than one week, it can be a parameter; if greater than one week but less than one month (assuming SER passes), it will be a variable, and Target Daily Emission is a fixed parameter. If longer than one month, Target Daily Emission itself needs to be input as a time variable. Again, this is for v0. For v1, I think we need to build the model to allow per-region, per-chain RTTM, as I anticipate that being one of the primary research areas the model will be useful for.
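Under this SER-style setup, the RTTM-as-variable update could be sketched as backing the multiplier out of a Target Daily Emission and a relay forecast; a placeholder formula for illustration, not the actual SER calculation:

```python
def rttm_from_target(target_daily_emission, expected_daily_relays):
    """Hypothetical RTTM update: mint target divided by forecast relays."""
    if expected_daily_relays <= 0:
        raise ValueError("relay forecast must be positive")
    return target_daily_emission / expected_daily_relays
```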

Allocation: For v1 we will need a fishermen allocation.

App Burn: For v1 we will need AppBurnPerSession and AppBurnPerRelay (which may be fixed or may follow a formula the way RTTM does). For v0, we will need some kind of parameter by which to input the value of some sort of off-chain burn mechanism.


I will interact with this portion a bit later. In the meantime one note:

Granularity: I can’t imagine granularity is something to be set. To me it is an upfront design choice that is hardcoded. Session vs. day vs. month would, I think, require different code sets. I would love to see the counter view fleshed out.

1 Like

The value of the model is that it is very easy to change from a v0-oriented to a v1-oriented model. The main mechanisms of Pocket won't change much in V1; some actors can change and new ones appear, and some minting mechanics are modified, but the variables and metrics that we need to track will remain.
Right now V1 will be launched with a single permissioned fisherman and no tokenized portals; besides a change in how relays are distributed (outside the proposed model's initial scope) and the removal of stake weighting, the minting logic will be very similar.

Both of them are possible. The first one only requires the simulation to be run for each set; the second requires coding a mechanism that updates them. This is like the implementation of price variation or staking incentives in the Ethereum model.

Yes, it should be a parameter; I will modify that. I do not intend to model the $POKT price to any extent. I leave that out, to be set as a fixed value or controlled by some kind of external process.

I proposed total staked and avg. bin size since I do not expect to track node stakes so closely (initially). I mean, we could separate the total stake into multiple bins and then calculate the minting based on how much is staked in each bin, but it would be redundant. Stake weighting will not affect income at this level of abstraction, and the metrics for node-running revenue should not differ (POKT earned per POKT invested should be the same for each bin).
The "Avg. Bin Size" exists only for calculating the total minted. In a V1 scenario it can just be ignored (not used). Also, I think that the whole concept of stake weighting (among other things) should be gone in V1 if we want to maximize the diversity of the network, but this is another subject that I will address later in the V1 GitHub.

Total Jailed makes no sense at this level of abstraction (nor for validators); I will remove it for clarity.

This is an edge case, I think; they are welcome in the ecosystem, but I don't think they are the main actor. Also, the model as initially conceived has no granularity on different chains. These "dApp" servicers would be interested only in projections for the specific chain that they use, as they will only stake for that one.

Avg. Slashing was duplicated; as you say, it should be a parameter, not a variable. I will remove it and separate it into those categories.

I expect the model to be run for months, normally. Maybe the parameter should be Target Daily Emission, with RTTM then set as a variable. The target daily emission is a value controlled by an external mechanism. Also, it is a value that is interesting to modify over time, just to answer "what if" questions or to build models past the SER running time.

I also expect this to happen, but I would not go that deep in an initial model. There is a lot of complex stuff that should be discussed specifically for that, probably first in the form of a V1 GitHub issue.

New actors can be added later; I don't think that we need them right now if we want to go up to app burn. Also, fishermen will start as DAO-owned actors; they will have no effect on network-wide minting initially.

For v0 I just used "ABR", but that is not clear enough; it reads like a flag.
Maybe we can use USDRelayTarget and some logic?
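One possible reading of "USDRelayTarget and some logic" is deriving the per-relay burn from a USD target and the current $POKT price; the function below is a guess at that logic, not a spec:

```python
def burn_per_relay(usd_relay_target, pokt_price_usd):
    """Hypothetical: $POKT burned per relay so that apps pay a fixed
    USD amount per relay, whatever the token price is."""
    if pokt_price_usd <= 0:
        raise ValueError("price must be positive")
    return usd_relay_target / pokt_price_usd
```

A higher token price would then mean fewer tokens burned per relay, keeping the USD cost to apps constant.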

No need for it to be hardcoded. I imagine two ways of doing this: model everything down to a session and, if a larger step is chosen (like a day or week), simply run many sessions per model update (this will depend on computational burden). Another could be having two models: a max-granularity model for sessions and a simplified one for larger periods (days or weeks). At most we will need two different models, and only for the minting process, which requires single-node earning tracking; all simplified models are just larger draws from the same fixed distributions.

1 Like

In this case, I think it would be more fruitful to drive this conversation toward a world of V1, where rewards will be dictated by new parameters. We should start considering V1 system variables such as:

  • Fisherman allocation
  • Portal Allocation

Servicer salary variables

  • SalaryBlockFrequency
  • TotalAvailableReward (per relay chain, geo zone)
  • MinimumTestScoreThreshold
  • UsageToRewardCoefficient

This could be done if we all agree to focus on a V1 model of the network and drop any V0 compatibility from the start. If so, the model will serve as a tool for long-term modeling only.

1 Like

I do not share this same level of confidence that it is easy to change from v0-oriented to v1-oriented. As far as architecture goes, I suggest orienting the model to v1 and then, if desired, adding whatever actors you desire (coupled with setting unneeded v1 actors to appropriate null values) to enable it to be used for v0 purposes. But the focus should be on v1.

This is, in part, because we haven't figured out how to incorporate the fishermen and portals into the economics construct of Pocket. Figuring this out should be one of the main focuses of "why spend $$ to build this model". So to say that paid fishermen and portals are not there day 1 is to put the cart before the horse. So I stand by what I said earlier: architect to v1; add as necessary to make it work for v0, if any v0-time-frame objective is worth the $$ to add v0 capability.

The second option gives more flexibility in use, but at an incremental additional cost over the first option. If the cost differential is small, we can go with the second option. Otherwise skip the addition; the capability can always be added down the road.

This level of detail can be deferred to later, but see the following.

I am not sure I agree. I thought modeling node-runner profitability was one of the goals. If that is the case, then modeling it per bin (in v0) may be important. But I also understand that this could still be obtained via spec'ing "total nodes" and "avg bin size". And I am also aware that focusing too much attention on v0 details is not compatible with what I said earlier about focusing first on v1, adding v0 capability as time and $$ permit.

Regarding how to spec the v1 protocol in this regard, this is a very important consideration and outside the scope of this post. I think the topic is important enough to warrant DAO research/governance discussion that is more visible to non-devs than can be achieved just with the v1 GitHub. As long as the code is such that there is a parameter value that effectively turns off stake weighting (e.g., max stake for weighting = minStake), then it becomes a DAO governance decision. I think this course of action is more prudent than scrubbing it from the code.
Let's include this in discussions in the Discord wg along with app burn.

Regarding how to model the v1 protocol in this regard, it is an absolute must-have to model stake weighting. For example, modeling the outcome with and without stake weighting may provide invaluable analysis to bolster the case for setting the parameter value such as to turn off stake weighting.

That brings me to one of my wish lists for this model: (1) modeling profitability per node class (whatever node class ends up meaning); (2) modeling attrition/expansion of node count per class as a function of profit/loss; (3) running simulations to see how the network evolves (e.g., if max stake weighting is set to 10x, then show that over several months all small-staked nodes get squeezed out and only 150k-staked nodes remain).

While this class may be an edge case today, it would not be if Art's proposal to slash rewards to one-third of current values were passed. Vitaly has been very vocal about this category and its importance in a right-sized network. I think it is worth the add.

Hmm. I’ll have to think about this a bit. It may be fine at this level not to model specific chains, but I think it is important to somehow specify how many chains a node class supports. I do not think we can presume that all nodes service 15 chains (or however many are allowed in v1). If the chain-validator class exists, it could be set to 1 chain (and hence 1/15th the reward of other nodes) and zero cost. As the system evolves, it could still end up dominating the servicer landscape if rewards and/or price drop too much.

See the above discussion re fishermen. In this case, however, I agree that in terms of phased development, this could be kept global in a first phase, with differential capability added in a next phase. As with the stake-weighting comment above, this should, in addition to being a v1 GitHub issue, also have visibility in more non-dev-centric forums. We can also discuss it in our wg.

I concur that fishermen, in all their functionality as actors, are not strictly needed to model the effects of app burn. Which brings me to my original top-level point: how do you determine what level of complexity is needed until you specify what questions you are trying to answer and what goals you are trying to achieve by building the model?

Regarding fishermen: not having them as actors in the first phase of a phased development does not prevent defining a DAO-controlled parameter called FishermenAllocation that takes a slice of the pie. Adding that to the emissions pie is near-zero cost and complexity and can be included in the initial phase.
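That "slice of the pie" could be as simple as one more entry in an allocation table; a sketch, where all names and shares are hypothetical DAO parameters:

```python
def split_emissions(minted, allocations):
    """Split newly minted POKT across actor classes; whatever is not
    explicitly allocated falls to servicers. Illustrative only."""
    shares = {name: minted * frac for name, frac in allocations.items()}
    shares["servicers"] = minted - sum(shares.values())
    return shares

# FishermenAllocation is just one more row: near-zero added complexity
shares = split_emissions(100.0, {"dao": 0.25, "validators": 0.125,
                                 "fishermen": 0.125})
```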

For v1, the two current parameters (burn per session and burn per relay) plus at least one parameter for adding to emissions via burn-and-mint. For v0, just keep this as a placeholder for now and (if we model v0) we can circle back to it pending our wg discussions.

Note: silence on all other portions of @RawthiL’s directed reply above may be taken as the equivalent of “copy and concur”

1 Like

That would be my suggestion. If adding v0 compatibility is really not that difficult, it could be added as a development phase just like any other development phase (e.g., fishermen and portal actors).

Development phases may only be a week or weeks apart from each other, not months, depending on the complexity of the capability added in each phase.

1 Like

Well, I think that I will rewrite some things in order to align them to a V1 model, including actors and changing some variables.

I will create a new thread to discuss some important V1 node-running design choices that should be reviewed by the community, and also to propose some mechanisms to ensure in-chain decentralization (up to some level).

There is no incremental cost besides coding what we need; if no update mechanism is coded, then we reuse the last parameter and carry on. I don't think that this is a problem.

I will soon post a new topic to analyze fairness and the distribution of nodes in the network. We can jump in there to discuss stake weighting too, since, as I see it, it will have no use.

These points will require some work, I think; however, they are all part of stake weighting. Let's first discuss whether we want that in V1 (in another thread).

This will require partitioning the model into chains. It will add some complexity, but I think it is just a matter of transforming some variables into vectors and partitioning the data by chain. Not that complex, but more work indeed.
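The "variables into vectors" idea could look like keying the affected state by chain ID; the chain names and numbers below are placeholders:

```python
def mint_by_chain(relays_by_chain, rttm_by_chain):
    """Per-chain minting once relay counts and RTTM are keyed by chain.
    Chain IDs and values are illustrative."""
    return {chain: relays_by_chain[chain] * rttm_by_chain[chain]
            for chain in relays_by_chain}

minted = mint_by_chain({"chainA": 100.0, "chainB": 50.0},
                       {"chainA": 0.5, "chainB": 0.25})
```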

This is one of the points to discuss; the new thread will tackle this too, as I think that stake weighting, chains by node and geozones by node are all part of the same problem.

I don't quite get this. Validators are a completely different actor; they would not be calculated by chain.

This is what we are trying to do here. We do not have all of V1 defined, but we can start by coding what we know will be implemented (like app burn). Probably other actors and features will be enabled over time, so we will have more time to code them. The important thing is to set the foundation in a way that lets us build V1 on top of it.

Agreed, fishermen income nuances can come later; at first they would only be represented as a slice of the total minted.


Sounds good. I look forward to the new thread.

As long as the first phase has a knob to debit a slice of the pie for fishermen, for the purpose of calculating the remainder available for servicers, that should be fine. Then add fishermen as actors in a later iteration.

The utility of modeling attrition/expansion of node classes based on profit/loss conditions goes well beyond any one use case. Agreed that it may not be in the first phase, but it should be a goal.

I think you missed what I was saying re adding a "dApp validator" node class. While modeling at a chain level may be a goal for other reasons in later iterations, it is not needed in order to model this newly suggested node class. Without modeling chains, we could nonetheless specify how many chains each servicer node class serves and use that as a global multiplier in calculating rewards per node class.
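The global-multiplier idea could be sketched as a per-class expected profit, with the chain count scaling rewards; all names and numbers are illustrative:

```python
def class_profit(reward_per_chain, chains_served, monthly_cost):
    """Expected monthly profit for a servicer class, using the number
    of chains served as a global reward multiplier. Illustrative."""
    return reward_per_chain * chains_served - monthly_cost

# A full node on 15 chains vs. a 'dApp validator' class on 1 chain
full_node = class_profit(10.0, 15, 50.0)  # pays its own infra cost
dapp_node = class_profit(10.0, 1, 0.0)    # near-zero marginal cost
```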

1 Like

Well, after a while I got time to update this thread. I will post this here and place a link to this post in the initial one; otherwise the conversation will lose meaning.

The V1 Model

State Variables

Since in V1 the number of actors will increase from two (Servicers/Validators and Apps) to five (Servicers, Validators, Fishermen, Portals and Apps), I gathered them under the category "Pocket Network Actors". This category does not have a programmatic meaning (currently); it's only to mark this difference from the Eth model, which had a single actor.

  • The POKT Variables now include what I think are the economic variables that are not restricted to a single agent. These variables are updated based on the agents' behavior.
  • The DAO Variables is a single variable, the DAO treasury; no changes.
  • Actors Variables:
    • Servicers Variables: I included new variables here that are interesting to track, such as the avg. reward and avg. relays. I also included the new V1 variable, the Avg. Report Card Score.
    • Validators Variables: I removed the slashing from here, as the slashing amount and probability are in fact parameters. They are not updated as a consequence of the network's evolution (in this model, at least).
    • App Variables: Added the avg. burn, as it is interesting to see how changes in the burning parameters affect this value.
    • Portal Variables: A V1-native actor; there is not much clarity on how this one will interact with the rest of the network, but the traffic will probably be something interesting to track.
    • Fishermen Variables: Another V1-native actor. The number of "measurements" it will be performing is interesting, as it will define their requirements (I think).

System Parameters

The number of parameters increases from V0 to V1. I tried to put the new parameters in the same categories as the variables.
The main changes/additions are:

  • POKT Parameters:
    • Added "UsageToRewardCoefficient" and "TotalAvailableReward", which should control minting (or part of it).
    • Added "SalaryBlockFrequency", which determines the number of blocks between mintings (I think…).
    • Added "SessionBlockFrequency", which is the official name for how many blocks per session there are.
  • DAO Parameters: Added the "DAOBlockReward" parameter, which seems to be the update of the DAO allocation of V0.
  • Pocket Network Actors:
    • Servicers Parameters: Added a burn probability parameter, as a burn mechanism will be in place in V1 (related to the score cards).
    • Validators Parameters: Added the "BlockProposerAllocation", which is the new version of the validators allocation.
    • Portal Parameters: Only the cost is in place; I do not understand them well enough to add more stuff here. Anyway, I leave the placeholder for them.
    • Fishermen Parameters: Same as Portals.


Still a TODO, but they are rather agnostic to V0 or V1, I think…

State Update Logic

The state update logic in V1 is a little more complex, but most of that comes from the addition of new actors. Fortunately, we can keep some actors "off" until needed (such as portals or fishermen).


  1. The simulation should start by setting a granularity that should be able to go as low as a single session. Block granularity makes no sense in Pocket economics.

  2. Set Inputs: Here the main parameters are modified before any update takes place. Things like the $POKT price are updated according to sweeps, emission parameters are updated and actor growth is applied (staking/unstaking). Most of the ad-hoc updating is done here.

  3. Apps Processing: Here the number of relays to be processed is randomly selected using the application parameters.

  4. Actor Processing: All the actors can be updated at the same time once the number of relays is defined:
    a. Portal Processing: Calculate average number of relays flowing and rewards.
    b. Servicers Processing: Calculate average number of relays flowing and rewards.
    c. Fishermen Processing: Calculate average number of measurements done and rewards.
    d. Validation Processing: Calculate average producer reward.

  5. Update Supply: Here the total supply change is calculated: the total $POKT generated and burned (by apps and Txs). The total supply is updated and the DAO treasury is updated.

  6. Update Variables: Time step, block number and variables.

  7. Calculate Metrics: Here all the metrics are produced; the operations here will depend on the metrics that will be defined.

  8. Loop…
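As a plain-Python sketch of one V1 iteration (steps 2-6), with the actor split driven by an allocations table; the parameter names echo the list above, but every formula and value is a placeholder:

```python
import random

def v1_step(state, params, rng=None):
    """One session of the V1 loop sketched above. Illustrative only."""
    rng = rng or random.Random(0)
    # 3. Apps Processing: draw the number of relays
    relays = max(rng.gauss(params["avg_relays"], params["relay_std"]), 0.0)
    # Minting capped by TotalAvailableReward (a guess at its role)
    minted = min(relays * params["usage_to_reward_coef"],
                 params["total_available_reward"])
    # 4. Actor Processing: split the mint across the actor classes
    rewards = {actor: minted * share
               for actor, share in params["allocations"].items()}
    # 5. Update Supply: app burn offsets minting
    burned = relays * params["app_burn_per_relay"]
    new_state = dict(state)
    new_state["total_supply"] += minted - burned
    new_state["dao_treasury"] += rewards.get("dao", 0.0)
    # 6. Update Variables
    new_state["session"] += 1
    return new_state
```

Turning an actor "off" is then just setting its allocation share (and costs) to zero, which is how portals or fishermen could be kept dormant until needed.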

A note on V1 economics…

There are a lot of unknowns around how all the V1 actors will play together. I have tried to understand how all of them will interact, but I have found very little explanation on the subject. We are trying to model, on our own, how we would like V1 to work (in economic terms); however, it would be great if we could have some feedback from PNI on this subject.

1 Like

Thanks @RawthiL . I will review this shortly

1 Like