@Dermot
Re differentiated pricing: I have been interacting with C0d3r on their v0 proposal for RTTM. I see no duplication of effort or working at cross purposes between my proposed work in this regard and theirs. As node runners, they are focused on per-chain rewards and the v0 code change needed to accomplish this. I, on the other hand, am focused on v1 and approach the broader subject from a systems engineering perspective. My research will deal with:
- how per-chain, per-geozone and per-call differentiation can be enabled at the same time most efficiently (e.g., coding per-chain now only to have to recode for per-geozone later is highly inefficient); see the sketch following this list
- working out the implications of per-x rewards on per-x burn for self-staked apps and gateways
- working out the implications of per-x rewards for the gateway sales process
- working out the implications of per-x rewards both on inflation and on the ability to predict inflation (both at maturity, where mint < burn, and during the subsidized transition period)
- working out the implications of per-x rewards for network providers, whose reward rates may no longer be as predictable as they were prior to per-x rewards
- re per-call-type pricing, whether or not the pricing models of competitors necessitate POKT moving in this direction.
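To make the "code once for the final state" point concrete, here is a minimal sketch of what a unified differentiation mechanism might look like. Everything in it is hypothetical (none of these names come from the v1 codebase): the idea is simply that if reward multipliers are keyed on a generic tuple of dimensions from the start, adding per-geozone or per-call-type later is a data change, not a second code change.

```python
from dataclasses import dataclass

# Hypothetical illustration only -- not actual v1 code. Reward multipliers
# are keyed on a generic tuple of dimensions so that per-chain, per-geozone,
# and per-call-type differentiation all share one code path.

@dataclass(frozen=True)
class RelayClass:
    chain: str
    geozone: str | None = None     # None = "any geozone"
    call_type: str | None = None   # None = "any call type"

# Governance-set multipliers; values here are invented for illustration.
MULTIPLIERS: dict[RelayClass, float] = {
    RelayClass("eth-mainnet", "eu-west", "eth_call"): 1.5,
    RelayClass("eth-mainnet", "eu-west"): 1.2,
    RelayClass("eth-mainnet"): 1.0,
}

def reward_multiplier(chain: str, geozone: str, call_type: str,
                      default: float = 1.0) -> float:
    """Resolve the most specific multiplier, falling back to coarser keys."""
    for key in (RelayClass(chain, geozone, call_type),
                RelayClass(chain, geozone),
                RelayClass(chain)):
        if key in MULTIPLIERS:
            return MULTIPLIERS[key]
    return default

# e.g. reward_multiplier("eth-mainnet", "eu-west", "eth_getLogs") -> 1.2
```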
I think this work aligns with the work C0d3r is doing because (1) it is more efficient to code once to achieve the final desired state than twice in steps, and (2) it works through the systems engineering issues that are being raised in the comments section of their pre-proposal but that are not really within their area of expertise to address.
With your permission, I would like to resume this socket to start focusing on these issues, which will probably take a few weeks to a month, even if the socket's continuation as an ongoing concern after that is TBD.
Re staking (both on the demand and supply side), and possibly other “economic” parameters such as burning: before addressing the larger picture of items that overlap with BlockScience and other community participants, there are some experimental thoughts I would like to pursue that are completely outside the purview of others, having to do with potential opportunities for applying concepts of “grandfathering” to the tokenomics, both at the transition from v0 to v1 and during the transition from v1 to maturity. This might be useful, for example, as a tool to close clients in the sales funnel (e.g., "existing clients only have to stake x amount, or will have x burn rate, if they get in before the transition to v1, but will jump to y once the transition happens").

For example, an effective way this might have been used in v0, but wasn't because there was no mechanism to allow it, would have been to jack up the amount of POKT required to stake a new node at the end of 2021/beginning of 2022 to slow the influx of new nodes while grandfathering existing nodes that got in earlier; a minimal sketch of how such a schedule might be encoded follows below. Looking into this area would probably take only a couple of weeks before interacting with PNF to see if anything is worth pursuing further, as any such capability does come at the expense of the code complexity needed to enable it.
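As a sketch of the mechanism, suppose the protocol records the height at which each actor first staked (an assumption for illustration; all thresholds and heights below are invented). An actor's stake requirement would then be locked to the schedule entry in force when it entered:

```python
# Hypothetical "grandfathering" sketch -- not actual protocol code.
# (activation_height, min_stake_in_POKT): the requirement that applies
# to actors who first staked at or after that height.
STAKE_SCHEDULE = [
    (0,       15_000),   # original requirement
    (50_000,  30_000),   # raised to slow the influx of new nodes
    (90_000,  60_000),   # raised again at a later transition
]

def min_stake(first_staked_height: int) -> int:
    """Return the requirement in force when the actor first staked,
    so earlier entrants keep (are grandfathered into) the older, lower bar."""
    requirement = STAKE_SCHEDULE[0][1]
    for height, stake in STAKE_SCHEDULE:
        if first_staked_height >= height:
            requirement = stake
    return requirement

# min_stake(10_000) -> 15_000 (grandfathered); min_stake(95_000) -> 60_000
```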
Re dynamic pricing of applications: I know you listed "C0d3r and Ramiro to discuss in the first instance," but unless they have already accomplished this task, I would like to put together pseudocode for it; I believe it will be fairly straightforward and should only take a week or two. A rough sketch of the flavor I have in mind is below.
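This is only one possible reading of the task, and every name and parameter in it is an assumption: pin the effective cost of relays to a USD target by scaling the relays an application earns per staked POKT against an oracle price, with governance guardrails.

```python
# Hedged sketch, not a specification: stabilize the USD-denominated cost of
# service by adjusting relays-per-staked-POKT with an oracle price.
TARGET_USD_PER_MILLION_RELAYS = 5.0   # governance target (assumed value)

def relays_per_staked_pokt(pokt_usd_price: float,
                           min_rate: float = 100.0,
                           max_rate: float = 100_000.0) -> float:
    """Relays per session an app earns per staked POKT, clamped to guardrails
    so an oracle glitch cannot swing throughput arbitrarily."""
    raw = pokt_usd_price * 1_000_000 / TARGET_USD_PER_MILLION_RELAYS
    return max(min_rate, min(max_rate, raw))

# e.g. at $0.10/POKT: 0.10 * 1e6 / 5 = 20,000 relays per staked POKT
```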
Re interaction with BlockScience, there are two possible paths that can be taken.
- BlockScience is the sole determiner of which experiments to run and, based on those experiments, makes recommendations as to parameter values/ranges. If pertinent experiments that the community feels need to be done are not included, then either BlockScience or the community would need to go back and run them, which may completely up-end the recommended values/ranges, leading to delay, confusion, etc.
- The DAO/PNF makes a list of experiments that it feels must be done in order to make a judicious choice of parameter values/ranges/methodologies (for dynamic-value parameters), and better yet, has for each experiment a hypothesis about what it expects the experiment to show. From this complete set, BlockScience then makes its recommendations.
To me, the second approach is more efficient and leads to less confusion and delay than the first. It is where I was heading, as indicated in the future-work discussion within the monthly report. Please advise.