Change from Trailing 30-day Average to Trailing 7-day Average for calculating RTTM


Moving from a trailing 30-day average to a trailing 7-day average for the relay count used in the RTTM calculation will, when combined with the ACCURATE proposal, lead to less deviation of emission rewards, both over and under, from the target amount.


Moving to a weekly cadence of updating RTTM can still lead to large over-mint or under-mint conditions if the averaging method used to calculate relays is not simultaneously updated to be more responsive than the trailing 30-day average currently in use. It is suggested that, in combination with moving from a monthly to a weekly cadence in setting RTTM, the averaging method be updated from a trailing 30-day average to a trailing 7-day average. 7-day averaging provides sufficient smoothing of random peakiness in relays while being responsive enough to larger trends in relay growth to prevent most overminting.
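As a minimal illustration of the responsiveness argument (using synthetic relay counts, not actual chain data), a trailing average is simply the mean of the most recent N daily totals; on a growing series, the 7-day window sits much closer to current volume than the 30-day window:

```python
def trailing_average(daily_relays, window):
    """Mean of the most recent `window` daily relay counts."""
    recent = daily_relays[-window:]
    return sum(recent) / len(recent)

# Hypothetical daily relay counts growing by 10k relays per day:
daily_relays = [1_000_000 + 10_000 * day for day in range(60)]

avg_7 = trailing_average(daily_relays, 7)    # mean of days 53-59
avg_30 = trailing_average(daily_relays, 30)  # mean of days 30-59
print(f"7-day:  {avg_7:,.0f}")   # 1,560,000
print(f"30-day: {avg_30:,.0f}")  # 1,445,000
```

The 30-day benchmark lags today's volume by roughly half its window, which is the source of the overminting during sustained relay growth.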

The following spreadsheet provides analysis of the overminting that occurred during the month of October (prior to the chain halt) due to relay growth in October relative to September averages, as well as the overminting that hypothetically would have occurred under different cadence and averaging conditions. As can be seen, a move from monthly to weekly cadence while still using a trailing 30-day average would only have reduced the October 1-25 overminting from 33% to 21%, and moving to a daily cadence scarcely reduces the overminting further. On the other hand, moving to a weekly update cadence in conjunction with a trailing 7-day average would have reduced the overminting during this period to only 6%, a more than three-fold reduction compared to moving to a weekly cadence alone while retaining the trailing 30-day average.
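The mechanism behind these numbers can be sketched as follows (this is not the spreadsheet's exact formulas, just the underlying logic on a synthetic growth series): RTTM is set at each update so that the benchmark relay count would mint exactly the daily target, so actual minting scales with the ratio of actual relays to the benchmark.

```python
def aggregate_overmint_pct(daily_relays, cadence_days, window_days):
    """Percent by which minting exceeds target when RTTM is reset every
    `cadence_days` from a trailing `window_days` average of relays.

    RTTM is chosen so the benchmark relay count would mint exactly the
    daily target; actual minting then scales with actual / benchmark.
    """
    minted = target = 0.0
    benchmark = None
    for day in range(window_days, len(daily_relays)):
        if (day - window_days) % cadence_days == 0:
            window = daily_relays[day - window_days:day]
            benchmark = sum(window) / window_days
        minted += daily_relays[day] / benchmark  # in units of daily target
        target += 1.0
    return 100.0 * (minted - target) / target

# Hypothetical steadily growing relay series:
relays = [1_000_000 + 15_000 * d for d in range(120)]

monthly_30 = aggregate_overmint_pct(relays, cadence_days=30, window_days=30)
weekly_7 = aggregate_overmint_pct(relays, cadence_days=7, window_days=7)
print(f"monthly/30-day: {monthly_30:.1f}%   weekly/7-day: {weekly_7:.1f}%")
```

On any sustained growth trend, the shorter window and faster cadence together shrink the lag between benchmark and actual relays, which is why the combination outperforms either change alone.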


It is suggested that this proposal be considered and voted upon simultaneously with the ACCURATE proposal. Assuming that this proposal passes alongside, or is combined with, the ACCURATE proposal, the revised ACCURATE methodology would be as follows:

  • Each Monday, RTTM is to be updated (unless Monday is a public holiday, in which case RTTM may be updated on the following business day). The trailing 7-day moving average will be employed as the benchmark for relays.


Thanks to @Vitaly for his suggestion that existing deviations from target emissions be fixed prior to consideration of the SER proposal. Thanks to @cryptocorn for heeding this call and for the discussions leading to this and the ACCURATE proposals.

Thanks to @Caesar for his call for the move to governance structures that allow for “agility” in proposal decision making for matters needing little to no debate or refinement. It is the hope of @cryptocorn and myself that breaking down changes to the RTTM calculation methodology into individual actionable items will allow non-controversial items to be voted on and approved quickly, while allowing more time to refine and debate more complex or controversial changes.

Dissenting Opinions


I think that like ACCURATE, this will be another small but useful improvement. Unless there is major dissent or a significant flaw found, I’d propose to combine the above into ACCURATE if it reaches proposal stage and allow for a single vote on the combined proposal.


Thanks @msa6867 and @Cryptocorn . I am unofficially monitoring ‘agility’ from ‘trigger to closure’ through this example. A ‘single vote on the combined proposal’ sounds more conducive to my objective.

Btw related questions- are there agreed upon SLAs in the ‘change management process’ (CMP) for changes of various complexities and urgencies? Also is there a documented CMP?


I agree with @Cryptocorn , this pre-proposal should be voted as a single proposal with ACCURATE if it is the wish of both proposers. The idea of changing the trailing average to a value that is best suited for the update period and traffic randomness (relay changes) is good.

Meanwhile, I wanted to ask for more information. The provided spreadsheet does not show how the values were calculated. Also, it shows only a 30-day period, which means there is only a single data point (one update in the 30-day case).

  • Can you provide the formulas that you are using? (The spreadsheet shows only values; the operations are hidden.)

  • Is it possible to compare the deviations on more points? (there is at least one year of data available)

  • Style comment: please use the ISO date format.

I think that a series of curves showing the expected and total minting using different trailing averages (30-day, 15-day, 7-day, etc.) over larger periods of time would provide more insight into the best value for the trailing average. The curve with the least deviation from the expected value should be used.

A comment more related to ACCURATE: the provided spreadsheet shows how using two different trailing averages affects the over-minting with monthly, weekly, and daily updates. This analysis can be used to justify the selected update cadence, using the same reasoning as proposed above.
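The suggested comparison can be sketched in a few lines: sweep several trailing windows over a long synthetic series (growth trend plus noise, standing in for the real relay history) at a fixed weekly cadence, and report each window's total minting deviation:

```python
import random

random.seed(1)
# Synthetic year of relays: a growth trend plus day-to-day noise.
relays = [1_000_000 + 3_000 * d + random.randint(-50_000, 50_000)
          for d in range(365)]

def total_deviation_pct(series, window, cadence=7):
    """Total minting deviation (%) for a trailing-`window` benchmark
    refreshed every `cadence` days, starting from a common day 30."""
    minted = target = 0.0
    benchmark = None
    for day in range(30, len(series)):
        if (day - 30) % cadence == 0:
            benchmark = sum(series[day - window:day]) / window
        minted += series[day] / benchmark
        target += 1.0
    return 100.0 * (minted - target) / target

deviations = {w: total_deviation_pct(relays, w) for w in (1, 7, 15, 30)}
for w, dev in deviations.items():
    print(f"{w:>2}-day window: {dev:+.2f}%")
```

With real relay data one would plot these as the curves described above and pick the window with the least deviation; on this synthetic trend the 7-day window comfortably beats the 30-day one.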


Both @cryptocorn and I are willing to combine. I think we were first testing the waters to see if both ideas had support so that one action item did not unnecessarily drag down the other.

I do not understand the question. I only know the acronym SLA as “service level agreement,” but I can’t see how that fits the context of the question.


The rows of the preceding 30 days are included… just unhide the rows

My bad!! I copied and pasted from excel to google sheets in order to make it easier to share, and I assumed it pasted the formulas, whereas it only pasted the values.


I can do more points

Using a least-squares fit? Absolute value?


These are governance questions (just like agility), specifically around ‘change management’, and I needed to raise them while a change is being initiated. Maybe I should also raise them in the channels.

Please ignore them for now; they indeed don’t fit into the actual change you are proposing.


I was referring to the fact that you only tested one month; in the case of monthly updates, this means that only one correction was applied. We don’t know if that month was particularly bad for monthly adjustments or particularly good for weekly updates. Information from more months is needed.

Thanks, now I can see the hidden cells as well.
I see that what you are doing is measuring the overmint using the excess of relays directly.
You are averaging the daily deviations of the expected number of relays from the actual number of relays to obtain the mean deviation over the whole period (one month). So, the values you are presenting are the expected daily overmint in each of the different scenarios (1-, 7-, or 30-day updates using 30- or 7-day averages). Is this correct?

Since we are dealing with emissions, I think that the accumulated error would be enough. I think that the community is more interested in the excess (or deficiency) of minted POKT. In simple words: using method X or method Y, which one resulted in total minted POKT (inflation) closer to the objective?
Other metrics focus more on the shape of the inflation control, which is not as important to the community (IMHO), since its focus is on mid- to long-term goals.
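The distinction between the two metrics can be made concrete with a small sketch on hypothetical daily emission figures (the numbers are illustrative only): the accumulated error nets out over/under-minting across the period, while the mean absolute daily deviation measures the typical size of the daily miss regardless of sign.

```python
def total_deviation(actual, target):
    """Accumulated error: net over/under-mint across the whole period."""
    return sum(actual) / sum(target) - 1.0

def mean_abs_daily_deviation(actual, target):
    """Average size of the daily miss, ignoring sign."""
    return sum(abs(a - t) / t for a, t in zip(actual, target)) / len(target)

# Hypothetical daily emissions that overshoot, then undershoot, the target:
target = [100.0] * 10
actual = [112.5] * 5 + [87.5] * 5

print(total_deviation(actual, target))           # 0.0   (errors cancel out)
print(mean_abs_daily_deviation(actual, target))  # 0.125 (12.5% daily miss)
```

A strategy can thus look perfect on accumulated error while missing badly every single day; which metric matters depends on whether the community cares about total inflation or the day-to-day shape of emissions.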


Please see the revised link above to a new Google Sheet. Importing new data into the existing Google Sheet proved problematic, so I created a new one with a new link. The new sheet shows data for all of 2022 as well as the original October profile.

A 3×3 grid is created for [1-, 7-, 30-day] averaging periods × [daily, weekly, monthly] updates.

For each, three different measures of deviation are shown: aggregate overminting over the entire period, average daily overminting over the same period (unweighted), and L1 daily deviation over the same period.

Conclusion remains the same: weekly updates with trailing 7-day averaging are the best all-around strategy. Daily updates with trailing 7-day averaging are even better, but this is likely too taxing on PNF.

[From the 2022 data, a strategy of first-of-the-month updates using a last-day-of-previous-month 1-day average would have resulted in very little aggregate overminting over 2022. It is not recommended to draw any conclusion from this, as it is due to spikes in relays at month end in several of the months. Whether this is simply an artifact of randomness or constitutes a real effect is unknown.]


Thanks, now it is a much clearer picture.
I added the total minting deviation over the year, calculated using the total number of relays observed and the total number of relays expected. The results are in line with your conclusions. I think that this metric is better suited to answer the general question:

how much deviation from target inflation is expected in a year using each strategy?

Using the provided data (and this modified sheet) the summary would be:

Total minting deviation 2022        Update frequency
Averaging period                    daily    weekly   monthly
1 day                                1.3%     6.4%     1.4%
7 day                                2.3%     3.4%     7.7%
30 day                               6.3%     7.5%    11.4%

I agree with the overall conclusion: a 7-day averaging period seems best, and faster updates are preferred. While daily updates provide lower over-minting, the improvement compared to weekly updates is not worth the work overhead it requires.

edit: Corrected the aggregated periods, changed table to reflect new results


Your described methodology is exactly what I refer to as the “aggregate” or “weighted” overmint. My numbers compare total actual relays from Feb 1 through Dec 31, 2022 to the total expected relays over the same period. This is the same as what you have done, except that you count total actual relays from Jan 5, 2022 through Jan 3, 2023 while counting expected relays from Feb 1, 2022 through Jan 3, 2023. Counting actual relays, but not expected relays, from Jan 5-31 causes the discrepancy between your numbers and mine.
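The effect of mismatched accumulation periods on this metric is easy to demonstrate with illustrative numbers (a flat 10% daily overshoot, chosen for clarity):

```python
def aggregate_overmint_pct(actual, expected):
    """Total actual relays vs. total expected relays over the SAME days."""
    assert len(actual) == len(expected), "accumulation periods must match"
    return 100.0 * (sum(actual) / sum(expected) - 1.0)

# Hypothetical: a steady 10% daily overshoot across 330 days.
expected = [100.0] * 330
actual = [110.0] * 330
print(round(aggregate_overmint_pct(actual, expected), 1))  # 10.0

# Counting ~27 extra days of actual relays, without the matching
# expected relays, inflates the aggregate figure:
mismatched = 100.0 * (sum(actual + [110.0] * 27) / sum(expected) - 1.0)
print(round(mismatched, 1))  # 19.0
```

The extra actual-only days enter the numerator with no counterpart in the denominator, so the reported overmint rises even though the true daily overshoot never changed.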


I realize that you are right; both tables show the same results over slightly different time spans. I found the error in the aggregation period for the observed relays and corrected it.

The use of “weighted” and “unweighted” in the provided document seems misleading, as you are calculating the total overmint fraction and the average overmint fraction per day, respectively. These numbers have different meanings: the first is the observed overmint over an arbitrary time span (approx. a year); the second is the expected overmint on a single day.


I have eliminated the word “weighted” in the spreadsheet and replaced it with “aggregate.”
