Please note that, aside from raw data, this article is purely speculative and intended to provoke thought & conversation. It is not financial advice, and does not venture beyond the scope of research. Not all figures will be exact, as most of the math was derived backwards from current metrics.
TLDR:
- Nodes on the Internet Computer operate at a 100x+ inefficiency, with self-imposed limitations of 300 GB while having the capacity to store 30+ TB
- The Internet Computer burns 0.84% of what is rewarded to Node Providers on a monthly basis via Cycles burnt
- Of the 36 subnets, with a total storage capacity of 10.8 TiB, only 3.55 TiB is utilized, representing an inefficiency of 70%+
On September 11th 2023, I published the article "Node Provider Inflation Spiral" within the $ICP forums, which can be referenced below:
This conversation sparked great interest, earning acknowledgment from members of the DFINITY Foundation as a valid concern that should be addressed. Subsequently, it drew the interest of multiple Node Providers, most notably @DavidM & @dfisher,
who voiced multiple valid points & concerns, not only from the perspective of a Node Provider, but also from that of an individual attempting to safeguard the Internet Computer Network.
With this being said, a perspective repeatedly raised by vocal Node Providers is the importance of not correlating a subnet & its nodes' reward rate to total subnet usage.
This concern was raised because Node Providers cannot pick the subnet they're in, and are therefore exposed to the possibility of contributing to an underused subnet - earning lesser rewards in contrast to the previous system.
This again sparked my interest:
If Node Providers cannot contribute to the network at scale without fear of contributing to an unused subnet, is the network overburdened with Nodes, or are Cycles not being burnt proportional to the true cost of running the network?
The first question to address is whether unused subnets should be compensated - taking us down a difficult road to navigate.
On one hand, if a Subnet is not being used, it is not contributing to the cycle burn rate, and therefore only has the potential to increase inflation under the current reward scheme.
On the other hand, we have to consider why this inflation trade off is made. The Internet Computer is completely reliant on Subnets of Nodes to scale - if left without room to grow, there will come a time in which the network has to scramble to provide Nodes in time for dApps, which can take months, given Nodes are hosted in Data Centers.
As such, the line determining which Nodes should and should not be compensated is cloudier than presumed.
However, something that the Internet Computer Network as a whole has agreed on is that Node Providers delivering degraded or lesser services are deserving of reward slashing. I believe a similar system could be translated to unused Nodes & Subnets, although that's not what we're here to discuss.
From here, we can begin to determine the disparity between total Node Provider Reward Distributions & Cycles Burnt, in contrast to the network state over time.
Referencing the ICP Burn Chart, it can be noted that the Internet Computer has a cumulative burn of 136,822 ICP - of which 136,215 ICP comes from transaction burns (presumably burning conversion transactions), while 607 ICP comes from transaction fees.
Next, we can reference the Cycle Burn Chart for a more accurate gauge of the cumulative monthly burn of ICP over the last 3 months:
The cycle burn chart indicates that on average, 5.1B Cycles were burnt each second over the last 3 months, which equates to 440.64T Cycles burnt a day.
For simplicity's sake, we'll convert this to an ICP amount before extrapolating to a monthly basis.
This can be done by first converting the daily cycle burn to SDR (1T Cycles is pegged to 1 SDR), then applying the SDR-to-USD exchange rate to determine the daily fiat expenditure of the network.
440.64T Cycles / 1T Cycles per SDR = 440.64 SDR (× ~1.31 USD/SDR ≈ 581.17 USD)
This can then be converted to an ICP amount by utilizing a 90-day average ICP token price of $3.50.
581.17 USD / 3.50 USD per ICP ≈ 166.05 ICP
Now that we have determined the network burns roughly 166 ICP per day on a 3-month average, we can extrapolate this to estimate a monthly network burn of roughly 4,957 ICP.
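For readers who want to check the math, here is a minimal Python sketch of the conversion above. The SDR/USD rate, ICP price, and 30-day month are the article's approximations, so the output will differ slightly from the quoted figures.

```python
# Sketch of the cycle-burn-to-ICP conversion, using the article's 3-month averages.
CYCLES_PER_SECOND = 5.1e9   # average cycle burn rate over the last 3 months
SECONDS_PER_DAY = 86_400
CYCLES_PER_SDR = 1e12       # 1 trillion Cycles is pegged to 1 SDR
USD_PER_SDR = 1.31          # approximate SDR -> USD rate (assumed)
ICP_PRICE_USD = 3.50        # 90-day average ICP price (assumed)
DAYS_PER_MONTH = 30         # assumed for the monthly extrapolation

cycles_per_day = CYCLES_PER_SECOND * SECONDS_PER_DAY  # ~440.64T Cycles
sdr_per_day = cycles_per_day / CYCLES_PER_SDR         # ~440.64 SDR
usd_per_day = sdr_per_day * USD_PER_SDR               # ~$577 at a 1.31 rate (article quotes $581.17)
icp_per_day = usd_per_day / ICP_PRICE_USD             # ~165 ICP with these assumptions
icp_per_month = icp_per_day * DAYS_PER_MONTH          # ~4,950 ICP (article quotes ~4,957)

print(f"{cycles_per_day / 1e12:.2f}T cycles/day")
print(f"~{icp_per_day:.1f} ICP/day, ~{icp_per_month:,.0f} ICP/month")
```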
In contrast, the previous article "Node Provider Inflation Spiral" showed that the Internet Computer Protocol minted 500k+ ICP (on the lower end) each month during this timeframe, reaching an all-time-high distribution of nearly 600k ICP tokens last month.
Utilizing August's Node Provider Reward Distribution data, we can determine that of the 583,577 ICP minted to compensate Node Providers, only 0.84% was correspondingly burnt via Cycles, showcasing a disparity of 99.16% between Node Provider payouts & ICP burnt via cycles.
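As a sanity check, the disparity can be reproduced from the two figures above; since the monthly burn is the estimate from the previous section, the ratio lands near, rather than exactly on, 0.84%.

```python
# Sketch of the payout-vs-burn disparity, using the August figures quoted above.
minted_for_node_providers = 583_577  # ICP minted for August Node Provider rewards
burned_via_cycles = 4_957            # estimated monthly cycle burn in ICP (from above)

burn_ratio = burned_via_cycles / minted_for_node_providers
print(f"Burnt vs. minted: {burn_ratio:.2%}")      # ~0.85%
print(f"Disparity:        {1 - burn_ratio:.2%}")  # ~99.15%
```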
Turning our attention back to the Network State, it can be observed within the following article that Gen1 Node Machines have a storage capacity of 28.8 TB or 30.72 TB.
Alternatively, Gen2 Node Machines have a storage capacity of 32 TB.
In contrast to their abundant storage potential, according to public documentation & official forums, subnets are seemingly limited to a capacity of 300 GB.
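That gap is where the TLDR's "100x+" figure comes from; a quick sketch, assuming the 300 GB limit and the larger Gen1 capacity quoted above:

```python
# Sketch of the "100x+" headroom claim: raw node storage vs. the 300 GB subnet state limit.
node_storage_gb = 30_720  # ~30.72 TB Gen1 Node Machine
subnet_limit_gb = 300     # self-imposed subnet state limit

print(f"~{node_storage_gb / subnet_limit_gb:.0f}x storage headroom per node")  # ~102x
```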
With this information laid out, we can continue to the Network State:
As of present, the current Network State is 3.55 TiB used of a total 10.8 TiB across 36 subnets - which equates to roughly 0.098 TiB per Subnet (the math is not so simple, but this is useful to depict network load), or roughly 30% of each Subnet's maximum capacity under the current self-imposed limitations.
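A minimal sketch of those utilization figures, assuming the capacity is spread evenly across subnets (in practice per-subnet limits and replication vary):

```python
# Sketch of the per-subnet utilization figures above, assuming an even split
# of the 10.8 TiB capacity across 36 subnets.
total_used_tib = 3.55
total_capacity_tib = 10.8
subnet_count = 36

used_per_subnet = total_used_tib / subnet_count          # ~0.1 TiB
capacity_per_subnet = total_capacity_tib / subnet_count  # ~0.3 TiB (the 300 GB limit)
utilization = used_per_subnet / capacity_per_subnet      # ~33%

print(f"~{used_per_subnet:.3f} TiB used per subnet "
      f"({utilization:.0%} of the self-imposed limit)")
```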
This goes to show that the issue is not as simple as "are we overpaying or over-onboarding Node Providers?" - it's a combination of both, with Cycles burning less than 1% of what Node Providers are compensated monthly, while Nodes operate at a seemingly 70% inefficiency.
Which raises the question:
Is overcompensating Node Providers while over-onboarding worth the burden it's brought upon the network?