Forgive my ignorance, but I just don't know where to find this info; a quick search of this forum yields nothing about node compensation.
AFAIK nodes are paid in newly created ICP, minted based on the exchange rate, and their compensation is denominated in some sort of USD or SDR cost. Please correct me if I am mistaken.
Now that we are entering a true crypto-winter bear market and the ICP price keeps dropping like a bag of ….
At what price point does the inflation caused by the daily node compensation bill start to become noticeable?
What is that compensation at now? Is it 1%, 5%, 10%?
At what price point does it start to hit 20%, 50%, 100% inflation?
Are we able to shut down nodes to save the IC from encountering an inflationary death spiral? Or is there already some mechanism to address what could become a real threat in the next 12-18 months of crypto winter?
Recent node provider rewards have been around 100,000 ICP per month in total. As you write, that monthly amount in ICP will increase while the ICP price is dropping, but at the moment we’re talking about something like 0.2% inflation per year from those rewards, compared to some 8% inflation caused by voting rewards – which will decrease to about 5% in a few years. (Note that there are also deflationary measures like burning ICP for computation or ledger transfers, but admittedly they’re on a much lower scale for now.) I have not done any model computation for when the inflation for node provider rewards leads to a death spiral, but given the quick estimates above I think it is more of a theoretical problem than an actual threat.
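A quick back-of-envelope check of those percentages (illustrative Python; the ~100,000 ICP/month figure is from the post above, while the total supply of roughly 500 million ICP is my own order-of-magnitude assumption):

```python
# Rough annualized inflation from node provider rewards (illustrative).
monthly_node_rewards_icp = 100_000       # from the post above
total_supply_icp = 500_000_000           # assumed order of magnitude

annual_node_rewards = monthly_node_rewards_icp * 12
node_inflation_pct = 100 * annual_node_rewards / total_supply_icp
print(f"Node reward inflation: ~{node_inflation_pct:.2f}% per year")
```

That lands at about 0.24% per year, consistent with the "something like 0.2%" estimate.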
Assuming (rough order of magnitude) 500 nodes currently, that’s $4000 USD per month per node.
Assuming we see a 50% price reduction from $20 to $10 per ICP (currently at $12.75), that would translate into doling out 200,000 ICP.
If the price does dip to $5 per ICP, we would be doling out 400,000 ICP to node providers. Further, if we increase the number of nodes to 1300 (as is planned) by EOY and the price stays at $5, we would be pushing 1M ICP to node providers.
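The mechanism driving those numbers is that rewards are (roughly) fixed in fiat terms, so ICP minted scales inversely with price. A minimal sketch, assuming the ~$2M monthly bill (500 nodes × $4,000) estimated above:

```python
# ICP minted per month at different ICP prices, assuming a fixed
# monthly reward bill of ~$2M USD (500 nodes x $4,000, per the post).
monthly_usd_bill = 2_000_000

for price in (20, 10, 5):
    minted = monthly_usd_bill / price
    print(f"ICP at ${price}: ~{minted:,.0f} ICP minted per month")
```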
Since node providers HAVE to provide for their opex, they WILL BE LIQUIDATING 1M ICP. This is a death spiral if historical data is any indication. We just came out of Zone-1 of @Kyle_Langham's zones and we may go into it again.
The two mitigating future proposals that will help are:
(A) lowering the rewards being provided per node. $4000 per month is very generous imo.
(B) stop the additional nodes being onboarded right now. There is ZERO evidence that we are using even 10% of the compute capability of installed capacity (we are NOT burning 10,000 ICP per month for compute needs).
Does anyone know how much the per-month operating expenses are for a node provider on average? I wonder how much “profit” they are taking from the current ICP remuneration scheme.
In general, I do worry about the NNS minting far more ICP than it burns (at least for now). It appears that we will increase the number of IC nodes from ~500 to ~1300 by the end of the year. Is that because we expect more cycle demand throughout the year, or simply because that was the plan from the start? Does it make sense to instead tie node onboarding (or even voting rewards) with the number of cycles burned?
Even though node rewards are small relative to voting rewards, there seems to be a material difference in their effect on selling pressure: as @mparikh pointed out, node providers must cover their operating expenses and it’s likely at least some of them sell ICP to do so. Thus, it might be sensible to rethink how we set those node rewards.
On the plan to go from 500 nodes to 1300 nodes: I am, bluntly speaking, AGHAST, because seemingly no consideration has been given to load predictions.
A doubling (or more) of capacity WITH ZERO EVIDENCE of increased compute needs (backed by detailed calculations) would be LAUGHED OUT at the manager level, let alone reach the executive level, in a normal setting.
This seems like a very expensive science project.
The team REALLY needs to justify the need to the community to MORE than double the number of nodes.
Typically when you buy an expensive server, you take out a loan which causes you to pay interest on the loan.
So my $30,000 server amortized over 3 years ≈ $833 per month.
Now the interest on that amortized amount ≈ 0.1 × 833 = $83,
bumping the monthly payment to ≈ $916/month (approx. $1000).
This has nothing to do with rewards. Merely the cost of acquiring hardware & repaying the loan on a monthly basis. Hth
Edit: this is exactly why all profitable node providers MUST IMMEDIATELY liquidate their tokens (ICP or otherwise)… otherwise they do not have a viable business.
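The rough loan math above can be cross-checked against the standard annuity payment formula (a sketch; the $30,000 principal, 10% annual rate, and 36-month term are taken from the post, and the formula itself is the standard amortized-loan payment):

```python
# Standard annuity (amortized loan) payment: P * r / (1 - (1 + r)**-n)
# where P = principal, r = monthly rate, n = number of monthly payments.
principal = 30_000      # server cost, from the post
annual_rate = 0.10      # 10% interest, as assumed above
n_months = 36           # 3-year amortization

r = annual_rate / 12
payment = principal * r / (1 - (1 + r) ** -n_months)
print(f"Monthly payment: ${payment:,.0f}")
```

This gives roughly $968/month, close to the ~$916–$1000 back-of-envelope figure above.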
(A) lowering the rewards being provided per node. $4000 per month is very generous imo.
Actually, based on this I think that estimate may be a bit high.
It seems that monthly USD rewards per node range from $1169 to $3248 depending on where you operate the node. Your monthly operating expenses of $1500 may be too little, too much, or just enough… depending on the region of the world you run the node in.
However, recall that IC node providers have little wiggle room on the type of hardware they source, so the cost of acquiring a server remains within a narrow range.
Because virtually all of the nodes are hosted in the “western” world, the expense of hosting it would also not vary drastically.
This is not consistent with 100,000 ICP being provided at an estimated $20 per ICP to all (500) nodes, i.e. $2M / 500 = $4,000… unless I am missing something.
Edit: unless we are also paying the 800 idle nodes, which would be even crazier, because in that case we would be paying for capacity that we already had but had not deployed.
Hmm, I’m getting roughly the same numbers as you. (Actually, it’s a little lower since the price of ICP was closer to $17 with the most recent April node provider payments, I believe.)
The IC is already rewarding (most of) the 1’300 nodes we plan to add until EOY, actually all except for DFINITY’s that are not yet deployed. So the rewards in XDR will actually increase only slightly until EOY. (The reason for this is bootstrapping: the IC needed a guaranteed supply of nodes in the beginning; the nodes all exist, they just haven’t been onboarded.)
The rewards are rather generous at the moment, which was necessary initially because the NPs were taking significant risk by investing in a project before launch. (They’re not outrageously generous, they just factor in the significant risk of investment.) So I expect lower rewards for future generations of nodes, where risk is considerably lower.
This is the background for the numbers I gave above, and for me not being worried about this at the moment, despite the fact that theoretically this spiral exists.
(i) If we had already purchased the capability of 1300 nodes, why didn't we deploy all 1300 nodes? That seems a tremendous waste of money, given that we provided generous rewards for nodes to essentially do nothing for a year. Specifically, what does it mean for "the nodes to exist"? That they are simply racked in the data center, not powered on?
(ii) Now that we are onboarding the legacy 800 nodes, I am assuming that they will NOT conform to AMD Milan. So when we say "[quote=“Luis, post:68, topic:9170”]
to ensure that the network is growing with nodes supporting SEV-SNP attestation.
[/quote]", are we to presume that that growth will be in addition to the 1300 nodes?
(iii) For how long are we liable for the cost of first-generation nodes? These are the MOST expensive (counter to Moore's law). Is there a plan to phase them out? Given Moore's law, we should be replacing these nodes with next-generation nodes.
The first thing we need is clarity about the figures.
Official clarification of the node rewards including QoS adjustments in a clear way.
Dashboard and spreadsheet showing actual node rewards per day on a per subnet and per node basis so they can be analysed.
Dashboard and spreadsheet showing actual cycle and ICP burn per day rather than aggregated. This should be correlated with usage statistics.
The next thing we need is a model of burn vs. usage, as bounded by subnet capacity.
That is: at what level of usage does cycle burn exceed node provider rewards for a single subnet? (Model different typical applications.)
Is it feasible for a subnet to become net profitable before performance degrades?
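The break-even question above can be sketched numerically. All figures here are illustrative assumptions except the conversion rate: 13 nodes per subnet and ~2,000 XDR of monthly rewards per node are made-up round numbers, while 1 XDR = 1 trillion cycles is the protocol's conversion rate:

```python
# Break-even: monthly cycle burn (valued in XDR) must exceed the
# subnet's total node rewards (in XDR). Figures are illustrative.
NODES_PER_SUBNET = 13                 # assumed subnet size
REWARD_XDR_PER_NODE = 2_000           # assumed monthly reward per node, XDR
CYCLES_PER_XDR = 1_000_000_000_000    # 1 XDR = 1 trillion cycles

monthly_reward_xdr = NODES_PER_SUBNET * REWARD_XDR_PER_NODE
breakeven_cycles = monthly_reward_xdr * CYCLES_PER_XDR
print(f"Subnet is 'net profitable' once it burns more than "
      f"{breakeven_cycles:.2e} cycles/month ({monthly_reward_xdr:,} XDR)")
```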
Then we should adjust economic parameters and manage capacity to try to ensure that subnets are "net profitable" most of the time. By this I mean: cycles burned denominated in SDR > node rewards denominated in SDR.
(1) adjust pricing
Change the pricing of computation, messages, storage and so on in order to ensure that subnets can be net profitable before they are at capacity, while still offering good value to developers. (This can be done in conjunction with a general review of pricing aimed at preventing spam and DoS attacks. Pricing could also be non-linear, with prices increasing as a subnet becomes congested.)
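One shape the non-linear idea above could take, purely as an illustration (the base price, the exponent, and the function itself are made-up parameters for the sketch, not proposed values):

```python
# Illustrative congestion pricing: cost per unit rises as subnet
# utilization approaches capacity. The exponent k is a made-up
# tuning parameter, not a proposed value.
def congested_price(base_price: float, utilization: float, k: float = 4.0) -> float:
    """Return the per-unit price for a subnet at the given utilization in [0, 1)."""
    assert 0 <= utilization < 1
    return base_price / (1 - utilization) ** (1 / k)

for u in (0.1, 0.5, 0.9):
    print(f"utilization {u:.0%}: price multiplier x{congested_price(1.0, u):.2f}")
```

The point of such a curve is that prices stay near the base rate under normal load and only climb steeply as the subnet nears saturation.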
(2) Have a rational approach to managing capacity increases and decreases
New subnets should be added when they are needed to cope with usage increases and spikes. However we should ensure that existing subnets are mostly “net profitable” and that burn exceeds node rewards on average.
Consider 'fee market' type approaches to managing short-term peaks in demand. That is, it is better for costs to increase during a short-term spike than for a subnet to stop functioning altogether.
(3) Consider adjusting how nodes are paid
Consider making costs somewhat variable by aligning rewards more closely to usage and QoS performance.
Consider making payments continuous, based on an automatic in-protocol mechanism, so that they use the same exchange rate as cycles, rather than as a lump-sum proposal based on the average ICP price.
Consider adjusting overall incentives.
Note the dependency between (1) and (2): subnets should be profitable in normal circumstances, and certainly well before the threshold for spinning up a new subnet is reached. However, onboarding nodes takes time, so the threshold at which subnets are added should be chosen so that there is sufficient time to onboard more nodes and spin up a new subnet. We should therefore consider ways to shorten the lead time and make it easier for nodes to be "on standby", as well as ways of forecasting future usage.
Finally, we should come up with a credible plan for the network to become deflationary "ultrasound money". That is, we should develop a model of when usage and subnet numbers will rise sufficiently for the network as a whole to become deflationary, with and without taking NNS rewards into account, and have a strategy for achieving this. This may include reducing NNS rewards, but it would likely be hard to get that past the NNS, even though it would be better from a tax point of view, as capital gains are typically taxed less than income.