Node Provider Inflation Spiral

These are very important concerns that need to be carefully considered. We haven’t heard a lot of comments along these lines, but I’m sure a lot of us are thinking it. Thanks for raising them in this discussion.

I am in favor of a short-term cap to address an existential threat. Discussion is good on the forum, and is perhaps best served by keeping options simple and direct, in terms of problem/solution.

And while there is wisdom in allowing the ICP community to take the lead on proposals, even with multiple threads so as to keep each proposer's options distinct, it may also be a good idea for Dfinity to soon draft and present their own series of options.

1 Like

Cross-posting due to relevance:

TLDR: We burn 0.84% of what we mint, while utilizing 3.55 TiB of the 10.8 TiB network state limit (while a single subnet has the storage potential of 30+ TiB, meaning the nodes are completely inefficient and unoptimized).

But do we want to be near max storage capacity? I can see possible downsides to running at maximum storage utilization, such as increased performance degradation. The limit could have been set low to extend the life of the servers, or established at launch so the initial servers could meet network demand during the months it takes to onboard new nodes, ensuring network stability in the early phases by keeping load on the more reliable servers low. I'm not a node provider, so if someone could confirm whether I'm on track here, that would be great.

1 Like

You are correct: it takes 5-6 months to onboard a Node Provider, so it effectively takes ICP half a year to scale when needed, whether that is a single server or ten. Due to the chip shortage, vendors need about 2 months just to build and deliver the custom-spec servers. So @Yeenoghu, you are correct, we should never be at capacity.

2 Likes

@Accumulating.icp There is a lot of interesting conversation going on in this thread. It is crucial to understand the NP onboarding process. Since early this year Dfinity has been working toward true decentralization by expanding into the Eastern part of the world, and new NPs were onboarded as part of that. The process takes around 4-5 months to complete, and roughly half of that time goes to finding vendors and ordering custom server hardware, assuming the chip shortage stays the same and doesn't get worse. This is said well in this comment:

So basically it takes half a year for Dfinity to scale if there is a surge of usage. But I totally agree we should not bite off more than we can chew or run at capacity.

1 Like

I don't think it's exactly optimal for us to be "near" maximum storage capacity either, although when you consider the multitude of inefficiencies, a compounding effect begins to occur.

The issue starts with how the nodes are utilized: they're currently limited to 300 GB of state out of ~30 TB of capacity, though since raising the topic I've heard 700 GB is in discussion (I'm still exploring this further). That is roughly 1/100th of what these nodes are capable of, although I understand another small fraction of the storage is used for the software that runs the nodes, replication of chain state, etc.

This 100x inefficiency is then compounded by the fact that we're only utilizing 32.8% (3.55 TiB of 10.8 TiB) of the self-imposed network state limit, while continuously onboarding new nodes.

I recognize it takes time to get these nodes operational, but there are 660 nodes waiting to be added to subnets at this very second. That means we're carrying ~70% unused state capacity while more nodes are waiting to be added to subnets than the entire network itself currently contains.

These inefficiencies are then compounded into the burn-rate-to-node-provider-reward ratio: we're burning ~5,000 ICP a month via cycles, while minting ~600,000 ICP a month to compensate node providers, which translates to burning ~0.8% of what is paid out to Node Providers.
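For anyone who wants to sanity-check those ratios, here's a minimal Python sketch that just recomputes them from the approximate figures quoted in this thread (rough community estimates, not on-chain data):

```python
# Back-of-the-envelope check of the ratios quoted above.
# All inputs are the approximate figures from this thread, not on-chain data.

node_limit_gb = 300.0          # current per-node state limit
node_capacity_gb = 30_000.0    # ~30 TB of physical capacity per node

state_used_tib = 3.55          # replicated state currently in use
state_limit_tib = 10.8         # self-imposed network state limit

icp_burned_monthly = 5_000     # ~ICP burned via cycles per month
icp_minted_monthly = 600_000   # ~ICP minted for node provider rewards per month

hardware_used = node_limit_gb / node_capacity_gb        # ~0.01  -> the "1/100th" figure
state_used = state_used_tib / state_limit_tib           # ~0.33  -> the ~32.8% figure
burn_to_mint = icp_burned_monthly / icp_minted_monthly  # ~0.008 -> the ~0.8% figure

print(f"per-node capacity in use:    {hardware_used:.1%}")
print(f"state limit in use:          {state_used:.1%}")
print(f"burned vs. minted per month: {burn_to_mint:.2%}")
```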

6 Likes

I agree that the inefficiency is there at this point in time, compounded by decentralization efforts. But looking into the future, isn't this scalability headroom that avoids the need to onboard new providers once our goal of decentralizing NPs is met? Or even a buffer for utilization surges, to prevent increased degradation of the servers when nodes can't be onboarded fast enough?

This might be off topic, but can we scale down? Scaling down is a big part of being scalable.

3 Likes

Good point. First we need to understand two terms: "cloud" and "bare metal". ICP has been set up to use bare metal. As you have seen, Google allows apps to scale up and down, saving "cost", but does it really? Cloud environments are very complex on many layers, and from what I know, achieving that would be a long-term goal for Dfinity. Looking at Google's pricing structure, a configuration similar to a Gen2 node on Google Cloud costs upwards of 12,000 USD a month. That is almost 4x more than the highest-paid NP receives based on location (not considering opex). So autoscaling comes at a cost, and it is high per CPU and per unit of RAM. If we are heading in that direction, a big architectural change would be required on both the software and hardware layers; it's up to Dfinity. Then again, consider the pros of the bare-metal direction: cost savings, less management overhead, and easier debugging of issues. The con: it is very hard to scale up or down.
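As a rough illustration of that comparison (the 12,000 USD/month cloud figure and the "almost 4x" gap come from the post above; the node provider reward is back-calculated from them and is only an assumption, opex excluded):

```python
# Illustrative cloud vs. bare-metal comparison using the figures from this post.
# The NP reward is back-calculated from the "almost 4x" claim, not an official number.

cloud_monthly_usd = 12_000                      # ~Gen2-equivalent configuration on Google Cloud
np_reward_monthly_usd = cloud_monthly_usd / 4   # "almost 4x" lower -> roughly 3,000 USD

print(f"cloud premium per node:   ~{cloud_monthly_usd / np_reward_monthly_usd:.0f}x")
print(f"extra cost per node/year: ~${(cloud_monthly_usd - np_reward_monthly_usd) * 12:,.0f}")
```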

2 Likes

Currently we are paying node providers for more computation than is actually used.

Since node providers cannot ramp up or down quickly, this cost appears to be mostly fixed in the medium term.

Instead, we could consider lowering the cost of computation (cycles) dynamically until the computation budget is used up. This should attract developers, since storage and computation would become dramatically cheaper than AWS and the alternatives.

TLDR:

We are paying a fixed cost for node providers anyway. Why not use it to lower prices for developers, instead of the money going to data centers and hardware manufacturers for capacity that sits unused? Let the market decide.
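To make the idea a bit more concrete, here's a purely hypothetical sketch of such a mechanism; the function name, constants, and pricing curve are my own assumptions for illustration, not anything that exists in the protocol:

```python
# Hypothetical sketch of dynamic cycle pricing: since node costs are roughly
# fixed, discount computation while the paid-for capacity is under-used.
# Names, constants, and the pricing curve are illustrative assumptions only.

BASE_CYCLE_PRICE = 1.0      # normalized baseline price per unit of computation
MIN_PRICE_FACTOR = 0.25     # never discount below 25% of baseline

def discounted_cycle_price(used_capacity: float, paid_capacity: float) -> float:
    """Scale the cycle price with utilization of the already-funded capacity:
    full utilization restores the baseline price, low utilization discounts it."""
    if paid_capacity <= 0:
        return BASE_CYCLE_PRICE
    utilization = min(used_capacity / paid_capacity, 1.0)
    return BASE_CYCLE_PRICE * max(utilization, MIN_PRICE_FACTOR)

# Example: at today's ~33% state utilization (3.55 of 10.8 TiB),
# computation would cost roughly a third of the baseline price.
print(discounted_cycle_price(3.55, 10.8))   # ~0.33
```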

1 Like

When they designed the tokenomics, they assumed there would be enough users on the chain burning ICP to offset the inflation. Two years after launch, it's clear there are almost zero users on the chain burning the inflationary supply. Therefore the tokenomics are wrong for this project and must be changed immediately.

Besides that, you just provided a blueprint for a malicious actor to intentionally nuke this project. I hope you are aware of that.

You would think there would be a dynamic pricing mechanism, but apparently there isn't… lol, except maybe the ICP price dumping.

DW, it doesn't take a genius-level IQ to figure out how to nuke this project. It's already well on its way to happening…

Hi Kyle, your work on ICP has been great so far! What do you think of the fact that Dfinity holds around 60 million liquid tokens that could be sold at any time? If, for example, the market cap returns to $10 billion, that would equate to over $1 billion of liquid tokens. Does the foundation really need that much money to run? Have they considered burns to increase trust and decentralisation?

1 Like

Just want to say my named neuron is voting ‘no’ on all node-related proposals until some solution is proposed and passed. My votes and the votes of my followers will make no difference, but I encourage any and all neurons who believe a solution is needed to vote ‘no’ with me to add pressure.

Thank you.

2 Likes

Completely agreed.

For more info on why this is important, please refer to the following article:

We can't just say no to people who have already invested, and we can't blindly add nodes in the same locations over and over. My suggestion is that ordering hardware should only be encouraged after the 10th step:

https://wiki.internetcomputer.org/wiki/Node_Provider_Onboarding

"Node operator record", which defines the number of nodes and the datacenter. This is a point where the community can control whether there is a reason to add nodes or not.

1 Like

I too understand your frustration; it would be wise to find mutual ground that protects both NPs and ICP investors. There is no point in adding more nodes to a country or location that already has nodes or has upcoming nodes; there could be a waiting list for such cases. On the other hand, it would be unethical to reject NPs who have already shipped hardware. So let's take a deeper look at the NP onboarding process and at the point where the community grants an allocation for nodes, which can be seen in step 10 of the following link:

https://wiki.internetcomputer.org/wiki/Node_Provider_Onboarding#10._Create_a_node_operator_record

This step defines the location and number of nodes. If the proposal is accepted, the NP will usually order hardware, which is very costly. My suggestion is that this is where the community can decide whether that number of nodes is needed in that location. Dfinity has clearly explained the process milestones here too:
https://wiki.internetcomputer.org/wiki/Node_Provider_Roadmap

So don't you think this is what we should do in the short term? We should be informed, smart, and strategic.

3 Likes

If Dfinity sees a lot of votes to reject node hardware, it'll light a fire under their @ss. They'll do something before a node is actually rejected. If they don't, it's kind of like going on strike: we all suffer, but hopefully we can come to an agreement.

2 Likes