It’s true that a contraction of supply does not by itself increase price, and that an expansion of supply does not by itself decrease price. As an obvious example, the Federal Reserve is presently withdrawing US dollars from the global economy, yet there is still inflation in USD terms.
But if the Federal Reserve were still expanding the money supply, we can be sure inflation would be higher. There is definitely a relationship between changes in supply and inflation.
Anyway, just a thought. I appreciate that this conversation is taking place and that Dfinity is listening.
If we are creating this inflationary spiral by onboarding new NPs for the sake of decentralization, then as we assign new nodes to subnets to increase their decentralization, any nodes removed from a subnet that CANNOT be reallocated to decentralize another subnet should be removed completely. This would help mitigate the inflationary pressure that onboarding for decentralization creates.
Suggestion #1
As an early observer and participant, I’ve witnessed ICP’s growth and its journey in fostering decentralization. Last year, there was a noticeable lack of NPs, especially from non-Western regions. A true sense of decentralization seemed elusive, with the majority of NPs originating from the Western world, predominantly the USA. Dfinity’s decision to offer rewards based on geographic location was a commendable one, and it led many in the Eastern parts of the world to invest in ICP nodes. The cost usually starts around 100,000 USD and above. Opex for connectivity in the Eastern region is significantly high and comes with long-term commitments, so NPs who have already deployed, or are deploying now, have signed long-term contracts with connectivity partners. My suggestion is therefore to craft a short-term strategy that balances these constraints.
Suggestion #2
From the perspective of a node provider, the current hardware and data center requirements set by ICP are spot-on. They shouldn’t be altered. Lowering the bar, say by allowing any standard computer to participate, risks compromising system uptime and reliability. This is a key reason why many decentralized computing platforms haven’t been sustainable. ICP stands tall, but we must remember that it competes with industry giants like Amazon, Google, and Microsoft in the cloud sector. ICP’s decision to anchor in Tier 3 data centers is a strategic masterstroke. Non-enterprise hardware should be avoided to maintain the platform’s integrity.
Suggestion #3
Drawing from my experience as a software service provider, I’d urge Dfinity to step up its marketing efforts. There’s a ripe opportunity to reach out to SaaS companies (software companies that build software for clients as a service), encouraging them to develop and deploy on ICP. While ICP offers brilliantly simple tools for deploying static sites, Vue, HTML, Angular, and more, there’s still a considerable gap when we compare deployments on platforms like Firebase to ICP.
This is a good point that is important to remember. Diversity of node providers helps advance decentralization, and there was a big push to onboard node providers from underrepresented areas of the world.
This is a common request by the community. I think it’s perfectly valid for the community to want DFINITY to increase marketing strategies. I also wonder if DFINITY is constrained in ways that we don’t often recognize as a community. For example, DFINITY is a non-profit organization that stood up a crypto token that many people want to believe is a security or could be designated as a security. If they apply significant resources for marketing and promotions, especially if it moves ICP price, then I wonder if it will have a negative impact on their non-profit status or government rulings on the security status. It seems to me that we need other organizations to fulfill a role for marketing of ICP. I’d like to see the community support multiple large and professional organizations that can fill this role.
This is a great thread and it has raised a huge number of interrelated issues. I’ve been thinking about this a ton over the past week and reading through all of the responses. I have also gone back to look at currency hyperinflation - what causes it and what happens.
My reaction after having thought about it is that there are a ton of things to solve, but the issue that was raised was hyperinflation and the idea of an existential risk caused by a collapse in token price. The answer to that problem is to cap the inflation rate (or defer rewards) and break the dependence on price, and with it the idea of a spiral. We don’t actually know what level causes a runaway problem, but looking at other examples, something like 12% per annum and 1% per month might be a reasonable guess for illustrative purposes.
I haven’t checked Tromix’s numbers (in post number 5), but something like $1.25 per ICP is where the problem occurs. At that point, monthly NP rewards would be capped/deferred. If ICP keeps dropping beyond that level, at some point maturity might take a portion of the impact.
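To make the mechanic concrete, here is a minimal sketch of a cap-and-defer rule. Everything in it is an illustrative assumption on my part (the 1% cap, the supply, the fiat reward figure, and the function name), not actual NNS parameters:

```python
# Sketch of a capped/deferred node-provider reward schedule.
# All figures are illustrative assumptions, not actual NNS parameters.

MONTHLY_CAP_RATE = 0.01  # hypothetical cap: 1% of total supply per month

def mint_with_cap(total_supply, usd_rewards_due, icp_price, deferred_icp):
    """Rewards are fixed in fiat terms and minted as ICP at market price;
    anything owed above the monthly cap is deferred instead of minted."""
    owed = usd_rewards_due / icp_price + deferred_icp
    cap = total_supply * MONTHLY_CAP_RATE
    minted = min(owed, cap)
    return minted, owed - minted  # (ICP minted now, ICP carried forward)

# As the price falls, the same fiat obligation mints more ICP,
# until the cap binds and the excess is deferred.
for price in (5.00, 1.25, 0.25):
    minted, deferred = mint_with_cap(
        total_supply=520_000_000, usd_rewards_due=2_000_000,
        icp_price=price, deferred_icp=0)
    print(f"price ${price:.2f}: mint {minted:,.0f} ICP, defer {deferred:,.0f} ICP")
```

The point is simply that below some price the mint amount stops growing, so the feedback loop between price and issuance is severed.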
One element that has not been extensively discussed in this thread is the issue of trust and community confidence. Dfinity/ICP has to be one of the most ethical and trustworthy networks out there, with some of the best tech and technical talent. Yet the network has been under siege from the beginning (FTX, lawfare attacks, manipulators), and it hurt the launch a lot. I am not sure that many of the actions that have been discussed would actually reduce sell-side pressure; on the contrary, they could dramatically accelerate it. We keep ignoring the fact that there are something like 250 million unlocked tokens, and if you diminish trust in the network and people wash their hands of ICP, that will hurt the network more than the inflation issue we are discussing. So things like locking maturity and canceling rewards need a lot more thought, and they really aren’t part of the issue this thread was launched to discuss.
I think most stakeholders and fans of the IC can get their head around trying to take the idea of an accelerating downward spiral off the table (who wants that other than people that hate the IC?). So, I would propose that we focus on an inflation cap.
These are very important concerns that need to be carefully considered. We haven’t heard a lot of comments along these lines, but I’m sure a lot of us are thinking it. Thanks for raising them in this discussion.
I am in favor of a short-term cap to address an existential threat. Discussion in the forum is good, and it is perhaps best served by keeping options simple and direct, in terms of problem/solution.
And while there is wisdom in allowing the ICP community to take the lead on proposals, even with multiple threads so as to keep each proposer’s options distinct, it may also be a good idea for Dfinity to soon draft its own series of options and present them.
TLDR: We burn 0.84% of what we mint, while utilizing 3.55 TiB of the 10.8 TiB limit (even though a single subnet has the storage potential of 30+ TiB, meaning the nodes are deeply inefficient and unoptimized).
But do we want to be near max storage capacity? I can see a possible downside to running at max storage utilization, such as increased performance degradation. The limit could have been set low to extend the life of the servers. It could also have been established at launch so that the initial servers could meet network demand during the months it takes to onboard new nodes, ensuring network stability in the early phases by keeping the load on reliable servers low. I’m not a node provider, so if someone could confirm whether I’m on track here, that would be great.
You are correct: it takes 5 to 6 months to onboard a Node Provider. Basically, it takes ICP half a year to scale if needed, whether that means a single server or ten. Due to shortages, vendors need about 2 months to build and deliver the custom-spec servers. So @Yeenoghu, you are correct, we should never be at capacity.
@Accumulating.icp There is a lot of interesting conversation going on in this thread. It is crucial to understand the NP onboarding process. Since early this year, Dfinity has been working toward true decentralization and has expanded into the Eastern part of the world, onboarding new NPs there. The process takes around 4 to 5 months to complete, and roughly half of that time goes to finding vendors and ordering custom-built server hardware. That is assuming the chip shortage stays as it is and doesn’t get worse. This is said well in this comment.
So basically it takes half a year for Dfinity to scale if there is a surge in usage. But I totally agree that we should not bite off more than we can chew or run at capacity.
I don’t think it’s exactly optimal for us to be near maximum storage capacity, but when you consider the multitude of inefficiencies, a compounding effect begins to occur.
The issue starts with how nodes are provisioned: they’re currently limited to 300 GB of ~30 TB, although since raising the topic I’ve heard 700 GB is under discussion, which I’m still exploring further. This translates to 1/100th of what these nodes are capable of, although I understand another small fraction of storage is used by the programs that run the nodes, replication of chain state, etc.
This 100x inefficiency is then compounded by the fact that we’re only utilizing 32.8% (3.55 TiB of 10.8 TiB) of that self-imposed limit on network state, while continuously onboarding new nodes.
I recognize it takes time to get these nodes operational, but there are 660 nodes waiting to be added to subnets at this very second. That means we have a ~70% inefficiency while more nodes are waiting to be added to subnets than the entire network currently runs.
These inefficiencies are then compounded in the burn-to-reward ratio: we’re burning ~5,000 ICP a month via cycles, while minting ~600,000 ICP a month to compensate node providers. That translates to ~0.8% burnt of what is minted for Node Provider compensation.
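For transparency, here is the simple arithmetic behind those ratios. The inputs are the rough monthly figures quoted above (thread estimates, not on-chain measurements):

```python
# Back-of-the-envelope check of the ratios quoted above.
# Inputs are this thread's rough monthly estimates, not on-chain data.

burned_icp_per_month = 5_000     # ICP burned via cycles (approx.)
minted_icp_per_month = 600_000   # ICP minted for NP rewards (approx.)

used_tib, limit_tib = 3.55, 10.8  # state stored vs. self-imposed limit

print(f"burn/mint ratio:     {burned_icp_per_month / minted_icp_per_month:.2%}")
# ~0.83% (quoted as ~0.84% and ~0.8% in the thread, depending on rounding)
print(f"storage utilization: {used_tib / limit_tib:.1%}")
# ~32.9% (quoted as 32.8% above)
```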
I agree that at this point in time the inefficiency is there, compounded by decentralization efforts. But looking toward the future, isn’t this headroom for scaling without the need to onboard new providers once our goal of decentralizing NPs is met? Or even a buffer for utilization surges, preventing increased degradation of the servers when nodes can’t be onboarded fast enough?
Good point. First we need to understand two terms: “cloud” and “bare metal”. ICP has been set up to use bare metal. As you have seen, Google allows apps to scale up and down, saving “cost”, but does it really? Cloud environments are very complex on many layers, and from what I know, achieving that would be a long-term goal for Dfinity. Looking at Google’s pricing structure, a configuration similar to a Gen2 node on Google Cloud costs 12,000 USD or more per month. That is almost 4x more than the highest location-based NP reward (not considering opex). So autoscaling comes at a high cost per CPU and per unit of RAM. If we head in that direction, a big architectural change would be required at both the software and hardware layers. It’s up to Dfinity. Then again, consider the pros of the bare-metal approach: cost savings, less management overhead, and easier debugging. The con: it is very hard to scale up or down.
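Purely as a sanity check on that comparison, here is the figure it implies. Both inputs are the rough numbers above, not quotes from any price sheet:

```python
# Rough comparison implied by the figures above (all approximate).
gcp_monthly_usd = 12_000   # quoted Google Cloud cost for a Gen2-like config
multiple_vs_top_np = 4     # "almost 4x more than the highest-paid NP"

implied_top_np_reward = gcp_monthly_usd / multiple_vs_top_np
print(f"implied highest NP reward: ~${implied_top_np_reward:,.0f}/month")  # ~$3,000
```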
Currently we are paying the node providers more for computation than is actually used.
Since node providers can not ramp up or down quickly, it appears this cost is mostly fixed in the medium term.
Instead we could consider lowering the cost of computation (cycles) dynamically until the computation budget is used up. This should attract developers since storage/computation would become dramatically cheaper than AWS and alternatives.
TLDR:
We are paying a fixed cost for node providers anyway. Why not use it to lower prices for developers, instead of the money going to data centers and hardware manufacturers without the capacity being used? Let the market decide. A sketch of the idea follows below.
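One hypothetical way to express that “let the market decide” idea is a discount on cycles that deepens the further utilization falls below a target. The thresholds, bounds, and function here are mine, purely illustrative, not an existing NNS mechanism:

```python
# Illustrative sketch of utilization-based cycle pricing.
# Thresholds and bounds are hypothetical, not an actual NNS mechanism.

BASE_PRICE = 1.0         # baseline cycle price (normalized)
MIN_PRICE = 0.25         # floor so compute is never given away
TARGET_UTILIZATION = 0.8

def dynamic_cycle_price(utilization: float) -> float:
    """Discount cycles while the network is underutilized; return to the
    baseline price as utilization approaches the target."""
    if utilization >= TARGET_UTILIZATION:
        return BASE_PRICE
    # Linear discount: deeper discount the further below target we are.
    discount = (TARGET_UTILIZATION - utilization) / TARGET_UTILIZATION
    return max(MIN_PRICE, BASE_PRICE * (1 - discount))

# At today's ~33% utilization, cycles would cost ~0.41 of the baseline,
# i.e. roughly a 59% discount, under these made-up parameters.
print(dynamic_cycle_price(0.33))
```

The exact curve matters less than the principle: since the node cost is fixed in the medium term, unused capacity can be priced down to attract demand instead of sitting idle.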
When they designed the tokenomics, they assumed there would be users on this chain burning ICP to offset the inflation. Two years after launch, it’s clear there are almost zero users on the chain burning the inflationary supply. Therefore the tokenomics are wrong for this project and must be changed immediately.
Besides that, you have just provided a blueprint for a malicious actor to nuke this project intentionally. I hope you are aware of that.