Question regarding RE EXC-1168: add non-subsidised storage cost on 20+ node subnets (behind the flag)

Can someone provide some insight on the fair_gib_storage_per_second_fee?

Will subnets supporting the gib_storage_per_second_fee be phased out?

This is a 400x increase. Without context, this is a bit anxiety-inducing…

11 Likes

This would be a significant shift and disrupt a good bit of our go-forward plans. It would be great to hear something from DFINITY on what is going on here.

2 Likes

Hi folks,

Apologies for the delay. I do not have all the facts yet (nor is this part of the stack one I am familiar with), but we did ask when this thread came up.

Here is what I currently know that is most relevant:

  1. This proposed change would only affect future, to-be-created large app subnets (those with more nodes per subnet).
  2. All existing subnets would continue to have the $5/GB/year cost.
  3. Future subnets which specialize in storage (e.g. low-replication subnets) will have low costs.

I hope to get more information soon, but I wanted to be upfront about what I know so far.

11 Likes

Wow, great catch. This is a massive increase. Not sure why an increase in subnet size from 13 to 20 would trigger a 400x cycle cost increase.

Also, the term “subsidized storage” is highly concerning. A subsidy can be taken away. I thought the IC’s $5/GB/year cost was not the result of any temporary “subsidies” but rather a reflection of the true cost of hardware. If that’s not the case, then the IC will have major issues with storage moving forward. Even $5 is too expensive to store images and videos. Now, this change proposes an increase from $5 to ~$2,200 per GB per year.

The direction needs to be finding ways to lower the current $5 cost, as suggested by the storage subnet proposal a while back. This goes in the opposite direction.

8 Likes

Thank you for your reply, Diego.

1 Like

Following up here this AM.

First, Diego, I appreciate how attentive you are to the forums. Thank you :slight_smile:! None of the following is directed at you or any individual for that matter.

Regarding the large app subnets: my understanding is that more nodes => more inherent security => slower consensus cycle. A proportional increase in base storage cost is understandable. However, as a developer what I need first is more cheap storage, quicker consensus, and more throughput. I’m sure many other developers feel similarly. I sure as hell know web2 users won’t settle for less. Decentralization is meaningless if it’s not usable and accessible.

I love the IC. I believe in the vision and team. And I intend to see our project through to completion. But I have to be realistic here… Long term, if $2,200 a GB is where we are heading, I cannot in good faith continue to say that the IC is going to rebuild any meaningful amount of the internet. Will it be great for DeFi and NFTs? Probably! But if we can’t build real web-scale projects with these things, what’s the point?

350 GiB per subnet isn’t nearly enough, and certainly not at these outrageous prices. I’ll go as far as to say I believe the viability of the network rests on solving these problems. Cheap data storage is a well-solved problem. Cheap network-accessible data storage is a well-solved problem. Cheap network-accessible redundant data storage is a well-solved problem. I am very confident you all can solve these problems. So, what’s going on?

I have a lot more to say here. But, I think this is enough given how much I actually know at this point in time. :pray: :blue_heart:

12 Likes

If I had to guess, I would say the $5/GB/year is the cost of storage after custom storage subnets come out. Meaning, for example, 4-node specialized storage subnets would have the $5/GB/year pricing.

If this is true, it means that storage costs right now are indeed subsidized, as 13-node subnets will have higher costs (could even be 3x the current price or more).

If that is true, then the new proposal simply removes the storage subsidization on subnets with more than 20 nodes. This means all existing application subnets will be unaffected (except maybe the NNS, SNS, or others).

All in all, this feels very normal to me (assuming what I am describing is true). It makes sense to subsidize costs initially, since costs will go down once custom storage subnets arrive (effectively removing the storage subsidization), and it makes sense to remove the subsidy on large subnets, since storing anything on them would be unsustainable, so accurate costs should be used.

So I’m not too concerned about anything here. But I would love clarification on whether the way I am interpreting events is correct or not.

3 Likes

I really want to share your sentiment. But the Foundation has been incredibly vocal about the $5.00 GiB/yr number. Moreover, when the mainnet first launched, I was given reassurance that the original non-verified subnet prices were purely to protect the network against nefarious actors. Suddenly, and quietly, backtracking on this is a really big deal. Frankly, this makes us look really bad when you consider how much shade we’ve thrown at other networks.

I want to entertain the idea that the price point reflects the actual storage costs for N nodes. But I have a hard time believing it costs $25,666 a year to run a node (350 GiB × $2,200/yr ÷ 30 nodes). That feels like a hell of a profit. If that is the case, then we need to have a serious conversation about hardware capabilities and whether any of this is even feasible.
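
(To spell out that arithmetic, here is a quick sketch; the $2,200/GB/year price and the 30-node subnet size are this post's assumptions, not confirmed figures.)

```python
# Per-node revenue implied by a hypothetical $2,200/GB/year price on a
# 30-node subnet with 350 GiB of state. All inputs are this post's
# assumptions, not confirmed figures.
subnet_storage_gib = 350
price_per_gib_year_usd = 2200
nodes_per_subnet = 30

subnet_revenue = subnet_storage_gib * price_per_gib_year_usd  # USD/year
per_node_revenue = subnet_revenue / nodes_per_subnet

print(f"Subnet-wide: ${subnet_revenue:,.0f}/year")    # $770,000/year
print(f"Per node:    ${per_node_revenue:,.0f}/year")  # ~$25,667/year
```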

3 Likes

I would love to get the basis of the 400x cost: $2,200/GB.

Specifically:

  1. What is the assumed cost of raw storage?
  2. What is the multiplicative overhead factor of replication? (Remembering that storage is replicated across multiple nodes and incurs migration costs over the years.)
  3. How is the amortization, depreciation and loss calculated?

This should be public knowledge in a decentralized context.

1 Like

This.

It would be good to get the framework so that we can do our own calculations. If I want a replication of 4 for media and I’m fine with a data center in 1xNA, 1xEurope, 1xAsia, 1x? then I’d like to do some envelope math to figure out what that might cost in the future.

I’m guessing that since this is behind a flag, it was put there for some smoke testing, and hopefully it is just a number pulled out of the air. I can’t imagine 13x replication is that much. S3 × 13 copies × 1 year is like $3.58, which is competitive with the quoted $5. The auto-replication, the ability for a smart contract to compute over the content, and the censorship resistance are more than worth the extra buck fifty. If we’re going to go to $2,000 for a 34-node network (I’m guessing maybe this was targeted at ECDSA subnets that have no real storage need?)… well… that just doesn’t add up.
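
(A sanity check of that S3 figure, assuming S3 Standard at roughly $0.023/GB/month; actual AWS pricing varies by region and storage tier.)

```python
# Rough cost of storing 1 GB in S3, replicated 13 times, for one year,
# assuming S3 Standard at ~$0.023/GB/month (varies by region and tier).
s3_usd_per_gb_month = 0.023
replication = 13
cost_per_year = s3_usd_per_gb_month * 12 * replication
print(f"~${cost_per_year:.2f}/GB/year")  # ~$3.59, close to the $3.58 quoted
```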

All in all…I’d rather not speculate…but I guess that is what I’m doing until we get some insight.

TLDR: I asked R&D to clarify. I had a quick call with some folks in Zurich (their Friday night) and they said they would provide better context. I did not want to leave folks hanging for too long on this thread so I will share what I currently know. I expect more info in coming days and I will keep checking out this thread.

Fwiw, I totally feel that in this thread and in many others. Thank you.

My current understanding is that R&D’s current thesis is that subnets should NOT be homogeneous but should have different focuses: some smaller and faster, others deliberately very large. So in this case your intent and needs are very much heard.

While I think your intent (and I say “your” as in the broader community) was heard, I think we at DFINITY did not do a proper job communicating the proposal or its intent here. This is fair criticism. For example, I know the code in question does NOT change any prices for folks, but I do not know when these theoretical super-large subnets with high prices would come online.

I will be honest: I saw @Hazel's tweet asking for more information and design intent, and I think that is a very reasonable stance. I would think the same.

Since the code in question is for future, yet-to-be-created subnets, it would be easy to dismiss the change as innocuous NOOP code that does nothing, but I do think folks need more info. It is fair for folks to ask DFINITY. My only regret is that I do not know much, and I only asked the relevant folks on their Friday evening.

That being said, here is what I know and do not know:

  1. Existing subnets remain the same… and they have plenty of capacity for current and new developers.

  2. This change would be for super-large subnets (none of which exist today). These are in the works. So one could see the code in question as a hint… but it does not actually change anything.

  3. There are also plans for fast, cheap subnets (none of which exist today) which would optimize for costs and speed.

  4. I do not know the rationale for the 400x figure, but it has something to do with the economics of super-large subnets.

  5. I suspect part of the lack of communication was because things were not quite baked yet… so I believe it was premature to add code that would merely “hint”.

12 Likes

Diego, you convey to the top leaders that to be successful in the community and in life it is important to do things well. Honesty and well-managed transparency are increasingly essential for the support and credibility of something great. Not only do we have to change the internet, but we have to change the way things are done. You are a great example of someone honest, transparent, and trustworthy, and for me that is the great value that Dfinity has to embody and transmit. Then, when we get there, we will moon :rocket:

5 Likes

Maybe this is a good learning experience: avoid pushing out new NNS code change proposals on a Friday afternoon if this can at all be avoided (i.e., for non-emergency changes). These proposals get executed quickly and automatically anyway once the DFINITY neuron casts its vote, so the on-call engineers will probably also thank you for pushing these out early in the week.

As a side note: it would be really nice as a developer to know that each week, on Monday-Tuesday, these replica/subnet change proposals will be submitted to the NNS, so that I can schedule all of my reviewing of IC protocol changes on a Monday or Tuesday instead of sparsely throughout the week.

4 Likes

Dear community,

We would like to respond to the discussion in this forum topic that has resulted from a recent code commit by DFINITY regarding pricing of the upcoming high-replication-factor application subnets.

First of all, let us revisit how the 5-USD-per-GB-per-year price was originally determined: this was done pre-genesis, based on an assessment of hardware costs and protocol capabilities some years in the future, and assuming subnets with lower replication factors than today’s 13-node subnets. Originally, we launched with a replication factor of 7, which was then increased to 13 for application subnets without changing the pricing. Now, more than a year after launch and successful operation of the Internet Computer, we have a much better handle on the cost model and requirements, so it is an appropriate time to revisit pricing.

As we are planning to introduce subnets with higher replication than our current 13-node subnets in the near future, we were forced to think about pricing of the different resources on this new type of subnet and had started a pricing discussion internally at DFINITY. Unfortunately, parts of this discussion, specifically an extreme case, were unintentionally exposed through a code commit related to resource pricing of high-replication subnets.

It is self-evident that proceeding like this was a mistake from our side. The community has a major stake in the conversation around pricing. We apologize for this and have learnt a lesson for the future.

We suggest opening up the conversation on resource pricing to the community. To help with this, DFINITY will, as a next step, provide the relevant detailed information and an initial proposal to help frame the discussion. Based on this, you, the community, will be able to contribute and help shape the future of resource pricing for the IC.

To help reassure the community in the meantime, here’s a back-of-the-envelope example of how memory pricing can remain in the $5 range going forward. Let’s look at two scenarios, based on today’s hardware costs and the assumptions shown in the table below. Note that the Internet Computer Protocol requires enterprise-grade SSDs for their write endurance and sustained write speed; the cheapest consumer SSDs will not work (for long).

                                         Scenario 1   Scenario 2
Enterprise SSD HW cost per GB (USD)      0.2          0.4
Storage utilization (IC-wide average)    0.8          0.5
Depreciation (years)                     5            4
Node provider incentive (factor)         1.8          2
Local protocol overhead (factor)*        3.5          5

* Storing multiple checkpoints and state diffs; the Scenario 1 figure of 3.5 requires protocol optimizations.

Table 1: Assumptions for the considered scenarios

Scenario 1:
(0.2 USD/GB) / (0.8 utilization) / (5 years depreciation) * (1.8x node provider incentive) * (3.5x per-node overhead) = 0.315 USD / GB / year per node

Scenario 2 (more conservative):
(0.4 USD/GB) / (0.5 utilization) / (4 years depreciation) * (2x node provider incentive) * (5x per-node overhead) = 2 USD / GB / year per node

Taking the above scenarios gives us a spectrum of storage hardware cost incurred per GB per year per node, ranging from 0.315 to 2 USD. Looking at the assumptions, one can get a feeling for viable storage pricing in subnets with a given replication factor. Even the less conservative estimate does not yet consider expected storage price decreases due to advances in semiconductor processes; thus, by projecting expected costs to a 5-year horizon, we may be able to obtain better figures than the ones above. Note, however, that cost items such as the remaining hardware of node machines and operational expenses have not been considered at all here, but also that those will be amortized in part through charging for other resources, such as Wasm execution. The above is really to be seen as a rough, indicative assessment of the cost.
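
For readers who want to replay the arithmetic with their own assumptions, here is a minimal sketch of the formula above (the inputs are exactly the Table 1 figures; everything beyond those, including the replication example at the end, is illustrative only):

```python
# Back-of-the-envelope storage hardware cost per GB per year, per node,
# replaying the two scenarios from Table 1. Swap in your own assumptions.
def storage_cost_per_gb_year(ssd_cost_per_gb, utilization,
                             depreciation_years, provider_incentive,
                             protocol_overhead):
    """USD per GB per year of storage hardware cost, for a single node."""
    return (ssd_cost_per_gb / utilization / depreciation_years
            * provider_incentive * protocol_overhead)

scenario_1 = storage_cost_per_gb_year(0.2, 0.8, 5, 1.8, 3.5)
scenario_2 = storage_cost_per_gb_year(0.4, 0.5, 4, 2.0, 5.0)
print(f"Scenario 1: ${scenario_1:.3f}/GB/year per node")  # 0.315
print(f"Scenario 2: ${scenario_2:.3f}/GB/year per node")  # 2.000

# Multiplying by a replication factor gives a feel for whole-subnet cost,
# e.g. 13-node replication under the conservative scenario:
print(f"13 nodes, Scenario 2: ${scenario_2 * 13:.0f}/GB/year")  # 26
```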

We hope to clarify things with this post and are looking forward to a discussion with you on this topic.

14 Likes

I think this approach to calculating the price per GB is a bit flawed, as it doesn’t take into account a crucial detail: node providers are paid a fixed monthly rate for their service. The rewards vary based on country and, in the future, on specs too. For simplicity’s sake, let’s assume all nodes are paid the lowest rate, 873 XDR/month or ~$1,111 at the current exchange rate, and that subnets have 1 TB of state, as that is the long-term plan afaik.

If the IC keeps charging $5/GB/year, deflation might never be possible: with this pricing it’d only cost $5,000/month to occupy an entire subnet’s state and block new canisters from using it, as opposed to the ~$14k minted as rewards for NPs in that subnet. Computation MIGHT make up for it, but not necessarily.
Bad actors could use this as an attack vector to waste the network’s performance/profitability and hinder existing dApps, e.g. dApps that need to create new canisters.

Currently we have ~35 subnets, so an attacker could spend $52,500/yr ($5 × 300 GB × number of subnets) and waste almost the IC’s entire computational capabilities; even with 1 TB state subnets, it’d only cost $175k/year, not much for such a disruptive attack.
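
(To spell out those figures, a quick sketch; the subnet count and state sizes are this post's assumptions.)

```python
# Yearly cost to fill every subnet's state at $5/GB/year, using this
# post's assumptions (~35 subnets; current 300 GB vs planned 1 TB state).
price_usd_per_gb_year = 5
subnet_count = 35
for state_gb in (300, 1000):
    total = price_usd_per_gb_year * state_gb * subnet_count
    print(f"{state_gb} GB state per subnet: ${total:,}/year")
# -> $52,500/year today, $175,000/year with 1 TB states
```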

If we want to mitigate this, storage costs should be based on the worst-case scenario: a subnet whose entire state is used but on which computation is never executed. This means the formula for the GB/year cost on a given subnet should be: (single node’s yearly rewards × replication factor) / GBs of state. Even in the most optimistic scenario, where all nodes are paid the lowest rate and future state increases are accounted for, we get (13,332 × 13) / 1,000 = $173/GB/year. See the sketch below.
This is obviously too high for devs, and it assumes 13-node subnets being the norm! The cost would obviously increase as subnets grow, and even if we had 1-node subnets, a GB would have to cost $13/year to prevent this scenario.
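
A minimal sketch of that worst-case break-even formula (all inputs are this post's assumptions: lowest reward tier, 13-node replication, planned 1 TB of state):

```python
# Break-even storage price for the worst case described above: a subnet
# whose state is full but on which no computation is ever executed.
# Inputs are this post's assumptions, not official figures.
node_reward_usd_per_year = 1111 * 12  # 873 XDR/month at the quoted rate
replication_factor = 13               # current app-subnet size
subnet_state_gb = 1000                # assumed long-term state limit

break_even = node_reward_usd_per_year * replication_factor / subnet_state_gb
print(f"13-node break-even: ${break_even:.0f}/GB/year")  # ~$173

# Even a hypothetical single-node subnet would need:
print(f"1-node break-even:  "
      f"${node_reward_usd_per_year / subnet_state_gb:.1f}/GB/year")  # ~$13.3
```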

I’m not advocating for price increases by any means; I would like to see them lowered. But if network profitability and deflation are the long-term goal, something has to be done. Looking away and hoping computation makes up for this flaw could eventually hurt the IC badly.

I don’t have any concrete proposals for this issue, but imo Dfinity should stop treating all data the same way; static data and data which has to be processed could be handled with different mechanisms, e.g. files could be stored without replication, using erasure coding.

12 Likes

What subnet size are you assuming in these two scenarios?

If the IC keeps charging $5/GB/year, deflation might never be possible: with this pricing it’d only cost $5,000/month to occupy an entire subnet’s state and block new canisters from using it, as opposed to the ~$14k minted as rewards for NPs in that subnet

I can’t edit the original post, so I’ll say it here:
Just noticed I made a mistake in that paragraph: $5,000 is the price per year, not per month ($5 × 1,000 GB), and ~$175k/year is the average of rewards minted per subnet, while ~$14k is the monthly amount.

1 Like