Long Term R&D: Storage Subnets (proposal)

1. Summary

This is a motion proposal for the long-term R&D of the DFINITY Foundation, as part of the follow-up to this post: Motion Proposals on Long Term R&D Plans (Please read this post for context).

This project’s objective

Currently, the Internet Computer has two types of subnet blockchains: system (with high replication) and application (with medium replication). This project is to add support for a third subnet type to support dapps with high storage requirements.

2. Discussion lead

Akhi Singhania

3. How this R&D proposal is different from previous types

Previous motion proposals have revolved around specific features and tended to have clear, finite goals that are delivered and completed. They tended to be measured in days, weeks, or months.

These motion proposals are different: they define the long-term plan that the foundation will use, e.g., for hiring and organizational build-out. They have the following traits and patterns:

  1. Their scope is years, not weeks or months as in previous NNS motions.
  2. They have a broad direction, but as active areas of R&D they do not have an obvious line of execution.
  3. They involve deep research in cryptography, networking, distributed systems, languages, virtual machines, and operating systems.
  4. They are meant to match the areas where the DFINITY foundation’s expertise is strongest.
  5. Work on these proposals will not start immediately.
  6. There will be many follow-up discussions and proposals on each topic when work is underway and smaller milestones and tasks get defined.

An example is the R&D on “Scalability”, where a team will be investigating and improving the scalability of the IC at various stages. Different bottlenecks will surface and different goals will be met.

4. How this R&D proposal is similar to what we have seen

We want to double down on the behaviors we think have worked well. These include:

  1. Publicly identifying owners of subject areas to engage and discuss their thinking with the community
  2. Providing periodic updates to the community as things evolve, milestones are reached, new proposals are needed, etc.
  3. Presenting more and more R&D thinking early and openly.

This has worked well for the last 6 months so we want to repeat this pattern.

5. Next Steps

  • Developer forum intro posted
  • 1-pager from the discussion lead posted
  • NNS Motion proposal submitted

6. What we are asking the community

  • Ask questions
  • Read 1-pager
  • Give feedback
  • Vote on the motion proposal

Frankly, we do not expect many nitty-gritty details because these proposals are meant to cover projects with long time horizons.

The DFINITY foundation’s only goal is to improve the adoption of the IC, so we want to sanity-check the projects we consider necessary for growing the IC by having you (the ICP community) tell us what you think of these active R&D threads.

7. What this means for the existing Roadmap or Projects

In terms of the current roadmap and the proposals already being executed: those are still being worked on and have priority.

An intellectually honest way to look at these long-term R&D projects is to see them as the upstream or “primordial soup” from which more fully baked projects emerge. With this lens, these proposals are akin to asking, “what kind of specialties or strengths do we want to make sure the DFINITY foundation has built up?”

Most (if not all) projects that the DFINITY foundation has executed or is executing are born from long-running R&D threads. Even when community feedback tells the foundation “we need X” or “Y does not work”, it is typically the team with the most relevant R&D area that picks up the short-term feature or project.

8 Likes

Please note:

Some folks have asked if they should vote to “reject” any of the Long Term R&D projects as a way to signal prioritization. The answer is simple: “No, please, ACCEPT” :wink:

These long-term R&D projects are the DFINITY foundation’s thesis on the R&D threads it should maintain across years (3 years is the number we sometimes use internally). We are asking the community to ACCEPT (pending the 1-pager and more community feedback, of course). Prioritization can come as a separate step.

3 Likes

Hi all, I am Akhi Singhania. I will be the discussion lead for this proposal. I am the senior engineering manager for the Execution team. My background is in operating systems and distributed systems. Before working on the Internet Computer, I worked on OpenOnload and Barrelfish.

6 Likes

Summary

This project aims to improve the support for dapps with high storage requirements on the Internet Computer.

Background

Currently the Internet Computer has two types of subnet blockchains: system (with high replication) and application (with medium replication).

System subnets with a high replication factor are necessary for platform-critical services like the NNS. They are costly to operate (lots of nodes are needed) and slower to update (the finalisation rate is lower to accommodate the additional nodes), but offer very high security. Application subnets have medium replication, so they are cheaper and faster to update, but have slightly lower security.

Objective

As the Internet Computer evolves, it is possible to imagine other types of subnets that operate at different points on the design spectrum and make different trade-offs between security, speed, and cost.

This motion proposes that the DFINITY organisation invest engineering resources into researching and developing additional types of subnets. More concretely, the motion proposes to explore the concept of storage subnets. The core features of this subnet type will be the following (a rough illustration follows the list):

  • It uses node machines with higher storage capacity, and
  • It operates with fewer nodes (smaller replication factor) than other subnet types.
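
As a rough illustration of that trade-off, here is a minimal, hypothetical Rust sketch of how a storage subnet might differ from the existing subnet types in just two parameters; the types and numbers are placeholders of mine, not committed design values:

```rust
/// Hypothetical subnet parameters, for illustration only; these are not the
/// actual replica types and the numbers are placeholders.
struct SubnetParams {
    replication_factor: u32, // number of replicas running the subnet
    node_storage_gib: u64,   // usable storage offered per node
}

// Existing subnet types (approximate, illustrative values).
const SYSTEM: SubnetParams = SubnetParams { replication_factor: 40, node_storage_gib: 1024 };
const APPLICATION: SubnetParams = SubnetParams { replication_factor: 13, node_storage_gib: 1024 };

// The proposed storage subnet: fewer replicas, more disk per node.
const STORAGE: SubnetParams = SubnetParams { replication_factor: 7, node_storage_gib: 8192 };
```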

In order to realise this new subnet type, research into the following topics will be needed:

  • Intra-subnet protocol improvements: to ensure that a subnet with a large state is operational.
  • Inter-subnet protocol improvements: to ensure that a less secure subnet cannot impact the security and functionality of other subnets.
  • Data integrity improvements: to ensure that data integrity is maintained with lower replication factor.

Why this is important

At 5 USD / GiB / year, the Internet Computer today already has very competitive fees for storage. This feature will allow the IC to offer storage to dapps at even lower cost (albeit with potentially different semantics and guarantees). Lower storage costs will enable a new class of dapps that are prohibitively expensive to run on the IC today, and will help improve the resilience of existing dapps by allowing them to store backups of their data.

Due to the lower replication factor, fewer nodes will be needed to provide the same amount of storage capacity on the IC. This means that, for the same number of nodes, the IC will offer more storage capacity.

Topics under this project

Intra-subnet protocol improvements

The new storage subnets will have larger states. So additional improvements to the protocol and implementation will be needed to ensure that the large states are properly handled. One obvious component that will have to be improved is state synchronization. This component is responsible for allowing new nodes or slow nodes to catch up with the latest state. Improvements will be needed to ensure that nodes can still catch up even with larger states.
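
To make the state-sync concern concrete, here is a minimal sketch under assumptions of my own (a chunk manifest with per-chunk hashes; the names `Manifest` and `fetch_chunk` are illustrative, not the actual replica API) of how a node could catch up on a very large state incrementally:

```rust
use sha2::{Digest, Sha256};

/// Illustrative manifest: the expected hash of every fixed-size chunk of the state.
struct Manifest {
    chunk_hashes: Vec<[u8; 32]>,
}

/// Download the state chunk by chunk, verifying each chunk against the manifest,
/// so a multi-terabyte state can be fetched incrementally and resumed after failures.
fn sync_state(
    manifest: &Manifest,
    fetch_chunk: impl Fn(usize) -> Vec<u8>, // in practice, fetched from peers
) -> Result<Vec<Vec<u8>>, String> {
    let mut chunks = Vec::with_capacity(manifest.chunk_hashes.len());
    for (i, expected) in manifest.chunk_hashes.iter().enumerate() {
        let chunk = fetch_chunk(i);
        if Sha256::digest(&chunk).as_slice() != &expected[..] {
            return Err(format!("chunk {} failed hash verification", i));
        }
        chunks.push(chunk);
    }
    Ok(chunks)
}
```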

Inter-subnet protocol improvements

Due to the lower replication factor, the new subnet type might be easier to corrupt or to stall. Protocol improvements will be needed to ensure subnet isolation so that faults in one subnet cannot spread to other subnets.

Data integrity improvements

A subnet with a lower replication factor can tolerate fewer corrupted nodes. Protocol improvements will be needed to ensure that as long as at least one honest node is available, data integrity will be guaranteed.
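
As a sketch of the kind of guarantee meant here (my own illustration, not the actual protocol): if the subnet certifies a hash of the data, then any single honest node holding a copy can serve it with verifiable integrity, even if every other replica is corrupted.

```rust
use sha2::{Digest, Sha256};

/// A reader accepts a blob from *any* node as long as it matches a hash that was
/// previously certified by the subnet, so one honest copy is enough.
fn verify_blob(blob: &[u8], certified_hash: &[u8; 32]) -> bool {
    Sha256::digest(blob).as_slice() == &certified_hash[..]
}
```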

Discussion leads

@akhilesh.singhania , @bogdanwarinschi , @derlerd-dfinity1

Why the DFINITY Foundation should make this a long-running R&D project

This project will enable an important class of dapps on the IC. Additionally, a number of protocol and implementation improvements required to achieve the goals of this project will also be applicable to other parts of the IC and improve the IC in general.

Skills and Expertise necessary to accomplish this (maybe teams?)

Due to the complexity of the initiative, a broad selection of skills will be needed, as outlined next:

  • System design
  • System-level software engineering
  • Algorithms, complexity
  • Probability theory
  • Cryptography
  • Deep understanding of Internet Computer consensus
  • API design

At least the following teams are likely required:

  • Research
  • Networking
  • Consensus
  • Message Routing
  • Execution
  • Security

Open Research questions

There can be multiple other mechanisms to achieve the desired goals of this project.

Another idea is to use erasure codes to split the data across multiple nodes. With this approach you could have a subnet with many nodes and high resilience, but where the total storage overhead is small (<2x). Communication during storage and retrieval is higher, though still bounded (<2x), and the nodes that store data must compute the codewords. Search is also not as easy, so much depends on whether the data will be fully at rest. Finally, if the set of nodes running the storage network changes, an expensive recoding is required.
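
A back-of-envelope sketch of that alternative, with illustrative parameters of my own: split the data into k data shards plus (n - k) parity shards across n nodes, so that any k shards suffice to reconstruct the data.

```rust
/// Storage overhead of an (n, k) erasure code: total bytes stored per byte of payload.
fn erasure_overhead(n: u32, k: u32) -> f64 {
    n as f64 / k as f64
}

fn main() {
    // e.g. 28 nodes, any 16 of which can reconstruct the data
    let overhead = erasure_overhead(28, 16);
    println!("storage overhead: {:.2}x", overhead); // 1.75x, i.e. < 2x
    // Reads and writes touch at least k shards, so communication overhead is
    // similarly bounded below 2x, at the cost of recoding when membership changes.
}
```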

Examples where community can integrate into project

As already mentioned, in the initial phase of this motion, community input on refining the scope and priorities of this project is highly appreciated. In addition, many technical discussions with the community are anticipated as the motion, and the research and development of potential technical solutions to address its goals, move forward.

What we are asking the community

  • Review comments, ask questions, give feedback

  • Vote accept or reject on NNS Motion

10 Likes

Thanks for the overview. Excited to see how this develops.

Two questions:

  • It operates with fewer nodes (smaller replication factor) than other subnet types.

With fewer nodes, a subnet may struggle to serve queries with high throughput and low latency. Storage subnets will conceivably store large assets like >1 GB video files. Serving 1 GB may require up to 500 queries (500 queries * 2 MB per query = 1 GB). It’s already at least 3x slower serving images from the IC versus conventional CDNs. I worry that reducing the number of nodes may make this worse.

At 5 USD / GiB / year, the Internet Computer today already has very competitive fees for storage. This feature will allow the IC to offer storage to dapps at even lower costs (albeit with potentially different semantics and guarantees).

This is a great opportunity to understand how “prices” on the IC are set. This is something that’s been bugging me for a while. Right now, storage costs ~5 USD / GB / year on the IC. How were the cycle costs determined? Couldn’t we all vote to lower that to, say, 2 USD if we wanted? What economic considerations are important when setting “prices”? A higher price means more cycles (and ICP) will be burned. What is the right deflation rate to target for a healthy ecosystem? Is it OK to accept more inflation in the short term (e.g. lower storage costs) in order to attract developers? I have so many questions.

In a cloud like AWS, the price is set based on ordinary business metrics like SKU unit economics, margin, revenue, etc… How should prices be set on the IC?

4 Likes

Also, with respect to serving asset files, I think the design of a storage subnet should be informed by the (future) design of IC boundary nodes. If a storage subnet is an IC data center, then the boundary nodes are the IC’s CDN. They are related.

1 Like

This is a good point. An idea that has been discussed is to have subnets where some nodes do not participate in consensus and only serve queries. Such subnets would be ideally positioned to serve query-heavy workloads.

There are applications that benefit from large storage but do not necessarily need a lot of query capacity (e.g. backups). However, if the primary use case is serving queries, then such subnets may indeed not be ideal.

From my perspective, the current prices are sort of a finger-in-the-air estimate of what they should be. What prices to charge for something is, as you can imagine, a very complicated subject indeed. The ideal price should be where the supply and demand curves meet. The community could of course make a proposal to set the price to 2 USD or even 200 USD. But if that is not where the curves intersect, we will either have too much supply (not enough usage) or too much demand (no free capacity left).

I don’t know what the best way to set fees on the IC could be. I just have some intuition from my basic understanding of economics. I would love to hear from others on this subject. It is probably wise to fork off a thread for this topic though.

Yes, another very valid and fair point. The two functionalities are deeply connected and one should not be evolved without informing the other.

3 Likes

This is a good point. An idea that has been discussed is to have subnets where some nodes do not participate in consensus and only serve queries. Such subnets would be ideally positioned to serve query-heavy workloads.

Yeah, I was thinking of something similar, but even more generally. What if anyone could host a query node?

Kinda like anyone can run an Ethereum node but not necessarily participate in mining / validating new blocks. But it’s even easier in IC because you don’t need to download the entire 400 GB blockchain like you do in ETH; you just need to download a catch-up package from the subnet you’re running the query node for.

There are applications that benefit from large storage but do not necessarily need a lot of query capacity (e.g. backups). However, if the primary usecase is of serving queries, then such subnets may indeed not be ideal.

Exactly. S3 and GCS have different storage classes for different cost + latency requirements.

What prices to charge for something as you can imagine is a very complicated subject indeed. The ideal price should be where the supply and demand curves meet.

This is something I’ve been thinking about. Why is the price for 1 GB / year storage ~$5 on the IC but ~$840,000,000 on Solana? Is it really because the curves for the supply for storage and the demand for storage intersect at a point much lower on the IC than on SOL? The economics are hard to wrap my head around, since there’s not a conventional supply curve here. Node providers supply storage but also supply compute, and are compensated with monthly rewards based on SLA instead of on a per-unit basis. I’d be curious about the decision rationale of the current cycle costs.

2 Likes

In theory this should work. I suppose we would start off with the simpler version of this, where the nodes are similarly capable and managed in a similar style to all other nodes. Once the engineering is worked out for that, then extend…

I think we roughly calculated it as follows (a rough arithmetic check follows the list):

  • Eventually the IC protocol and implementation will be sufficiently improved that a subnet with 3 TiB of disk will offer around 1 TiB of storage to canisters.
  • A disk has a lifetime of 4 yrs.
  • $5 / GiB / year => $20 / GiB / 4 yrs => ~$20,000 / TiB / 4 yrs.
  • For a 7-node subnet, that is a little less than $3k per node, and for a 13-node subnet, that is around $1,500 per node.
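
As a rough arithmetic check of those numbers (same assumptions as above: 1 TiB of canister storage per subnet, a 4-year disk lifetime, 5 USD per GiB per year):

```rust
fn main() {
    let price_per_gib_year = 5.0_f64; // USD
    let years = 4.0;
    let gib_per_tib = 1024.0;

    // Revenue per subnet for 1 TiB of canister storage over the disk lifetime.
    let per_subnet = price_per_gib_year * years * gib_per_tib; // ≈ 20,480 USD
    println!("per subnet over 4 years: ~{:.0} USD", per_subnet);
    println!("per node, 7-node subnet:  ~{:.0} USD", per_subnet / 7.0);  // ≈ 2,926
    println!("per node, 13-node subnet: ~{:.0} USD", per_subnet / 13.0); // ≈ 1,575
}
```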

As more experience is gathered from operating the IC, the above assumptions will have to be revised.

5 Likes

Proposal is live: Internet Computer Network Status

What will happen when the storage subnet goes live? If we are already storing many files, such as NFT metadata, in canisters on an application subnet, how can we move that data to the storage subnet?

Are there any updates on this?

1 Like

Just chipping in to suggest that legal, privacy and censorship issues should be a key part of the design consideration, and noting that:

  1. Even defining the objectives and potential design space will require some debate and learning from prior art.
  2. There is interdependency with other research programs and ongoing debates such as DNS as censor or boundary nodes as censor, or the role of the NNS.

Are there any updates on this? I’d like to store videos and images on-chain, but the $5 / GB / year will need to be brought down (by 2-3 orders of magnitude) to make it economical to do so.

4 Likes

If the Internet Computer is to truly match the internet and allow people to build any application on it, we are all likely going to need storage subnets that can compete on price and performance with the likes of AWS and other decentralized storage networks. $5 per GB, while cheap for a blockchain, is still prohibitively expensive for running data-intensive applications. I wonder if there is any progress at all toward making this happen on the Internet Computer.

3 Likes

Competing with AWS is impossible unless Amazon earns an incredibly high margin on data storage; the inherent overhead of a decentralized system will always make it more expensive than a centralized one, and if we want deflation, the cycle cost per instruction can’t be arbitrarily lowered. Instead, I think we should aim to compete with similar systems that focus on decentralized storage only: Filecoin, Arweave, and Storj, to name a few. That’s a much more realistic goal.

4 Likes

What’s your definition of “compete”?

1 Like

Pricing mainly: on Arweave, 1 GB costs $8 for 200 years’ worth of immutable storage, which is orders of magnitude cheaper than the IC. If you add the cost to upload the data to the IC, which according to this tweet is around $7 per GB, any app with non-trivial memory usage is not economically viable on the IC.

6 Likes

@akhilesh.singhania any update on storage subnets?

2 Likes

Hey guys,
Sorry for not updating you for such a long time.

While this is a long-term vision, there are some concrete steps in this direction that are done or in progress (see the latest Global R&D):

  1. Stable Structures – allows storing data directly in stable memory (see the sketch below); repo.
  2. High-repl. Subnets – allows choosing a subnet type with a higher replication factor or more storage space.
  3. Replica HW 2.0 – a step toward having subnets with more storage.
  4. Big Data – the Scalability and Performance group is working on allowing canisters to access much more data. It’s open; you can find more info and join on our wiki.
  5. 32GB Stable Memory – allows canisters to have up to 32GB of stable memory.
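
For point 1, a minimal usage sketch with the ic-stable-structures crate looks roughly like the following; the exact API may differ between crate versions, so please consult the linked repo for the current interface:

```rust
use std::cell::RefCell;
use ic_stable_structures::{DefaultMemoryImpl, StableBTreeMap};

thread_local! {
    // A map kept in stable memory, so its contents survive canister upgrades
    // and are not constrained by the wasm heap.
    static ASSETS: RefCell<StableBTreeMap<u64, Vec<u8>, DefaultMemoryImpl>> =
        RefCell::new(StableBTreeMap::init(DefaultMemoryImpl::default()));
}

fn store_asset(id: u64, bytes: Vec<u8>) {
    ASSETS.with(|m| m.borrow_mut().insert(id, bytes));
}
```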

Sorry, this is just off the top of my head, and we don’t have any price targets yet… But we are definitely working on storage.

10 Likes