An interesting claim was made on Twitter suggesting that the Internet Computer (IC) will face scaling challenges with the widespread adoption of dApps, due to the current 300 GB limit for canisters/subnets.
I’m keen to know if there are any existing plans or roadmap items to tackle this issue.
I’m not sure about the exact numbers, but I think the limit has already increased. In short: various teams are working on a number of different optimisations that will allow for larger subnets. For example, one of the biggest limitations is checkpointing time, i.e. the time it takes to snapshot the full subnet state. There are also ideas to reduce the total state size by deduplicating some parts of storage.
+1 to @Severin’s reply. The limit has already been increased to 450 GB. Several engineers are working on this problem, and more increases will follow soon. Stay tuned.
I wonder what a chart of capacity (heap/stable_memory/subnet), throughput (speed/capacity), and computation (instruction_limit/speed) would look like over the years. I have the feeling some of these are doubling every year. Perhaps we’ll see something like Moore’s law forming.
The other thing that is being worked on (actually, it is pretty much complete) is subnet splitting.
This means the subnet size limit is much less of a concern: if your dapp grows beyond 450 GB, its canisters will simply be distributed across multiple subnets.
From the point of view of dapps, subnets are virtual machines. So there are definite benefits to running your complete application on a single (virtual) machine: higher communication throughput, lower latency, less variability. But, as with all systems, there are upper limits to how far a single machine can scale. So large dapps must eventually scale across subnets.
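To make that concrete: from a canister’s point of view, an inter-canister call is written the same way regardless of whether the callee ends up on the same subnet or a different one; what you observe is mostly a difference in latency and throughput, since cross-subnet messages have to be certified by the sending subnet and inducted by the receiving one. A minimal Rust sketch, assuming a recent ic-cdk (`greet` is a hypothetical method on the callee canister):

```rust
// Illustrative sketch, assuming a recent ic-cdk (e.g. 0.13) and candid 0.10.
use candid::Principal;
use ic_cdk::update;

/// Forward a request to another canister. The call site is identical
/// whether `backend` lives on the same subnet or on a different one;
/// only latency and throughput differ.
#[update]
async fn forward(backend: Principal, payload: String) -> String {
    // `greet` is a hypothetical method on the callee canister.
    let (reply,): (String,) = ic_cdk::call(backend, "greet", (payload,))
        .await
        .expect("inter-canister call failed");
    reply
}
```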
More control over how things get split will also be introduced eventually, so that e.g. the dapp controller can ensure sensible sharding, and so that dapps that don’t require TBs of state can keep all their canisters together on a single subnet.
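Until that lands, sharding can also be done at the application level. A rough sketch, with all names (`SHARDS`, `put`, `fnv1a`) made up for illustration: a front-end canister hashes each key to pick one of several shard canisters that the dapp has deployed, possibly on different subnets.

```rust
// Illustrative only: application-level sharding, assuming the dapp has
// already created one shard canister per subnet and recorded their
// Principals. All names (SHARDS, put, fnv1a) are made up for this sketch.
use std::cell::RefCell;

use candid::Principal;
use ic_cdk::update;

thread_local! {
    // Principals of the shard canisters, filled in at install/upgrade time.
    static SHARDS: RefCell<Vec<Principal>> = RefCell::new(Vec::new());
}

/// Deterministically route a key to one shard canister and forward the write.
#[update]
async fn put(key: String, value: Vec<u8>) {
    let shard = SHARDS.with(|s| {
        let shards = s.borrow();
        // Simple hash-mod routing; a real dapp might prefer consistent
        // hashing so that adding a shard does not reshuffle every key.
        shards[(fnv1a(&key) as usize) % shards.len()].clone()
    });
    let _: () = ic_cdk::call(shard, "put", (key, value))
        .await
        .expect("call to shard canister failed");
}

// Tiny FNV-1a hash, just to avoid pulling in an extra dependency.
fn fnv1a(s: &str) -> u64 {
    s.bytes().fold(0xcbf2_9ce4_8422_2325, |h, b| {
        (h ^ b as u64).wrapping_mul(0x0100_0000_01b3)
    })
}
```

The routing function is where the “sensible sharding” decisions live: keeping canisters and data that interact heavily on the same subnet is what preserves the single-(virtual-)machine benefits described above.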