Hi everyone,
With exciting developments like the Caffeine AI platform potentially bringing a significant influx of new developers to the Internet Computer, I’ve been pondering how we can ensure our subnet architecture is as optimized and scalable as possible. I have a few observations and questions I’d love to discuss with the community.
1. Data Storage Costs vs. Subnet Maintenance:
As I understand it, the cost to store 1GB of data is roughly $5 per year. A subnet with, say, 2TB of capacity could therefore be filled by dapps paying a collective $10,000 per year in storage fees. However, the actual cost of running and maintaining the nodes for that entire subnet is likely considerably higher. This points to a potential shortfall: the direct revenue from storage may not cover the operational costs of the infrastructure providing that storage, especially on a subnet that is storage-heavy but compute-light.
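To make that gap concrete, here is a quick back-of-envelope sketch. The $5/GB/year rate and ~2TB capacity are the figures above; the 13-node subnet size is typical for application subnets, but the per-node monthly cost is purely my own placeholder assumption, not an official number:

```python
# Back-of-envelope comparison: maximum storage revenue vs. node costs for one subnet.
# The $5/GB/year rate and ~2 TB capacity come from the post above; the node count
# and per-node monthly cost are placeholder assumptions, not official figures.

STORAGE_PRICE_USD_PER_GB_YEAR = 5      # approximate figure cited above
SUBNET_CAPACITY_GB = 2_000             # ~2 TB of replicated state
NODES_PER_SUBNET = 13                  # typical application subnet size
NODE_COST_USD_PER_MONTH = 1_800        # assumed all-in cost per node (hypothetical)

max_storage_revenue = STORAGE_PRICE_USD_PER_GB_YEAR * SUBNET_CAPACITY_GB
subnet_cost_per_year = NODES_PER_SUBNET * NODE_COST_USD_PER_MONTH * 12

print(f"Max yearly storage revenue: ${max_storage_revenue:,}")        # $10,000
print(f"Yearly subnet node cost:    ${subnet_cost_per_year:,}")       # $280,800
print(f"Storage revenue covers:     {max_storage_revenue / subnet_cost_per_year:.1%}")  # ~3.6%
```

Even if the subnet were completely full, storage fees alone would cover only a few percent of that (assumed) node bill.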
2. Canister Limits, Inactive Canisters, and Archival:
The current limit of around 100,000 canisters per subnet is generous, and the low cost of creating canisters is a fantastic feature for developer onboarding. However, if ICP is to attract a developer base significantly larger than Ethereum's, we could anticipate a massive proliferation of canisters. Many of these might be experiments, test projects, or canisters abandoned shortly after creation.
These dormant canisters would still reside on subnets, consuming resources and counting towards the limit, even if they perform zero computations. This could lead to subnets becoming “full” of inactive canisters, making them uneconomical.
Are there any mechanisms in place or planned for canister archival?
It would be highly beneficial if canisters (especially per-user data canisters in dapps like Open Chat that go inactive for long periods) could be archived to reduce costs and then efficiently restored when the user returns.
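To illustrate what I have in mind, here is a purely hypothetical sketch of such an archive/restore lifecycle. None of the names or calls below correspond to an existing IC management API; it is just a toy model of the idea:

```python
import json
import time

# Purely illustrative sketch of an archive/restore lifecycle for inactive canisters.
# ArchiveStore, archive_canister and restore_canister are invented names; nothing
# here corresponds to an existing IC management-canister API.

INACTIVITY_THRESHOLD_SECS = 180 * 24 * 3600   # e.g. archive after ~6 months idle

class ArchiveStore:
    """Stand-in for a cheap, possibly off-subnet storage tier."""
    def __init__(self):
        self._blobs = {}

    def put(self, canister_id: str, state: dict) -> None:
        self._blobs[canister_id] = json.dumps(state)

    def take(self, canister_id: str) -> dict:
        return json.loads(self._blobs.pop(canister_id))

def archive_canister(canister_id: str, live_state: dict, last_active: float,
                     store: ArchiveStore) -> bool:
    """Move an idle canister's state to the archive tier, freeing subnet capacity."""
    if time.time() - last_active < INACTIVITY_THRESHOLD_SECS:
        return False                       # still active, leave it alone
    store.put(canister_id, live_state)     # snapshot the state
    # ...here the subnet would release the canister's slot and replicated state...
    return True

def restore_canister(canister_id: str, store: ArchiveStore) -> dict:
    """Re-hydrate an archived canister when its user returns."""
    state = store.take(canister_id)
    # ...here the subnet would reinstall the code and reload the state...
    return state
```

The key point is that an archived canister would stop occupying replicated subnet state (and a canister slot) while its data sits in a cheaper tier, and would only be re-hydrated when the user comes back.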
3. Underutilized Subnets and Cycle Burn:
We can observe subnets that host a large number of canisters, yet their instruction execution count is minimal. Consequently, the cycles burned on these subnets might not be sufficient to cover the costs associated with maintaining their constituent nodes. While it’s crucial for ecosystem-critical applications to operate affordably, the long-term hosting of numerous idle user-created canisters might not be economically efficient for the network.
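As a rough illustration, here is how little a subnet full of small, idle canisters actually burns. The storage rate (roughly 127,000 cycles per GiB-second on a 13-node subnet) and the cycles-to-USD conversion are approximate public figures as I understand them; the canister count and average size are made-up assumptions:

```python
# Rough estimate of the cycles burned per year by idle canisters on one subnet,
# converted to USD. The storage rate and cycle/USD conversion are approximate
# public figures; the number and size of idle canisters are made-up assumptions.

CYCLES_PER_GIB_SECOND = 127_000            # approx. storage fee on a 13-node subnet
SECONDS_PER_YEAR = 365 * 24 * 3600
USD_PER_TRILLION_CYCLES = 1.33             # roughly 1 XDR per 1T cycles

idle_canisters = 50_000                    # assumed: half the ~100k canister limit
avg_state_gib = 0.01                       # assumed: ~10 MiB of state each

cycles_per_year = idle_canisters * avg_state_gib * CYCLES_PER_GIB_SECOND * SECONDS_PER_YEAR
usd_per_year = cycles_per_year / 1e12 * USD_PER_TRILLION_CYCLES

print(f"Cycles burned per year: {cycles_per_year:.2e}")
print(f"That is about ${usd_per_year:,.0f} per year for the whole subnet")
```

Roughly $2,700 per year in burned cycles is a long way from the node-cost figure assumed in the first sketch.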
4. Future Scaling Plans – Capacity and Subnet Splitting:
I’m aware of plans to increase the number of canisters a subnet can host, which is a positive step. However, I’m trying to understand the implications of significantly increasing individual canister storage capacity (e.g., up to 500GB) when a subnet’s total capacity might be around 2TB. If a subnet ends up hosting just four such large canisters, their storage fees at $5/GB/year would still only add up to about $10,000 per year, and it seems unlikely they would burn enough additional cycles through computation to cover the maintenance costs of the nodes, especially if they are primarily storage-focused.
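Reusing the same (hypothetical) node-cost assumption from the first sketch, here is what each of those four canisters would have to burn in compute for the subnet to break even:

```python
# Hypothetical break-even for a subnet hosting four 500 GB, storage-focused canisters.
# Reuses the illustrative node-cost assumption from the earlier sketch; the cycle/USD
# conversion is approximate. None of these are official figures.

NODES_PER_SUBNET = 13
NODE_COST_USD_PER_MONTH = 1_800            # assumed, as above
USD_PER_TRILLION_CYCLES = 1.33             # roughly 1 XDR per 1T cycles

subnet_cost_per_year = NODES_PER_SUBNET * NODE_COST_USD_PER_MONTH * 12   # $280,800
storage_fees_per_year = 4 * 500 * 5                                      # $10,000 at $5/GB/year

shortfall = subnet_cost_per_year - storage_fees_per_year
needed_usd_per_canister = shortfall / 4
needed_tc_per_canister = needed_usd_per_canister / USD_PER_TRILLION_CYCLES

print(f"Shortfall after storage fees: ${shortfall:,}")                   # $270,800
print(f"Needed compute burn per canister: ${needed_usd_per_canister:,.0f}/year "
      f"(~{needed_tc_per_canister:,.0f}T cycles)")
```

Tens of thousands of dollars’ worth of cycles per canister per year seems implausible for something that is primarily a data store.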
I’ve also heard discussions about “halving” subnets.
Could someone elaborate on how this mechanism is envisioned to work?
For instance, could a single node effectively serve two “halved” subnets? Perhaps by participating in consensus for a data-intensive subnet in one round, and then validating for a compute-intensive subnet in the next? Such an approach could potentially optimize the utility of each node machine and the overall network resource allocation.
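Just to illustrate the idea (this is emphatically not an existing IC mechanism), a per-round schedule on a shared node could look something like this toy model:

```python
# Toy model of the idea sketched above: one node machine alternating its rounds
# between two "halved" subnets. This is NOT an existing IC mechanism, just an
# illustration of how a per-round schedule could split one machine's capacity.

from dataclasses import dataclass

@dataclass
class Subnet:
    name: str
    workload: str      # e.g. "storage-heavy" or "compute-heavy"

def rounds_for_node(subnets: list[Subnet], total_rounds: int):
    """Assign each consensus round of a shared node to one of its subnets."""
    for round_no in range(total_rounds):
        subnet = subnets[round_no % len(subnets)]   # simple round-robin split
        yield round_no, subnet.name, subnet.workload

if __name__ == "__main__":
    shared = [Subnet("subnet-A", "storage-heavy"), Subnet("subnet-B", "compute-heavy")]
    for r, name, kind in rounds_for_node(shared, 6):
        print(f"round {r}: participate in consensus for {name} ({kind})")
```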
My goal here is to understand these aspects better and discuss how we can proactively ensure that the Internet Computer’s subnet architecture remains efficient, cost-effective, and robustly scalable for the exciting growth ahead.
Looking forward to your insights and any clarifications!