Proposal 126769: Update list of default subnets

Hello community,
I wanted to provide some background on proposal 126769.

If accepted, this NNS proposal removes subnet o3ow2 from the list of default subnets.

What are default subnets?

When a developer creates a new canister using the cycle minting canister, it creates the canister on a random subnet from the default list. Removing a subnet from the default list does not mean that no new canisters can be created on that subnet. Developers who already have a canister on that subnet are still able to create more canisters there, but it stops the inflow of new developers to that subnet.

There is a more detailed explanation of how canister creation works here.
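As a rough illustration of the behaviour described above, here is a minimal Rust sketch. This is not the actual cycles minting canister code; the names `SubnetId` and `choose_subnet` are made up for the example. It only shows that removing o3ow2 from the default list changes the random draw for new developers, while creation by developers already on that subnet is unaffected.

```rust
// Purely illustrative sketch of the subnet selection described above.
use rand::seq::SliceRandom;

/// Stand-in type for a subnet identifier (hypothetical, for this example only).
#[derive(Debug, Clone)]
struct SubnetId(String);

/// Picks the target subnet for a new canister.
/// `default_subnets` is the list this proposal edits; `existing_subnet` is set
/// when the developer is creating on a subnet they already use.
fn choose_subnet(
    default_subnets: &[SubnetId],
    existing_subnet: Option<&SubnetId>,
) -> Option<SubnetId> {
    match existing_subnet {
        // Developers already on a subnet (e.g. o3ow2) can keep creating there.
        Some(subnet) => Some(subnet.clone()),
        // New developers get a random subnet from the default list only.
        None => default_subnets.choose(&mut rand::thread_rng()).cloned(),
    }
}

fn main() {
    let defaults = vec![SubnetId("subnet-a".into()), SubnetId("subnet-b".into())];
    // A brand-new developer: random subnet from the default list.
    println!("{:?}", choose_subnet(&defaults, None));
    // A developer already on o3ow2: stays on o3ow2, even if it is not in the list.
    println!("{:?}", choose_subnet(&defaults, Some(&SubnetId("o3ow2".into()))));
}
```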

Why o3ow2?

This subnet currently uses around 640GB of canister memory (see the dashboard). The current limit is 700GB, so it’s somewhat close.

That same subnet is currently running a special replica version in order to give the developer using most of that memory, HotOrNot (@saikatdas0790), the tools needed to reduce their canisters' memory consumption again. More details in this thread.

In the meantime, this situation can affect other developers on the same subnet. For example, in this thread (cc @skyhigh) a developer ran into an issue they would have avoided if their canister had been randomly assigned to a different subnet.

So as a QoL improvement, the proposal removes the almost-full subnet from the default subnet list for the time being, avoiding potential confusion for developers who would otherwise be randomly assigned there. The intention is to revisit the list of default subnets later, once o3ow2 is no longer an outlier.


At a glance it seems to me that for a subnet to make economic sense, it has to be able to hold terabytes of data, not the current 700 GB. A 13-node subnet at roughly 3k in rewards per node per month is around 42k dollars for 700 GB. Is there a plan to substantially increase those limits? What am I missing here?
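To make the back-of-the-envelope arithmetic explicit, here is a small sketch using the rough figures quoted above (13 nodes, about 3k USD per node per month, 700 GB state limit). These are the approximations from this post, not official reward figures:

```rust
// Rough cost-per-storage estimate using the figures quoted above.
// The per-node reward is an approximation from this post, not an official number.
fn main() {
    let nodes = 13.0_f64;
    let reward_per_node_usd = 3_000.0; // approximate monthly reward per node
    let state_limit_gb = 700.0;        // current per-subnet state limit

    let monthly_cost = nodes * reward_per_node_usd;  // roughly 39-42k USD/month
    let cost_per_gb = monthly_cost / state_limit_gb; // on the order of 55-60 USD/GB/month

    println!("subnet cost: ~{monthly_cost:.0} USD/month");
    println!("cost per GB of state: ~{cost_per_gb:.0} USD/GB/month");
}
```

Either way the cost works out to somewhere around 55 to 60 USD per GB of state per month, which is the gap the question is pointing at.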

@Kyle_Langham @domwoe @Accumulating.icp


Increasing the storage limit further is certainly a goal and an ongoing effort. For example, the roadmap lists 1TB as “in progress” and 3TB as “future”.

I don’t have a concrete timeline, but we are continuously removing technical blockers for more state. One project I am involved in myself is a rework of how large states are represented on a replica’s disk. It aims to provide the performance headroom for the next major bump, possibly close to that 3TB mark.
