Decentralisation and subnet size

In discussions with other people I often hear the claim that the Internet Computer is not decentralised. I think we have all heard this statement. In the end we have to admit that a subnet size of 13 to 40 nodes is indeed a relatively small number, and for most people it is an argument that needs to be taken seriously.

My question: why is it not possible to increase the size of subnets like the NNS, SNS and Fiduciary subnets to numbers like 100 or 1,000? Or, in other words, are there any limitations at the protocol level that prevent this?

I am convinced that increasing this number would help grow the acceptance of ICP.

It's possible to increase the size, but it's not clear there would be any security gains, and there would definitely be a performance hit. People are building L2s with a single sequencer; they need to fix that first.
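To put a rough number behind the "unclear security gains" part, here is a back-of-envelope sketch under my own simplifying assumptions (a BFT-style one-third fault bound, and each node being independently malicious with probability p, which ignores node-provider and data-centre correlations); this is not an official threat model.

```python
# Back-of-envelope sketch, not DFINITY's threat model: assume a BFT-style
# subnet of n replicas is broken once at least ceil(n/3) of them are malicious,
# and that every node is independently malicious with probability p.
from fractions import Fraction
from math import comb, ceil

def p_compromised(n: int, p: Fraction) -> float:
    threshold = ceil(n / 3)  # smallest faulty count that breaks the < n/3 bound
    tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold, n + 1))
    return float(tail)  # exact rational tail, converted to float only for display

for p in (Fraction(1, 10), Fraction(1, 5)):
    for n in (13, 40, 100, 1000):
        print(f"malicious fraction {float(p):.1f}  n={n:4d}  "
              f"P(subnet has >= n/3 malicious nodes) ~ {p_compromised(n, p):.1e}")
```

Whether the extra nodes buy anything under this model hinges entirely on what fraction of the node pool you assume could be malicious or coerced, which is really what the decentralisation debate is about.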

It’s not about appealing to the mainstream by increasing subnet sizes; rather, it’s about ensuring sufficient decentralization. The NNS can disconnect nodes if any tampering is detected.

The mainstream still views blockchain through a dinosaur’s lens, but ICP is doing its own thing: it’s actually an affordable way to run your software fully on-chain.

As an analogy, you can secure your house with 1 billion locks to make a break-in very unlikely. To the mainstream this might still seem insufficient, and they might insist on securing it with at least 1 trillion locks. Besides making the process of securing the house much longer, this also wastes a lot of resources unnecessarily.

What I would like to see on the long-term roadmap is node shuffling.
However, because ICP is not just a ledger but also holds a vast amount of data, this is easier said than done if it has to be implemented without wasting too many resources and too much bandwidth.
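To make the bandwidth concern concrete, here is a toy sketch of what naively re-dealing every node into a new subnet each epoch would cost in state transfer. All numbers (pool size, subnet layout, state size) are made up, and node shuffling is not an implemented ICP feature.

```python
# Toy illustration only; all figures below are hypothetical.
# Re-dealing the whole pool each epoch means almost every node changes subnet,
# and every moved node has to sync its new subnet's full replicated state.
import random

SUBNET_SIZES = [40] * 13          # hypothetical layout: 13 subnets of 40 nodes
NUM_NODES = sum(SUBNET_SIZES)     # 520 nodes in the pool
STATE_GB_PER_SUBNET = 400         # hypothetical replicated state per subnet

def deal(nodes: list[int], rng: random.Random) -> dict[int, int]:
    """Randomly assign every node in the pool to a subnet."""
    shuffled = rng.sample(nodes, k=len(nodes))
    assignment, i = {}, 0
    for subnet_id, size in enumerate(SUBNET_SIZES):
        for node in shuffled[i:i + size]:
            assignment[node] = subnet_id
        i += size
    return assignment

rng = random.Random(0)
nodes = list(range(NUM_NODES))
before, after = deal(nodes, rng), deal(nodes, rng)
moved = sum(1 for n in nodes if before[n] != after[n])
print(f"nodes that changed subnet: {moved}/{NUM_NODES}")
print(f"state to re-sync this epoch: ~{moved * STATE_GB_PER_SUBNET:,} GB")
```

With a full re-deal, roughly 12 out of every 13 nodes land in a different subnet, so almost the entire replicated state of the network would have to be shipped around every epoch; any practical scheme would presumably shuffle only a small fraction of nodes at a time.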


I wonder what the limitations are and how they would affect the cost and throughput of the platform. For example, let us imagine a subnet with 1,000 replicas. What would be the effects for this subnet and for the whole platform?

My understanding is that the latency of consensus would increase, and the cost would grow at least linearly with each node added, since you have to cover the network and compute resources of every additional machine.
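For intuition, here is a quick sketch of how the linear cost term and a roughly quadratic per-pair traffic term compare at 13, 40 and 1,000 replicas. The per-node cost and message sizes are made-up placeholders, not real ICP figures; only the shape of the curves is the point.

```python
# Very rough scaling sketch; all constants are hypothetical placeholders.
# Infrastructure cost grows linearly in n, while any protocol step in which
# replicas exchange per-pair messages (signature shares, gossip adverts, ...)
# grows roughly quadratically.
MONTHLY_COST_PER_NODE_USD = 1_500   # hypothetical all-in cost of one replica
BLOCK_KB = 1_000                    # hypothetical block payload
PER_PAIR_MSG_KB = 1                 # hypothetical small per-pair protocol message

def subnet_profile(n: int) -> tuple[int, float, float]:
    monthly_cost = n * MONTHLY_COST_PER_NODE_USD
    block_distribution_mb = n * BLOCK_KB / 1024           # each replica receives the block
    pairwise_traffic_mb = n * (n - 1) * PER_PAIR_MSG_KB / 1024
    return monthly_cost, block_distribution_mb, pairwise_traffic_mb

for n in (13, 40, 1000):
    cost, block_mb, pair_mb = subnet_profile(n)
    print(f"n={n:5d}  cost ~${cost:>9,}/month  "
          f"block fan-out ~{block_mb:7.1f} MB  per-pair traffic ~{pair_mb:8.1f} MB per round")
```

Under these placeholder numbers a 1,000-node subnet costs roughly 75 times as much to run as a 13-node one, and the per-pair traffic per round grows by a factor of several thousand, which is one way to read the "performance hit" mentioned above.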

If there aren’t any limitations, why not increase the size of the Fiduciary subnet? I assume such canisters just contain the logic for ledgers, which means a small memory footprint and little computation.