Subnet Management - 4zbus (Application)

Thanks @sat, my last comment was a bit of a brain fart (caused by an off-by-one error in my head). I expected the country coefficient update to look like this,

  • country: 1.00 → 2.00 (+100%), rather than this,
  • country: 2.00 → 2.00 (+0%),

because I was considering 4 malicious nodes to be needed to take down a subnet (such as the 4 in the US prior to this proposal executing), whereas 4 is of course the max number that can be tolerated (not the min number that cannot be tolerated).
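The max-tolerated vs min-needed distinction above comes down to simple arithmetic. A minimal sketch (illustrative only, assuming the standard BFT bound of fewer than one third faulty nodes):

```python
# Illustrative sketch: Byzantine fault tolerance for an n-node subnet,
# assuming the usual bound f < n/3 (so f = floor((n - 1) / 3)).

def max_tolerated_faults(n: int) -> int:
    """Maximum number of faulty nodes an n-node subnet can tolerate."""
    return (n - 1) // 3

n = 13                              # a typical 13-node subnet
f = max_tolerated_faults(n)
print(f)                            # 4: the max that CAN be tolerated
print(f + 1)                        # 5: the min that CANNOT be tolerated
```

With 13 nodes, 4 malicious nodes (such as the 4 in the US) can still be tolerated; it takes 5 to break the subnet.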


That being said, I stand by the points that I raised in my main post. In order to comply with formal subnet limits, the Nakamoto coefficient for the country characteristic needs to be 3. Hence 2 is too small (it would require the nodes of only 2 countries to collude in order to theoretically attack the subnet). This is documented in more detail here →

^ Given this, I would reiterate my point.

I believe an additional proposal is needed to get this subnet back within the acceptable limit according to the current and revised target IC topology (a limit of 2 nodes per country, meaning a minimum country Nakamoto coefficient of 3).
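To make the relationship concrete, here's a hedged sketch of how a country Nakamoto coefficient can be computed from per-country node counts (the country names and counts are hypothetical, not this subnet's actual data):

```python
# Hedged sketch: the smallest number of countries whose combined nodes
# exceed the fault limit f = floor((n - 1) / 3).

def nakamoto_coefficient(counts: dict[str, int]) -> int:
    """Min number of groups whose nodes together exceed the fault limit."""
    n = sum(counts.values())
    f = (n - 1) // 3                    # max tolerated faulty nodes
    taken = 0
    for i, c in enumerate(sorted(counts.values(), reverse=True), 1):
        taken += c
        if taken > f:                   # enough nodes to break consensus
            return i
    return len(counts)

# 13 nodes with one country holding 4 (like the pre-proposal US case):
print(nakamoto_coefficient(
    {"US": 4, "CH": 1, "DE": 1, "JP": 1, "IN": 1,
     "SG": 1, "CA": 1, "FR": 1, "SE": 1, "BE": 1}))   # → 2

# 13 nodes respecting the 2-nodes-per-country limit:
print(nakamoto_coefficient(
    {"US": 2, "CH": 2, "DE": 2, "JP": 2, "IN": 2,
     "SG": 2, "CA": 1}))                              # → 3
```

With at most 2 nodes per country, any 2 countries hold at most 4 nodes (≤ the fault limit of 4 for a 13-node subnet), so the country Nakamoto coefficient is necessarily at least 3.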

I wouldn’t be surprised if other subnets have similar issues (I’ll run some analysis when I get a chance, maybe tomorrow).


Regarding the unassigned South Africa and Australia nodes, I was already filtering out nodes using the strictest of constraints (only returning unassigned nodes that did not share a single characteristic with the existing subnet nodes, i.e. continent, country, node provider, data centre, owner).

These nodes could therefore only have had no effect or a positive impact on the Nakamoto coefficients of the subnet (not a negative impact). Here are a handful of example candidates:
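The strict filter described above could be sketched like this (the field names are assumptions for illustration, not the registry's actual schema):

```python
# Sketch of the strict candidate filter: keep only unassigned nodes that
# share no characteristic with any node already in the subnet.

CHARACTERISTICS = ("continent", "country", "node_provider",
                   "data_centre", "owner")

def strict_candidates(unassigned: list[dict], subnet: list[dict]) -> list[dict]:
    # Collect the characteristic values already present in the subnet.
    used = {c: {node[c] for node in subnet} for c in CHARACTERISTICS}
    # A candidate qualifies only if none of its values are already used.
    return [node for node in unassigned
            if all(node[c] not in used[c] for c in CHARACTERISTICS)]
```

Because a surviving candidate introduces only new values for every characteristic, adding it can never reduce the count held by any existing group, which is why such a swap can only leave the Nakamoto coefficients unchanged or improve them.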

This approach is problematic. For example, none of the unassigned nodes illustrated on the map that I rendered above would result in an improved Nakamoto coefficient on its own (for anything other than continent). This doesn't mean that they wouldn't improve decentralisation (essentially making it easier to improve the other Nakamoto coefficients with subsequent node swaps in the future).

I think the crux of the issue is that the Nakamoto coefficients are throwing away information because they’re discretised. This is demonstrated by the fact that they weren’t even affected by this proposal. I think it would probably make more sense to optimise for subnet limits directly, and therefore indirectly optimise for the Nakamoto coefficients (so that all information is available and considered during the optimisation procedure).
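A toy example of the information loss: a continuous "limit violation" score can distinguish two subnet states that a discretised coefficient treats as identical. The scoring rule below is my own illustration, not the actual NNS objective, and the country counts are hypothetical:

```python
# Sketch: total nodes in excess of the per-country limit, as a
# continuous measure of distance from the formal subnet limits.

def limit_excess(counts: dict[str, int], limit: int = 2) -> int:
    """Total nodes above the per-country limit across all countries."""
    return sum(max(0, c - limit) for c in counts.values())

before = {"US": 4, "DE": 3, "CH": 2, "JP": 2, "IN": 2}
after_ = {"US": 3, "DE": 3, "CH": 2, "JP": 2, "IN": 2, "SE": 1}

# The country Nakamoto coefficient is 2 in both states (in each case
# the two largest countries together exceed the fault limit of 4), yet
# the swap clearly moved the subnet toward compliance:
print(limit_excess(before))   # 3 (US is 2 over, DE is 1 over)
print(limit_excess(after_))   # 2 (US is 1 over, DE is 1 over)
```

An optimiser driven by the discrete coefficient would see no progress from this swap, whereas one driven by the limits themselves would.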
