Proposal to change Neuron Following Limits

Context: Considering Factors that Affect Scalability

As we prepare to migrate neurons to stable memory early next year, we have been considering the longer-term scalability of the NNS voting system.

A major factor in the upper bound on how many neurons can be stored and can participate is the size of the neurons, as well as the ways neurons are used in computations.

Misunderstandings about Following

Neuron following is a widely used and often misunderstood feature of neurons, and it is the source of most of the cost of processing votes. (See also the NNS Dapp guide for context.)

Many users set up following expecting that following more than one neuron will result in missing fewer votes. In fact, following more than one neuron on a topic means that a majority of the followed neurons need to agree in order for your neuron to follow.
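
To make that rule concrete, here is a minimal sketch of the majority behaviour described above (illustrative only, not the governance canister’s actual code; the names and types are assumptions):

```rust
// Minimal sketch of the following rule described above; not the actual
// governance canister code. Names and types are illustrative only.

#[derive(Clone, Copy, PartialEq, Debug)]
enum Vote {
    Yes,
    No,
}

/// Returns the vote a neuron would cast by following, or None if the
/// followees do not (yet) form a strict majority for either side.
fn follow_vote(followee_votes: &[Option<Vote>]) -> Option<Vote> {
    let total = followee_votes.len();
    let yes = followee_votes.iter().filter(|v| **v == Some(Vote::Yes)).count();
    let no = followee_votes.iter().filter(|v| **v == Some(Vote::No)).count();

    // A strict majority of *all* followees must agree, so abstaining
    // followees can leave the neuron without a vote.
    if yes * 2 > total {
        Some(Vote::Yes)
    } else if no * 2 > total {
        Some(Vote::No)
    } else {
        None
    }
}

fn main() {
    // With a single followee, the neuron simply mirrors that followee.
    assert_eq!(follow_vote(&[Some(Vote::Yes)]), Some(Vote::Yes));

    // With three followees, two must agree; one Yes and two abstentions
    // leave the neuron without a vote (and without voting rewards).
    assert_eq!(follow_vote(&[Some(Vote::Yes), None, None]), None);
}
```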

For neurons that are only optimizing for rewards, having more than one followee works against the interest of the neuron holder.

If a neuron wanted to vote according to a broader community, following a handful of organizations’ beacon neurons might be desirable, even if it meant risking missing some votes.

Beacon neurons, which may be managed by a large number of voting principals, are the one use case where a large collection of followees makes sense.

Impact of Following on Scaling

To prepare for future growth and make upgrades safer, we are moving data into stable memory. However, this means some operations will take longer. Vote cascading is one of those operations, and we are looking for ways to improve its worst-case scenarios. Limiting followees is the highest-impact change we can make.

While the actual computational costs depend on the data actually stored in the canister, the limits built into the canister to protect against attacks must be based on the worst-case scenarios those limits allow. In the interest of keeping the governance canister responsive as we scale, we would like to lower the number of followees a neuron can have in the normal case.
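
As a rough, back-of-the-envelope illustration of why the followee limit dominates the worst case (the neuron count below is a placeholder, not an actual canister figure):

```rust
// Back-of-the-envelope sketch, not the governance canister's cost model.
// It only shows how the worst-case amount of followee data that vote
// cascading might have to examine scales with the per-topic limit.

/// Upper bound on followee entries for a single topic.
fn max_followee_entries(neuron_count: u64, followees_per_topic: u64) -> u64 {
    neuron_count * followees_per_topic
}

fn main() {
    // Placeholder neuron count, purely for illustration.
    let neurons = 1_000_000u64;

    let current = max_followee_entries(neurons, 15); // current per-topic limit
    let proposed = max_followee_entries(neurons, 5); // proposed per-topic limit

    println!("worst case per topic, current limit:  {} entries", current);
    println!("worst case per topic, proposed limit: {} entries", proposed);
    // The proposal cuts the per-topic worst case to one third.
}
```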

Proposed Changes

Therefore, we propose the following changes:

  1. Limit normal neurons to no more than 5 followees per topic and 35 followees total (this allows for exceptions to each rule, and still allows following enough high-quality neurons to let your vote be decided democratically).

  2. Optionally, for neurons that need the ability to keep the full 15 followees on all topics, charge a one-time fee large enough to discourage idle use of the feature, but not so large that it prevents organizations from setting up beacon neurons.

For existing neurons with more followees, the following configuration would not be affected until it next needs to be changed. Users would have to get back under the limits in order to add new followees to existing neurons.
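
For readers who prefer to see the rule spelled out, a sketch of how the check might look is below; the constants mirror the proposed limits, but the function name, parameters, and error handling are hypothetical rather than the actual governance API.

```rust
// Hypothetical sketch of the proposed limit check; the function name,
// parameters, and error handling are illustrative, not the governance
// canister's actual API.

const MAX_FOLLOWEES_PER_TOPIC: usize = 5;
const MAX_FOLLOWEES_TOTAL: usize = 35;

/// Validates a *new* followee configuration for one topic. Existing
/// configurations that already exceed the limits are grandfathered and
/// only need to come under the limits the next time they are changed.
fn validate_set_followees(
    new_followees_for_topic: &[u64],  // neuron IDs being set for this topic
    followees_on_other_topics: usize, // current count across all other topics
) -> Result<(), String> {
    if new_followees_for_topic.len() > MAX_FOLLOWEES_PER_TOPIC {
        return Err(format!(
            "at most {} followees are allowed per topic",
            MAX_FOLLOWEES_PER_TOPIC
        ));
    }
    if new_followees_for_topic.len() + followees_on_other_topics > MAX_FOLLOWEES_TOTAL {
        return Err(format!(
            "at most {} followees are allowed across all topics",
            MAX_FOLLOWEES_TOTAL
        ));
    }
    Ok(())
}
```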

We believe this would affect a very small number of existing users.

Looking forward to a thoughtful discussion!

6 Likes

These changes seem reasonable. I appreciate the fact that consideration has been given to neurons that need more than 5 Followees. CodeGov and Synapse both fall into this category.

CodeGov uses up to 10 Followees for IC-OS Version Election and Protocol Canister Management and may one day have the funding to add up to that many for several other topics. I would also want the ability to add up to 15 Followees for Neuron Management. Synapse currently utilizes up to 15 Followees for the Governance, SNS & Neurons Fund, and Neuron Management proposal topics. In both cases, one to three other neurons are configured for all other topics.

It sounds like all of these scenarios are covered. Please advise if I have misinterpreted the application of this change to these neurons. To be honest, I would love to have the funding to have up to 15 reviewers for each proposal topic for CodeGov, but that’s definitely a stretch. I just wouldn’t want the limits to be so restrictive that we can’t form a solid team of reviewers for each topic.

I don’t mind paying a one-time fee for each of these neurons. Proper neuron configuration is fully understood and actively used for maximum benefit (credibility and reliability) in our governance strategies at both CodeGov and Synapse.

For reference, the neuron configuration for CodeGov can be found here and the configuration for Synapse can be found here.

Also, will this apply to the SNS framework as well or just the NNS? CodeGov has already started creating publicly disclosed neurons for SNS projects such as WaterNeuron and KongSwap and I would like to grow this list in the future.

1 Like

Hi @msumme,

Thanks for the announcement and explanation. Can I ask roughly how much the one-time fee would be, or at least an upper limit for how much we can expect it to be?

D-QUORUM has 15 co-founders who are all followees on the Neuron Management topic. I’d like to think it will have a similar number of followees on other topics in the future.

1 Like

Currently, this will only apply to the NNS. There isn’t quite as much of a need in the SNS, as SNSes aren’t migrating neurons to stable memory yet. Eventually, there will probably be a mechanism available in SNSes as well.

1 Like

This is a great question. We haven’t decided yet. We’ll be doing some analysis to figure out what it ought to be. We don’t want it to be onerous, but do want to make it high enough to discourage unnecessary usage.

1 Like

Okay, thanks. I think this is an important detail (it’s difficult to have an opinion on this without knowing how expensive the currently free functionality will be). Are you planning to pin down something more specific that can be shared before proceeding with this proposal and/or its implementation?

What about taking advantage of heap memory’s performance and Motoko’s new Enhanced Orthogonal Persistence feature coupled with wasm64?

Migrating this functionality to Motoko would eliminate the need to put limitations on following, since I’d anticipate that wasm64 heap memory capacity is going to grow significantly this year, and it would avoid future performance bottlenecks related to traversing BTree-like data structures and repeated reads from stable memory. This could potentially be the best way forward if performance is the primary goal.

In terms of data upgrade safety, canister snapshots help a great deal here, as copies can be taken immediately before each upgrade and restored immediately in the case of a bad upgrade.

And then in terms of the performance of heap upgrades with a lot of data, upgrades that use Enhanced Orthogonal Persistence are instant: there’s no downtime related to serializing to stable memory or deserializing back from stable memory into the heap.

This would also be a great opportunity to start building more core system canisters in Motoko, the language of the Internet Computer, as well as to help the language mature, drive it toward corporate adoption, and provide more examples to generate AI code from.

I’d be happy to help consult and contribute to this initiative if there’s interest from the foundation.

3 Likes

This is something that we discussed, but it would be a massive effort to migrate to Motoko, and there doesn’t appear to be an incremental path. In practice, this would mean continuing to develop new features in Rust while duplicating those efforts in Motoko as functionality is migrated. This is not something we’re going to do this year, although at some point in the future there might be enough reasons to consider it.

The most likely path in the near future is that we add a heap cache once the heap is no longer hosting critical data (i.e. when it is not a source of truth).

2 Likes

Sorry I missed this question earlier.

Yes, we’re planning to pin something down and share it before we proceed.

We were discussing this internally, and @bjoernek suggested raising the minimum stake / dissolve delay for neurons that need extended following.

As a possible value, 10 ICP staked for at least 1 year would be required to have more than the normal number of followees.

In that case, there would not be a fee (i.e. no ICP would be burned), but rather just an indication that the neuron has some skin in the game.
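
If it helps to see that idea spelled out, a sketch of such an eligibility check might look like the following; the thresholds are just the values floated above, and the field names as well as the reading of “staked for at least 1 year” as a minimum dissolve delay are illustrative, not a final design.

```rust
// Sketch of the eligibility check floated above; thresholds, field names,
// and the dissolve-delay interpretation are illustrative, not final.

const E8S_PER_ICP: u64 = 100_000_000;
const MIN_STAKE_E8S: u64 = 10 * E8S_PER_ICP; // 10 ICP
const MIN_DISSOLVE_DELAY_SECONDS: u64 = 365 * 24 * 60 * 60; // ~1 year

struct NeuronInfo {
    stake_e8s: u64,
    dissolve_delay_seconds: u64,
}

/// Whether a neuron would qualify for more than the normal number of
/// followees under the "skin in the game" variant (no ICP burned).
fn eligible_for_extended_following(neuron: &NeuronInfo) -> bool {
    neuron.stake_e8s >= MIN_STAKE_E8S
        && neuron.dissolve_delay_seconds >= MIN_DISSOLVE_DELAY_SECONDS
}
```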

What do you all think?

5 Likes

Sounds like a good solution to me, thanks @msumme and @bjoernek. I guess these thresholds can also be adjusted in the future if ever needed.

2 Likes

This would be fine with me. Seems like a very reasonable fee-like alternative.

2 Likes

On the other hand, doesn’t this represent the majority (rather than the minority) of neurons?