Reevaluating Neuron Control Restrictions

I don’t understand why we need PoK to achieve this. It seems like a good initiative—removing the workaround for canisters controlling neurons—while at the same time adding some unnecessary complexity and penalizing some neurons. Why can’t we simply allow canisters to control neurons, period?

2 Likes

I agree. That would pose a security risk that a resilient ecosystem couldn’t sustain.

Giving DAO controllers a way to exercise selectivity and discernment about who is allowed to participate in the governance of the digital service they share was considered and accounted for.

The result is a system in which control of the neuron is determined proportionally to how much each Principal has contributed to that neuron.

The discussion surrounding the non-transferability of neurons within the ICP ecosystem raises important considerations that deserve a comprehensive review. The intent behind fostering long-term commitment and decision-making is important; however, several aspects could be addressed:

  1. Governance Dynamics
    Assuming neuron holders will maintain indefinite interest or capability in governance overlooks inevitable life changes and mortality. Humans are not immortal, and our ability to contribute to projects like the ICP is naturally finite. Transferability ensures governance remains dynamic, passing the responsibility to those currently engaged and capable of steering the protocol’s future when others are no longer able to contribute.

  2. Fairness and Motivation
    The notion of being “imprisoned” might be effective for achieving a definite task with a clear target, but such a lockup in a broader governance context—one requiring evolving goals and long-term engagement—might be highly demotivating. Participants’ influence wanes as they begin to dissolve their neurons, which highlights the need for a system that allows stakeholders to exit on their own terms. The possibility of selling a neuron recognizes their contributions without making them dependent on the governance outcomes decided by the remaining participants, especially when they have to wait for 8 years.

  3. Long-Term vs. Short-Term
    The assumption that non-transferable neurons inherently lead to better long-term decision-making overlooks the complexity of predicting the impact of current proposals. While emphasizing the importance of the long term by locking users into their decisions may seem like a way to protect the protocol’s future, it does not shield the protocol from harmful and damaging impacts in the short term.

  4. Coordination Risks
    Relying solely on non-transferability to safeguard against malicious actors ignores the potential for influence through non-financial means, such as accumulating followership for coordinated governance attacks. This approach does not fully protect the protocol from all forms of manipulative behavior.

  5. Market Dynamics
    The existence of a market for Internet Identity (II) and the observed behaviors around neuron transferability indicate a natural demand for flexibility in participation rights. A healthy, free market could incentivize constructive contributions to the protocol’s success, as the value of participants’ stakes is inherently tied to the ICP’s overall prosperity, future potential, and neuron attractiveness for potential buyers.

5 Likes

Who would be the entity that records that dox list? Would this require giving said entity some sort of special privileges to waive fees for known neurons that they deem to be sufficiently dox’ed?

I would add here. All neurons should be equal. We already take into account dissolve delay, stake, and age, which are more than enough. Today, we are discussing penalizing neurons created using a canister, for example, for liquid staking or within canister-wallets; what’s next, penalizing individual neurons that the majority dislikes? That’s a path leading nowhere.

4 Likes

I believe the primary challenge is understanding the impact on security when transferring a neuron. In my opinion, the effect is minimal, given that any neuron transfer requires a buyer.

The example provided seems a bit extreme because it involves a flash loan. However, this particular issue can be mitigated by simply delaying the execution. I expect that most stakeholders will avoid markets that could potentially concentrate power and act maliciously.

Lastly, the option to exit a DAO is particularly appealing to me, reminiscent of the Moloch DAOs in the Ethereum ecosystem.

My Twitter feed keeps blowing up with supporters of IDGeek claiming this proposal would break their ability to sell/transfer neurons.

I don’t actually see anything to suggest as much so I’m just wondering if someone here can help me understand their concern.

I see a few options:

  1. The NNS. I’m not in favor of giving more and more responsibility to the NNS, but it is the most logical and obvious place to put the list, especially since it hosts the code that actually awards neurons. Keeping the canonical list on the same canister significantly reduces the cost of the calculation and reduces integration costs.

  2. A purpose-built SNS. Hand control of the list over to an SNS where people can engage by buying the token that secures the list and are rewarded with a) forfeited ICP of those that violate the smart contract (maybe… I don’t like sticks when carrots will do), b) inflation, or c) some reward from the NNS for helping further secure the network (we do it for nodes, why not other forms of security? The negative is more inflation… it needs to be worth it).

  3. Some kind of permissionless system. If you can permissionlessly apply the binding of the controller to your canister, that is sufficient for people that 1. are doxed and can later prove they own the neuron (perhaps we have an ICRC standard with a function that must be called monthly by a PoK principal that is the controller?) or 2. can prove it is an SNS. (Maybe SNSes need a flag called ‘sufficiently decentralized’ that the NNS can manage. For example, unless the Dragginz team has divested significantly, I’d consider that a ‘not sufficiently decentralized’ SNS.) @bjoernek Would a canister controlled solely by a principal that had PoK and that was ‘bound’ to that controller be considered equivalent to a principal with PoK?

I prefer permissionless if possible.

1 Like

The main concern, as I understand it, is highlighted here:

There are three proposals cleverly concealed behind the enticing headline ‘let’s enable canisters to control neurons’, which is where the majority might stop reading. However, it goes on to suggest penalizing neurons controlled by canisters with a 20% decrease in rewards (and even proposes mid-term penalties for new neurons controlled by II). Following this, there’s a vague mention of ‘introducing some PoK tech stuff’ that’s not clearly explained and likely eliminates the secure transfer of neurons altogether.

1 Like

I read the proposal and while I acknowledge the voting power reduction I didn’t see anything that would prevent the transfer of neurons through II.

This would seem to protect existing neurons from the PoK penalty.

2 Likes

Yes that is correct!

Furthermore (since you asked about II controlled neurons), for newly created II controlled neurons, we had suggested the following: “Due to BLS reliance, these neurons cannot participate in the PoK scheme directly. An alternative might involve assigning a non-modifiable “disbursement key,” enhancing rewards while introducing a mechanism that complicates neuron transfer, thus indirectly bolstering non-transferability.”

Hence the idea is that newly created II neurons can also receive full rewards, subject to the usage of a disbursement key (or a similar mechanism).

2 Likes

As a thought exercise (I’ve made some assumptions to set a hard right edge of a gradient… this is not a realistic scenario, but it is on a gradient that we are experiencing now with ID marketplaces):

A VC neuron of 50,000,000 ICP locked for 8 years votes with significant voting power for 2 years. Let’s say the rate of maturity is 14% (28% because it is locked for 8 years) for those two years, to simplify the thought experiment. So the NNS (all of us) pays this VC neuron 28,000,000 ICP over two years, assuming they don’t reinvest, because our assumption is that they are securing the network by voting in the network’s best interests for at least the next 8 years (the deal they made with the NNS in forming the neuron with an 8-year lock).

After this two-year period, we discover that they have sold the ID securing their neuron to a third party for 40,000,000 ICP. As a net, they have profited 18,000,000 ICP. Further, we learn that the contract for this sale was signed before they created the neuron. Not only have they not been voting for the long-term health of the network that they signed up for, but they have also knowingly been voting for a specific horizon of 2 years. If they had only staked for 2 years they may still have profited by about 600,000 ICP. So the NNS overpaid for the security they actually provided by 17.4M ICP.
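The arithmetic of the thought experiment above can be laid out explicitly. This is a minimal sketch using the post’s own simplifying assumptions (the 28% annual maturity rate for an 8-year lock and the ~0.6% implied 2-year rate are the post’s illustrative figures, not the actual NNS reward curve):

```python
# Back-of-the-envelope reconstruction of the VC neuron thought experiment.
# All figures are the post's assumptions, not real NNS parameters.

STAKE = 50_000_000          # ICP locked in the neuron
RATE_8Y = 0.28              # assumed annual maturity rate with an 8-year lock
RATE_2Y = 0.006             # implied annual rate yielding ~600k profit over 2 years
YEARS_VOTED = 2
SALE_PRICE = 40_000_000     # ICP received for selling the ID securing the neuron

rewards = STAKE * RATE_8Y * YEARS_VOTED        # ICP the NNS paid out over 2 years
net_profit = SALE_PRICE - STAKE + rewards      # seller's overall gain
honest_profit = STAKE * RATE_2Y * YEARS_VOTED  # profit of an honest 2-year stake
overpayment = net_profit - honest_profit       # how much the NNS overpaid

print(f"rewards paid:  {rewards:,.0f} ICP")    # ~28,000,000
print(f"net profit:    {net_profit:,.0f} ICP") # ~18,000,000
print(f"overpayment:   {overpayment:,.0f} ICP")# ~17,400,000
```

The 17.4M figure falls out directly: the seller nets 18M where an honest 2-year staker would have netted 0.6M.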

In any rational legal system, evidence that you signed a contract with party A and a conflicting contract with party B would constitute a fairly straightforward conviction for fraud, with hard damages of at least 17.4M ICP—and potentially more if there is evidence that the VC manipulated the price to be optimal when they liquidated, at the cost of the price further down the line. Additional damages would likely be recovered from the third party that enabled the fraud.

This is clear and explicit fraud if these were contracts. Of course, we don’t have contracts; we have code, and code is law. But I’d say that if your code enables this kind of clear common-law fraud, you’re pretty doomed, and the network has some responsibility to keep this from occurring. Can this behavior be dissuaded via code? Maybe. Does it make the system too complicated? Maybe. Would punitive damages work? It wouldn’t do much good to penalize the VC after the fact, because they’ve liquidated their assets and likely moved them elsewhere. Can we dissuade it by being punitive to anyone who enables the fraud against the NNS? Freezing, locking, or burning neurons that can be proved to have been connected to a sale would probably persuade most people not to buy neurons. Probably not everyone, but certainly most people who want to de-risk neuron ownership. This punitive action is likely the left edge of a “what to do about it” gradient of action. So if you want to stay in the ‘do nothing to the code’ camp and still have a network that doesn’t enable simple methods of fraud against the participants, then yes, penalizing individual neurons is likely the most direct way to keep this fraud from happening.

Now, the participants in neuron marketplaces are not as far to the right as our VC in obvious fraud, but they are on the gradient. The sellers are committing a form of fraud against the rest of the NNS for their own gain, and the people buying those neurons are enabling it. We might say that in some cases, where someone gets cancer or faces some harrowing emergency, we’ll grant a humanitarian dispensation, but that doesn’t change the fact that if the neuron keeps voting after its owner knows they’ll have to liquidate, they are defrauding the network of ICP they are not due according to the ‘spirit of the code.’ The same goes for people who ‘always had the best interests of the network at heart’ but decided one day to sell. If they keep voting after that date, the fraud begins…

So what do we do about it? 1. let’s make that not possible or 2. change the contract or 3. change the form of security.

  1. Not possible: An ideal solution doesn’t involve punitive action but makes the fraud impossible in the first place. I don’t want punitive action, and I don’t want a network that regularly has to do anything like freezing or confiscating funds. That leaves us with a requirement to change the code and behavior of our network. PoK for individuals and a whitelist for known organizational neurons seems like a good path forward to me. I think the penalty for not being in one of these camps should not be a 20% reduction, but maybe a more significant restriction, like only being able to vote on certain topics (see 2).

  2. Change the contract. A few options: a) You can bail on an ETH stake at any time because the security it provides is temporal. The thing you are voting on has no long-term effect; the epochs are either endorsed or not. The IC has a ton of things like this that we have voted on, like exchange rates that expire after a few minutes, or node reward payouts that once paid are paid… but we also have a ton of things that have no temporal end. If you vote to add a node to the network, that node is going to be active for a long time, and enduring the market effects of that decision may take a while. If the things we vote on had terms, and your stake was locked until those terms expired, it would make people more selective in what they vote on. And if you only vote on short-term temporal things, then you can exit in short order. This is unfortunately complicated, and you lose the simple narrative of 8 years = 18%.

  3. Change the form of security: A much bigger discussion that should likely only be engaged if we find that the fundamental assumptions of the NNS begin to fail or create excessive risk.

I’m open to arguments that my thought exercise is flawed in some way, so if you see a flaw, please say something. Giving canisters the right to hold neurons is, I think, super important, and I agree that we’re not under any immediate threat to network security. But over the long term I do believe these kinds of situations will occur (or others that we can’t think of now), and we should begin making plans now, if for no other reason than that someone may think they can fund an organization with X% rewards, then it all changes, and they end up getting X-Y%, which is unviable.

5 Likes

I think this is worth clarifying: The proposal suggests incentivising non-transferability of newly created neurons by allocating higher rewards to those with an enhanced binding between neuron and neuron controller. A transfer of a neuron without this enhanced binding would remain possible (for example by selling a canister that controls the neuron, or by selling an II that has no disbursement key).

1 Like

Sorry, I’m confused. It seems you’re combining two issues in one statement (not to mention bundling three proposals into one). Does this mean all new neurons will be non-transferable by default? Or is there supposed to be a trade-off: less rewards for transferability, and more rewards for non-transferability?

This will work only if a smart contract can determine whether a neuron is transferable by checking it.

No. Canister controlled neurons would be effectively transferable (because you can sell the controlling canister) and II-controlled neurons would still be effectively transferable (because you can sell the II). However, if you choose for a newly created II-controlled neuron to add a disbursement key (so that you get higher rewards), then it would become much less attractive to buy that neuron, as the seller would still know the disbursement key even after selling it.

Yes, there is supposed to be a trade-off. If you can prove non-transferability to the protocol, you will get higher rewards; otherwise you will get lower rewards.

3 Likes

Will it be possible in the future to determine if a neuron possesses a disbursement key? Furthermore, once the feature is implemented and all three phases are completed, could the same ‘disbursement key’ optionally be applied to all neurons, not just the new ones, under II control?

To clarify that I understand correctly: does this statement also apply to neurons that were created and are managed by a canister? Will there be an opportunity for neurons created and controlled by a canister to specify a disbursement key?

Thanks for contributing, Aija… I’ll take a turn at addressing these aspects.

I agree that it should be dynamic. Not to bring up a sore subject, but regular confirmation of delegates (every X months you need to log in and confirm your following to continue passive voting) is a much better solution to this than letting people sell. If you die or become disinterested, your influence is removed at regular intervals.
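The periodic re-confirmation idea can be sketched minimally. This is a hypothetical illustration, not any existing NNS mechanism; the field names and the 6-month window are assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: a follow relationship lapses unless the follower
# re-confirms it within a fixed window, so absent or deceased participants
# stop contributing passive voting power. The 6-month window is illustrative.
CONFIRM_WINDOW = timedelta(days=182)

def active_followings(followings, now):
    """Keep only followings confirmed within the window; stale ones lapse."""
    return {neuron: last for neuron, last in followings.items()
            if now - last <= CONFIRM_WINDOW}

now = datetime(2024, 6, 1)
followings = {
    "neuron-a": datetime(2024, 3, 1),   # confirmed 3 months ago -> kept
    "neuron-b": datetime(2023, 1, 1),   # stale for over a year -> lapses
}
print(sorted(active_followings(followings, now)))  # ['neuron-a']
```

The point is that influence decays automatically unless actively renewed, which addresses the mortality concern without making neurons transferable.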

Of course, you don’t know the future, but you should be doing a risk calculation as to whether the reward is worth the commitment. Don’t stake for 8 years if your risk factor over that time period can’t endure it. Sometimes you’ll calculate wrong, but that is like any form of capital.

I disagree with most of point 2 as the system proportionally decreases your influence as you have less to gain and are closer to exit by design. I’m not sure why we want to subvert that fundamental assumption or why we would assume that the system would continue to work as designed with that subversion in place. Not being able to exit on your own terms is the whole point. You exit on the market’s terms after incorporating the decisions you helped make and that protects the system.

Either planning and logical thought have relevance to the future of the network or they don’t. If we think pure randomness will lead us to the same results, then the system is a waste of time: just install a random beacon and be done with it. We’ll flip coins on proposals. Of course our plans can be wrong, and sometimes the opposite choice would have been better in hindsight, but we’re either governing or we’re not.

This is a red herring: a logical fallacy that aims to divert the argument to another issue that may seem related but is actually irrelevant to the topic at hand. No one is claiming that non-transferability is the sole thing securing the network. The fact that other issues have to be addressed and accounted for as well does not change the fact that transferability leads to a worse network.

This seems to ignore the fact that there is always a market to acquire Neurons by buying ICP and staking at your chosen risk level.

1 Like

Honestly, I would love to see a more realistic example than this one or the original. But here are a few random/not-so-serious thoughts:

  • The example mostly feels like a market failure. Why couldn’t the buyer purchase it originally, instead of having to purchase it later at a higher price?
  • The NNS might (currently) overpay because there is no market to set the rewards; they are fixed. I think some people adjust the rewards based on staking percentage (not sure if this works).
  • It almost feels like an argument for having the possibility to lock a neuron forever (looking forward to the -years gang). Otherwise, we are all making decisions based on “just” an 8-year timeframe, and that feels wrong (?).

We could implement straightforward constraints, such as limiting neuron transfers to a daily quota of 1% of all neurons. If this threshold is exceeded, the excess would be deferred to the following day.
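The quota-and-deferral idea could be sketched as a simple FIFO queue. This is an illustrative sketch only; the function name, the 1% figure, and the neuron counts are assumptions, not part of any proposal:

```python
from collections import deque

def process_transfers(pending, total_neurons, quota_pct=0.01):
    """Approve up to quota_pct * total_neurons transfers per day from the
    FIFO queue `pending`; whatever remains stays queued for the next day."""
    daily_quota = int(total_neurons * quota_pct)
    approved = []
    for _ in range(min(daily_quota, len(pending))):
        approved.append(pending.popleft())
    return approved

# 250 transfer requests pending against a hypothetical 10,000 neurons
queue = deque(range(250))
day1 = process_transfers(queue, total_neurons=10_000)  # quota = 100
day2 = process_transfers(queue, total_neurons=10_000)
print(len(day1), len(day2), len(queue))  # 100 100 50
```

Queuing rather than rejecting the excess means large coordinated exits are throttled over days, which would blunt the flash-sale scenarios discussed earlier in the thread.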