NNS quadratic voting with maturity reward counterbalance

This is a rough initial idea for how we might implement quadratic voting on the NNS. Please let me know what you think!

Large Neurons Have Too Much Power

Lately, the community has been concerned about DFINITY’s influence over the NNS. Their control was necessary to kick off the NNS in its early days, and it is still needed for much of the general network administration. However, at some point (likely soon) voting power needs to be distributed more broadly across all neurons to truly decentralize the NNS.

However, even if the current state of the NNS were much more decentralized, that wouldn’t solve the fundamental flaw in its design. As long as voting power increases linearly per economic unit (which in this case is ICP), there will always be a threat that a very rich and powerful entity (such as a corporation or government) could buy enough influence to gain control over the network.

Power and wealth tend to consolidate naturally if left to themselves, especially in anonymous environments like the NNS. A bad actor could control thousands of different neurons without anyone even knowing it.

This is an inherently tricky problem to solve. Governance power should be liquid enough to stay accessible to anyone who wants to participate, yet not so liquid that the outcome of decisions can be outright purchased by a single party. Sure, the ICP supply can be mostly locked into neurons, but ultimately this would only slow down the quiet and subtle accumulation of a large bad actor.

Quadratic Voting Will Help, If We Can Solve The Obvious Flaw

In a nutshell, quadratic voting makes a neuron’s voting power grow as the square root of its stake, so each additional unit of economic weight (in this case ICP) buys less incremental voting power than the last. Large neurons would still have more voting power than smaller neurons, but since the relationship is no longer linear, it becomes quadratically more expensive for a bad actor to increase their sway over the NNS as a whole. You can read more about quadratic voting as a general concept here: Quadratic voting - Wikipedia
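To make that concrete, here’s a minimal sketch of the square-root formulation (the simplest version of quadratic voting). The function name and the e8s unit handling are purely illustrative, not taken from the actual NNS governance canister:

```rust
// A minimal sketch of quadratic voting power, assuming the simplest
// square-root formulation. The function name and e8s handling are
// illustrative, not from the real NNS governance canister.
fn quadratic_voting_power(stake_e8s: u64) -> f64 {
    // Convert from e8s (10^-8 ICP) to whole ICP, so the curve is
    // anchored at 1 ICP = 1 unit of voting power.
    let stake_icp = stake_e8s as f64 / 100_000_000.0;
    stake_icp.sqrt()
}

fn main() {
    // 100x more stake buys only 10x more voting power.
    for stake_icp in [1u64, 100, 10_000] {
        let e8s = stake_icp * 100_000_000;
        println!("{} ICP -> {:.1} voting power", stake_icp, quadratic_voting_power(e8s));
    }
}
```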

The obvious flaw is that a whale could simply create a huge number of small neurons and have all of them follow the votes of their main neuron. We could never fully prevent a person from creating many small neurons without removing the privacy and freedoms that are currently part of the NNS. Unless the NNS incentive structure is changed to strongly discourage the creation of masses of small neurons, quadratic voting would be too easy to circumvent and would ultimately have no benefit.

Proof of Humanity Will Not Solve This

Personally, I don’t think proof of humanity will ever be possible in a completely binary sense, such that a system could be 100% certain whether commands are being sent by a bot or by a human. At least over the next few decades (until perhaps brain-machine interfaces exist), it’s more likely we’d only be able to establish a confidence level of humanity. For example, at best a tool might tell us there’s a 99.91% chance that an interaction is coming from a human.

Even if an incredibly reliable proof of humanity technology did exist, there’s still a wide range of issues which would make it unwise to rest the security of the NNS upon it:

  • What if the technology wasn’t accessible to everyone? This would introduce an unfair barrier to participation.
  • What if it had a bug, or failed due to an exploit?
  • How could we detect whether someone transferred ownership or control of an account, gave someone their login, or got hacked? How long would it take to detect this? How could we prevent it without removing people’s freedom to trade assets they own?
  • What if an AI (or some other technology) were suddenly invented that could break the Proof of Humanity test? Or the test used by the tool otherwise became obsolete?
  • What would we do with the current neurons, especially since not all of them might opt to validate their humanity?

Even if you can envision a future where a technology exists that could serve as a 100% safe Proof of Humanity platform to entrust with the security of the NNS, the odds are that we’ll need to solve important issues with the NNS long before that tech becomes available. And even if it were available today, it would still need a long track record of success before we should trust it with the NNS.

Inverse Maturity Rewards: Separation of motivations

I believe the best solution is one built into the protocol’s incentive structure itself, one that does not rely on any Proof of Humanity.

For this, I would suggest adjusting the maturity rewards to counterbalance the impact of quadratic voting:

  • Small neurons would have the best ICP-to-influence ratio due to quadratic voting, but they would earn maturity at a lower rate (perhaps 20%-75% less) than the current maturity rate.
  • Large neurons would have a poor ICP-to-influence ratio (though still more total voting power) due to quadratic voting, but they would earn maturity at a much better rate which would essentially match the current maturity rate all NNS neurons earn right now.
  • If a neuron has followers, then its own ICP and the ICP of the following neurons are combined to determine the ICP-to-influence ratio and maturity rate that they will all share (both the neuron being followed and all the neurons following it).

This means the maturity rate would be essentially unchanged for both large neurons and for small neurons that follow large neurons. However, it would be reduced, up to a set maximum amount (perhaps 20%-75% less), if a very small neuron voted manually or followed another small neuron (since the combined ICP total would still be small).
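Here’s a minimal sketch of how this pooling could work. The linear curve and both constants are purely illustrative assumptions on my part; the proposal only requires that small combined stakes earn a reduced rate (perhaps 20%-75% less) and large combined stakes earn roughly the current rate:

```rust
// Illustrative sketch of the maturity counterbalance. The linear curve and
// the constants below are assumptions; the idea only requires that small
// combined stakes earn reduced maturity and large ones earn roughly the
// current rate.
const MAX_REDUCTION: f64 = 0.75; // up to 75% less maturity, the top of the range above
const FULL_RATE_STAKE_ICP: f64 = 100_000.0; // hypothetical stake at which the current rate applies

// Combined stake shared by a followee and all of its followers.
fn combined_stake(own_icp: f64, follower_icp: &[f64]) -> f64 {
    own_icp + follower_icp.iter().sum::<f64>()
}

// Maturity multiplier in [1 - MAX_REDUCTION, 1.0], interpolated linearly
// up to FULL_RATE_STAKE_ICP (one of many possible curves).
fn maturity_multiplier(combined_icp: f64) -> f64 {
    let t = (combined_icp / FULL_RATE_STAKE_ICP).min(1.0);
    (1.0 - MAX_REDUCTION) + MAX_REDUCTION * t
}

fn main() {
    // A 100 ICP neuron following a 50,000 ICP neuron shares the pooled rate.
    let pooled = combined_stake(50_000.0, &[100.0]);
    println!("pooled multiplier: {:.2}", maturity_multiplier(pooled)); // ~0.63
    // The same 100 ICP neuron voting manually earns near the floor.
    println!("solo multiplier: {:.2}", maturity_multiplier(100.0)); // ~0.25
}
```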

In terms of voting power, this would significantly reduce the influence of large neurons with many followers, and make it harder for them to overpower large groups of smaller neurons with fewer followers. It would also make it much more likely that small neuron holders could win against a rich cohort of large neurons if they all felt passionate about a specific proposal and decided to sacrifice some earned maturity on that one proposal by voting on it manually to maximize their influence. This could come in handy if a proposal was ever introduced that benefited the rich at the expense of the poor (which will inevitably happen).

One positive governance idea used during the formation of the USA was the intentional separation of powers. These maturity adjustments would be similar, in the sense that they would enforce a separation of motivations by decoupling the maximization of influence from the maximization of gains.

A bad actor would be forced to either:

  • maximize their influence by creating a large collection of small neurons that only vote manually, which would tie up a large amount of funds for years at a poor rate of return (lower than that of the majority of the neurons they are outvoting), since the maturity would be reduced; or
  • maximize their gains from maturity by creating one large neuron or following a large neuron, at the expense of a significant amount of voting power.

As you can tell, I’m not yet sure myself what the maximum maturity reduction should be, or the exact function that would determine its value (from the lowest level up to the current maturity rate) as a neuron increases in value.

Since most NNS users vote by following other neurons, the maturity would only be impacted if someone voted manually because they felt especially passionate about the outcome of a particular proposal, or if a rich bad actor was trying to game the system by getting around quadratic voting. In this light, it could make sense to take things to the extreme by having the maturity rate range from 0% (for a 1 ICP neuron) to the current maturity rate (for the largest neuron), sitting halfway between those two values for a neuron matching the size of the median neuron on the NNS. I’m not saying this is how it should work, just making the point that the implementation of this approach could vary widely and needs careful thought.
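Purely for illustration, here’s what that extreme version could look like as code, interpolating linearly between the three anchor points above. The interpolation shape and the example numbers are assumptions on my part, since the exact function is deliberately left open:

```rust
// Sketch of the "extreme" curve described above: 0.0 for a 1 ICP neuron,
// 0.5 at the median neuron size, 1.0 (the current rate) for the largest.
// Piecewise-linear interpolation between those anchors is an assumption.
fn extreme_maturity_multiplier(stake_icp: f64, median_icp: f64, largest_icp: f64) -> f64 {
    if stake_icp <= 1.0 {
        0.0
    } else if stake_icp <= median_icp {
        0.5 * (stake_icp - 1.0) / (median_icp - 1.0)
    } else if stake_icp < largest_icp {
        0.5 + 0.5 * (stake_icp - median_icp) / (largest_icp - median_icp)
    } else {
        1.0
    }
}

fn main() {
    // With a made-up 30 ICP median and a 300M ICP largest neuron:
    println!("{:.2}", extreme_maturity_multiplier(30.0, 30.0, 3.0e8)); // 0.50
    // A linear-in-stake ramp leaves even 1,000 ICP barely above the median,
    // which shows how much the choice of curve matters.
    println!("{:.3}", extreme_maturity_multiplier(1_000.0, 30.0, 3.0e8)); // ~0.500
}
```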

DFINITY foundation funding

One good aspect of this approach is that it would reduce DFINITY’s power without requiring significant changes to their existing neurons or taking away their future revenue. I assume they ultimately plan to cover the cost of operations using the maturity of their neurons. Forcing DFINITY to airdrop their ICP and/or neuron stake would introduce a wide range of issues, but I see no problem with the inventors of the protocol having a good long-term strategy for funding R&D operations. If we can find a way for them to keep their maturity without being too powerful or hurting the ecosystem, that would be ideal.

However, without something like quadratic voting, reducing DFINITY’s influence would require finding a way to distribute the IC assets they control now, which seems like it would be messy and probably cause more problems than it solves.

Problem with this solution: sleeping giants

The approach described above isn’t complete, because it still doesn’t address one more scenario:

  • A bad actor creates thousands of small neurons and has them follow a large neuron until a specific proposal comes up, and then they make all their neurons manually vote and force whatever outcome they want.

There are possible approaches to address this, but they all inherently also penalize people who can only afford small neurons (it’s tough to separate the two since the NNS can’t tell the difference):

(These are just rough ideas for possible options. I am not proposing that all of these should be implemented, nor am I in favor of one in particular. I’m just making the point that there could be a way to address this problem, and it’s worth exploring further to determine a good solution.)

  • Having small neurons keep the same maturity decrease even when following other neurons. This would penalize/deter a “sleeping giant” bad actor, but it would also reduce the benefits of following another neuron and reduce gains for people who cannot afford to make a big neuron.

  • Adding a fee to create a new neuron (which is burned). This might deter a bad actor at the onset, but it wouldn’t have any ongoing impact after that. I also don’t like the idea of the NNS burning a user’s assets for no reason other than to stop other people who might be trying to do something bad; this seems unfair. I’m not a fan of this option, but it was brought up in another topic so I thought it should be included.

  • Adding time delays. Perhaps when following a neuron it could take a long time (perhaps months) for the maturity to ramp up to the maximum, but it would immediately drop back down if you unfollow. Taking that a step further, you could even do the inverse for the ICP-to-influence ratio (influence decreases immediately when following and increases to its maximum slowly after unfollowing), and then provide an option to burn some stake (perhaps 5%-20%) to manually vote on a specific proposal with the maximum amount of influence (without needing to wait for the ICP-to-influence ratio to recover after recently unfollowing a large neuron). A rough sketch of the maturity ramp follows this list. This would hurt both whales trying to game the system on a specific vote and large groups of small neuron holders passionately fighting the whales on a proposal; the two are hard to separate since there’s no way for the protocol to tell the difference.

  • Proposal-level protection. If we assume that a bad actor’s goal is most likely to vote “yes” on a proposal that changes the NNS, while small neuron holders trying to prevent something bad would most likely vote “no” on a dangerous proposal, then when an unusual (statistically divergent from history) number of manual “yes” votes from small neurons suddenly appears on a proposal, this situation could trigger a safety measure that delays or perhaps even rejects the proposal.

  • Whale-phishing proposals. Alright, now I’m going to go “out there” in terms of crazier ideas. What if there was a special type of proposal the community could use to trick bad actors into outing themselves? Someone with suspicions of collusion on the IC could create a “whale-phishing” proposal that would clearly benefit a rich cohort or specific bad actor (at the expense of others or the protocol itself) if it passed. This type of proposal would never actually execute, but on the front end it would look just like any other normal proposal. If the NNS passes the whale-phishing proposal, then instead of executing its contents, the protocol triggers an “evaluation proposal” which can only be voted on manually by neurons that voted “no” on the preceding whale-phishing proposal. The evaluation proposal basically asks whether the neurons that voted “yes” represent a valid threat to the NNS. If the evaluation proposal resolves that there was indeed collusion, a penalty (perhaps 10%-40%) would be burned out of the stake of every neuron that voted “yes” on the whale-phishing proposal. Basically, this would enable community-run NNS “sting operations” to catch colluding neurons working against the best interests of the protocol. Even if a bad actor successfully used this mechanism against innocent NNS users, that event itself would alert the ecosystem to the fact that the NNS had been compromised (and probably lead to a fork of the IC as a result).
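And here is the rough sketch of the time-delay maturity ramp promised above. The six-month ramp length, the linear shape, and the 25% floor are all illustrative assumptions; the only properties taken from the idea itself are the slow ramp-up while following and the immediate drop on unfollowing:

```rust
// Sketch of the time-delay idea: the maturity multiplier ramps up slowly
// while a neuron keeps following, and snaps back down the moment it
// unfollows. All constants here are illustrative assumptions.
const RAMP_SECONDS: u64 = 6 * 30 * 24 * 3600; // hypothetical six-month ramp
const REDUCED_FLOOR: f64 = 0.25; // hypothetical reduced rate for a small solo neuron

struct FollowState {
    following_since: Option<u64>, // unix timestamp of the current follow, if any
}

impl FollowState {
    // Fraction of the full maturity rate earned at time `now`.
    fn maturity_multiplier(&self, now: u64) -> f64 {
        match self.following_since {
            // Ramps linearly from the floor to 1.0 over RAMP_SECONDS.
            Some(start) => {
                let t = (now.saturating_sub(start) as f64 / RAMP_SECONDS as f64).min(1.0);
                REDUCED_FLOOR + (1.0 - REDUCED_FLOOR) * t
            }
            // Unfollowing drops the multiplier back to the floor immediately.
            None => REDUCED_FLOOR,
        }
    }
}

fn main() {
    let neuron = FollowState { following_since: Some(0) };
    // After three months of continuous following, halfway up the ramp: 0.625.
    println!("{:.3}", neuron.maturity_multiplier(3 * 30 * 24 * 3600));
}
```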

Closing thoughts

I know this approach is not fully defined or refined, but I hope it inspires the community to take up the torch and find a way to bring quadratic voting to the NNS. While quadratic voting alone is not a comprehensive solution to every problem with the NNS, I feel strongly that it is a critical step which needs to be taken to protect the long-term future of the NNS.

As you say, “the obvious flaw is that a whale could simply create a huge number of small neurons and then have all of those follow the votes of their main neuron.” It is over.

The only way to decentralize DFINITY and thus ICP is the NNS election of DFINITY.

Please see:

I don’t see how this solves anything; the reasoning is circular. You can’t make the NNS more decentralized by using the NNS to hire and fire the people who control the majority of voting power on the NNS.

Also, I don’t see how a crowd of anonymous voters is a good HR platform for technical talent.

As much as I’d love to see QF in the NNS, I don’t think this is the solution; it can be gamed easily even with the proposed mitigations, which would impact legit users more than whales trying to manipulate the system. I stand by the idea that without PoH there will always be ways to cheat the system.

“You can’t make the NNS more decentralized by using the NNS to hire and fire the people who control the majority of voting power on the NNS.”
Please see:
“Imagine that one day someone will secretly collect 51% of ICP voting power and thus will fully control the NNS voting without anyone knowing it. But if before this, DFINITY democratically elected by NNS can have 51% of ICP voting power, then no one will be able to control the NNS voting secretly.”

“I don’t see how a crowd of anonymous voters is a good HR platform for technical talent.”
So, it seems that you simply consider the IC community to be just “a crowd of anonymous voters”.

I don’t see how electing a centralized entity to rule over the network improves NNS decentralization, it’s a terrible idea and would make ICP the laughing stock of crypto.

Could you explain it to me in more detail?

Most legit users follow other neurons, so the actual day-to-day impact should be minimal.

Even if perfect PoH were available, it’s not a solution in itself, but rather a tool for making sure things like quadratic voting could be implemented.

However, I feel like I made a strong case against waiting around for a PoH solution and then entrusting everything to it.

While PoH is a great idea as a general concept, perfect PoH is a pipe dream, and imperfect PoH isn’t good enough to make the NNS secure. It’s like saying that being able to turn air into food would solve world hunger; you might be right, but waiting around for that to happen is still a bad plan if you are hungry right now.

How do you see PoH delivering a solution to this issue in the real world?

With the NNS election of DFINITY controlling a sufficiently large voting power, you may be able to control the system, but you will never be able to “cheat” the system, i.e. control the system secretly, because DFINITY’s voting record is public. And if DFINITY is always voting against you, maybe you should just leave.

If you want to discuss your idea please do so in the thread of your post. I’d prefer to use this thread to discuss quadratic voting.

Obviously, I am against quadratic voting. It just makes the NNS voting mechanism uglier. The NNS election of DFINITY is a solution to your problem.

Reduced APR plus some of the mitigation mechanisms you mentioned would impact small stakers regardless of automatic following, e.g. a fee to create a neuron, reduced APR when following other neurons, etc.

You’d verify your internet identity every couple of months or so, and then all the neurons owned by that identity would count as one individual when voting (a sketch of this grouping follows the list below). I thought about PoH for a couple of months because it’d solve many issues and make many use cases possible in the crypto space, and so far I’ve figured there are two ways to do it:

  • People Parties approach: decentralized and anonymous, but not very user-friendly and potentially abusable.
  • Integration with government-issued digital identities: centralized and not available to everyone (not every country has digital ID), but less likely to be abused.
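For what it’s worth, here’s a minimal sketch of how that grouping could interact with quadratic voting, assuming each verified identity’s pooled stake gets a single square root while unverified neurons count alone. The names and the pooling rule are my own illustration, not a worked-out design:

```rust
use std::collections::HashMap;

// Sketch of the verified-identity grouping described above: all neurons
// owned by a currently verified identity count as one individual, so the
// quadratic formula is applied once to their pooled stake. Treating
// unverified neurons as standalone is only one possible rule.
fn total_voting_power(neurons: &[(Option<&str>, f64)]) -> f64 {
    let mut pooled: HashMap<&str, f64> = HashMap::new();
    let mut power = 0.0;
    for &(identity, stake_icp) in neurons {
        match identity {
            // Pool stake under the verified identity.
            Some(id) => *pooled.entry(id).or_insert(0.0) += stake_icp,
            // Each unverified neuron counts alone.
            None => power += stake_icp.sqrt(),
        }
    }
    // One square root per verified person, however many neurons they split into.
    power + pooled.values().map(|s| s.sqrt()).sum::<f64>()
}

fn main() {
    // Splitting 100 ICP into ten verified neurons gains nothing: sqrt(100) = 10.
    let split: Vec<(Option<&str>, f64)> = (0..10).map(|_| (Some("alice"), 10.0)).collect();
    println!("{}", total_voting_power(&split)); // 10
}
```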

As you say, we still lack a perfect solution, but instead of giving up on it and creating needlessly complex systems as a band-aid, we should spend some time evaluating whether there are other PoH approaches.
The community seems to heavily dislike People Parties, so they are barely talked about, and I’m not sure whether that’s because it was Dom’s proposal, because they already know its design is flawed, or because they ignore the benefits they could bring to the ecosystem.

Just to be clear, I wasn’t proposing that all of these should be included, only listing possible options (of which probably only one would be needed). I actually don’t like the fee one, and the whale-phishing was mainly me speculating for fun rather than offering a serious solution.

Anytime you get the opportunity to use a term like “whale-phishing”, you need to take advantage of it!

Quadratic voting can never make sense in the crypto world, since anonymity is the first principle.

PoH may help quadratic voting, but if PoH can be fully realized, then quadratic voting becomes unnecessary. I mean, with PoH we can simply realize “one person, one vote”.

That sounds super annoying. What happens if you fall behind and forget to verify your humanity for a few months? Does your neuron stop voting?

It stops counting as an individual for QF, but it still votes.

How would that impact the voting power?

Buterin would disagree

Buterin would agree with “CP” :joy: :joy: :joy:

The neuron’s VP wouldn’t be affected, but the VP calculation based on the QF formula would differ.