Using Randomly Assigned Neurons to Filter for Non-Actionable Proposals ***VOID***

@wpb

Alright, to address both constant spamming and the unequal reward distribution between large and small neurons, I have thought of two things.

First, to tackle neuron proliferation:
as @wpb noted, time-lock neurons via the reward multiplier. Once a neuron is selected, it is time-locked for a set period with a lower probability of being chosen again (and if it is chosen, it gains no extra rewards). This effectively removes that neuron from the available pool, so it cannot keep accumulating rewards, which reduces the incentive to spam or split neurons. A large neuron could still split itself to try to increase its chances of being chosen, but because selection is random, it risks having only one of its split neurons rewarded. That makes it beneficial to keep neurons as large as possible so you get the maximum reward when chosen.

Second, to address reward distribution:
I suggest we implement a sliding scale for both rewards and how neurons are selected.

First, we break all available neurons into four groups based on voting power, each group covering a range. Group 1 contains neurons with 0-100 voting power, Group 2 contains neurons with 100-200, Group 3 contains neurons with 200-300, etc.

Then we assign each group a number of seats based on the group's population. For example, if Group 1 has 100,000 neurons it gets 4 seats, Group 2 has 50,000 neurons and gets 3 seats, and so on. I'm assuming there are fewer large neurons than small ones. The goal is that the larger your neuron is, the higher your chance of being selected for rewards, but the lower the reward itself.

When we select neurons for the jury, we randomly fill the available seats for each group. Remember, neurons that already voted have a lower probability of being selected again, giving everyone an equal chance to earn rewards. If selected neurons are inactive, they are replaced by active, trusted members of the same group, so the more you participate, the more likely you are to maximize rewards for the year.
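
Just to make the selection mechanics concrete, here is a rough sketch of how the grouping, seat allocation, and cooldown-weighted random draw could work. All the constants, field names, and numbers are made up for illustration, not a spec:

```python
import random
from collections import defaultdict

GROUP_WIDTH = 100        # voting-power range per group (illustrative)
COOLDOWN_PENALTY = 0.25  # neurons on cooldown keep only 25% of their draw weight

def assign_groups(neurons):
    """Bucket neurons by voting power: group 0 = 0-100 VP, group 1 = 100-200 VP, etc."""
    groups = defaultdict(list)
    for n in neurons:
        groups[int(n["voting_power"] // GROUP_WIDTH)].append(n)
    return groups

def seats_for(members, total_neurons, total_seats):
    """Seats roughly proportional to the group's population (at least one seat);
    a real scheme would need to normalise so the grand total stays fixed."""
    return max(1, round(total_seats * len(members) / total_neurons))

def pick_jury(neurons, total_seats=10):
    """Fill each group's seats by weighted random draw, down-weighting neurons
    that are still on cooldown from a previous jury so everyone gets a turn."""
    groups = assign_groups(neurons)
    total = len(neurons)
    jury = []
    for _, members in sorted(groups.items()):
        weights = [COOLDOWN_PENALTY if n.get("on_cooldown") else 1.0 for n in members]
        k = min(seats_for(members, total, total_seats), len(members))
        chosen = set()
        while len(chosen) < k:
            idx = random.choices(range(len(members)), weights=weights, k=1)[0]
            chosen.add(idx)
        jury.extend(members[i] for i in chosen)
    return jury
```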

This method can also be applied to reward distribution: the larger your neuron, the smaller the monthly reward multiplier, so everyone ends up earning roughly the same amount. If large neurons try to split up to increase their chances of being selected, it will actually hurt them, since (a) it decreases their chances due to limited seats and the larger population of small neurons, and (b) if they are selected, they only receive partial rewards.
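
As a toy example of that sliding scale, the multiplier could simply shrink as voting power grows. The `scale` constant and the numbers here are purely illustrative:

```python
def review_reward(base_reward: float, voting_power: float, scale: float = 1000.0) -> float:
    """Sliding-scale payout: larger neurons get a smaller multiplier, so a whale
    that splits into many small neurons gains nothing overall, because each piece
    competes for the (more crowded) small-neuron seats and earns only a partial reward."""
    multiplier = 1.0 / (1.0 + voting_power / scale)
    return base_reward * multiplier

# e.g. a 2000-VP neuron: review_reward(10, 2000) ~= 3.33 per selection,
# while a 100-VP neuron gets review_reward(10, 100) ~= 9.09, but is selected
# far less often because its group holds many more neurons per seat.
```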

The remaining issue is individuals who automate their voting and never actually choose yes or no manually. I have not found a way to solve this yet; depending on how many people try to automate their voting, it could be a problem. Maybe a captcha LOL.

Another issue I see is neuron burn: if enough proposals are spammed, you could essentially lock up all neurons through the monthly reward cap, ensuring you get the maximum rewards, but I'm hoping the cost of doing that will dissuade people from trying.

Hope this makes sense.

I like this proposal - several of the points are really enlightening. I especially like how pragmatic the process is and the structure for rewarding reviewers.

Can you say something about 'trusted neurons'? How do they achieve this status? Can it be revoked? Maybe we could extend the idea of known neurons to incorporate some sense that they are very likely to act in an ethical manner.

What does ‘removed’ mean? If it is not the same as ‘rejecting the motion’ it could also have different financial outcomes. Perhaps a rejected proposal loses 50% of the fee, but a removed proposal loses 100%… As an aside, many socioethical issues are divisive and an all-or-nothing reward cannot capture this.

Despite my enthusiasm for this proposal, which I would probably vote in favour of, I still have the uncomfortable feeling that we are reviewing everything because a bad thing might happen, which feels inefficient and a little paranoid. See my topic on governance inhibition for more detail on why I think this. Additionally, when only a select group can view a proposal before it is voted on, it also feels like we are giving up a part of the decentralisation dream, before it even had a chance to get going. In some ways, the spammers have already won.


@willguest
Hello, I have not really developed a structure or formula for how trusted neurons would be identified and was hoping the community could help fill in the gaps. It could be algorithmic, based on voting behavior, or it could be done by having users reveal their neurons, thereby achieving trusted status. I'm not sure at this time.

As for the 'removed' part: this system is intended to review proposals against a specific standard set by the community. For example, a proposal must have a clear objective, or it must be readable; it just has to meet basic requirements to pass. If a proposal does not meet the basic requirements that define a proposal, it should be removed. The reviewers are not determining whether they agree or disagree with the content of the proposal. This is a basic spam/quality-assurance filter with error catches for unwanted human behaviors.

I agree, over-planning and designing for bad-actor behavior is not efficient. However, we must make at least some measured effort to deter this behavior. It's like a lock on your bicycle: it's more of a deterrent than a full solution. You're trying to make stealing your bike just not worth the reward, so that thieves would have to invest more than the bike is actually worth. Locking your bike in a bank safe, on the other hand, is overkill.

As far as I can see, People Parties are the only real solution at this time. Once they are active, this system blends in perfectly with the People Party system and can remain functional with minimal tweaks.

Here is a video I made explaining the concept. I did it quickly, so it's kind of boring.

https://dwqte-viaaa-aaaai-qaufq-cai.ic0.app/2688-1528/proposal-action-potential-for-reviewing-proposals

Thanks a lot for the effort @MrPink13


Here is, IMHO, the solution to my questions, @MrPink13:

Since the reviewers must be incorruptible, we can incentivize a genuine review by:

1 - rewarding the randomly assigned neurons with a significant amount of ICP (obviously at least higher than the Avg Daily Voting Rewards), to keep them from being tempted to let the spam through to the NNS and earn the voting rewards.

2 - setting up a second and final layer of reviewers: the reviewers of the reviewers. They would judge retrospectively whether the proposal was spam or not, and whether the first layer of reviewers filtered (or failed to filter) it genuinely. If they did not, the second layer says so, and the insincere reviewers are individually identified as having let a spam proposal pass. A reviewer identified as insincere would not be asked to review again (in your well-thought-out system) for a certain amount of time. The more often a neuron is identified as insincere, the longer it stays ineligible to be a reviewer. This second and final layer of reviewers would not be rewarded for reviewing the reviewers.
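
To make this concrete, here is a rough sketch of the eligibility bookkeeping I have in mind. The penalty length, names, and data shapes are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class ReviewerRecord:
    insincere_flags: int = 0
    eligible_after_round: int = 0   # reviewer is blocked until this round number

def apply_second_layer_verdict(records, round_no, first_layer_votes, was_spam,
                               base_penalty=10):
    """After the second layer retrospectively decides whether the proposal was spam,
    any first-layer reviewer who voted the other way is flagged as insincere.
    Each new flag extends how long that neuron stays ineligible to review.
    `base_penalty` (in review rounds) is purely illustrative."""
    for neuron_id, said_spam in first_layer_votes.items():
        if said_spam != was_spam:
            rec = records.setdefault(neuron_id, ReviewerRecord())
            rec.insincere_flags += 1
            rec.eligible_after_round = round_no + base_penalty * rec.insincere_flags

def is_eligible(records, neuron_id, round_no):
    """A neuron can only be drawn as a first-layer reviewer once its block expires."""
    rec = records.get(neuron_id)
    return rec is None or round_no >= rec.eligible_after_round
```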

Conclusion: in my opinion, the best solution is the combination of the two aspects, because:

  • the higher reward for the act of reviewing will incentivize average investors to review genuinely, so they are not dismissed from eligibility for such rewards
  • it ensures that the biggest investors who might want to create spam or let spam pass (as reviewers) will not succeed, and will in addition be deprived of eligibility, and consequently of the rewards they would have earned as future reviewers.

Indeed, even if a huge whale were a reviewer and wanted to let a spam proposal pass for the rewards it would earn from the voting to follow, it would remain a minority, so it would not overwhelm the other average reviewers, and the whale would be retrospectively judged insincere by the second layer and would not get another chance to push spam through.

As you understood, this solution relies on and takes advantage of the Avg Daily Voting Rewards as a minimum (currently 1.30 ICP), combined with a progressive purging of the biggest bags that might try to multiply spam or let spam pass:

The average investors are here both the engine and the brake of the reviewing. There will always be such an average (only the numbers will change), so we can rely on it.

So to recap: your solution, PLUS this add-on, is fully operative. The People Parties implementation will be the game changer.

I would love to hear @jwiegley's and @dominicwilliams's opinions about this.


So now we are going round twice, on everything, just to make sure we watch the people who watch the watchers. But who’s going to watch them?

I like the level of detail that has gone into this idea. It has considered many different situations and has a fairly good response to most of them. I think it is lacking, however, in terms of scalability.

@MrPink13, in his video, showed how the aggregation of reviewers could be assigned, and how this can be replicated in a fractal manner as the network grows. I would point out that "scalable" does not mean a solution that scales (linearly) with the size of the network, but rather one that can handle a growing network without becoming increasingly burdensome.

Here, you are suggesting that, for every proposal, there is now a second layer of reviews. From a computational perspective, this is like a multiplication of the amount of work that needs to be done. This factor will also be present in the linearly scaled model, making the first problem worse.

“The bureaucracy is expanding to meet the needs of the expanding bureaucracy” :laughing:

As I explained, a second and LAST layer. There is no need for a third, since they are not paid to review the reviewers: their reward is their future position in the first layer, with the rewards they will get at that point.

It is very common to have a review of a review: in society with police, in education, in science with peer review of a peer review, in meta-analysis, in medicine, basically everywhere. So seeing a regressus ad infinitum in a mere layer 2 is not accurate, since it is common. So common, in fact, that the blockchain ecosystem is full of the concept of "layer 2". So, not so strange…

To speak of "twice on everything" just because I mention a review of a review is exaggerated: you take it to the limit, inaccurately, since the two layers do not have the same attributes.

But I can understand that it is hard to set up computationally. The difficulty of setting it up computationally does not prove that it is not a solution.

Since you seem to love adages, here is a common one: "tout ce qui est excessif est insignifiant", "everything excessive is insignificant" :laughing:

I think the system you proposed is too complex. A better solution would be to not show other people's votes on a proposal, and to only give a reward if the reviewer's choice turns out to be aligned with the majority after the voting period ends.
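
Roughly, the payout rule would be something like this (just a sketch, with made-up names):

```python
def settle_review_rewards(review_votes, final_outcome, reward_per_reviewer):
    """Blind voting: reviewers submit accept/reject without seeing each other's
    choices. Once the normal voting period ends and the proposal's final outcome
    is known, only reviewers whose verdict matches that outcome get paid.
    `review_votes` maps neuron id -> True (accept) / False (reject)."""
    return {
        neuron_id: (reward_per_reviewer if verdict == final_outcome else 0)
        for neuron_id, verdict in review_votes.items()
    }
```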


I understand ! Thank you for your feedback ! :pray:

I have not explained myself well, and deserve to be out-adaged for that.

My question - who watches them? - really is about trust. Many of the reviewer-reviews you mention also exist because trust is required. Until now we didn't have a trustless element on which we could rely, so I only take these parallels with other industries so far.

The connection I am trying to make is that, now we can build on the foundations of a trustless system, why reintroduce trust (and its baggage) back into every proposal? This is exacerbated every time a new element is added, making it increasingly burdensome.

I agree that we need review, I see that reviews need trust and I think this retrospective assessment of spam is a fantastic idea. It provides a really powerful feedback loop that fits neatly within a smart-contract architecture, as reviews can be kept on their own ledger.

Your answer raises fascinating topics, and I have thought about this a lot lately (though not computationally), and of course I don't have an answer yet. Sadly. Just as you formulate it, I was wondering the same above:

I agree with you: we must avoid trust. I don't want to reintroduce trust; this is why I was promoting "greediness" of the average investor and reviewer as the 'engine': greediness of the reviewers in the first layer, and the greediness TO COME of the second layer. But I would love a more formal and computational way to discriminate.

Finding an incentive to review genuinely, without having to rely on trust, is a wonderful topic. Let us find it.

I have an idea for addressing this:

It still has the review concept, and deliberately doesn’t try to say how review should be done. Instead of “default hidden until reviewed”, it works on a “default visible until flagged as problematic” logic.

I can see how the ideas in this topic follow on from the ones described there; the arrival at a review stage could be done following a spam reporting mechanism that is decoupled from governance incentives.

An interesting view, well structured and explained. I think there are 2 different matters to be considered:

  1. SPAM - An anonymous, decentralised system coupled with a low proposal barrier creates a nothing-at-stake problem, which means anyone can abuse the system at very low cost.
    A peer-reviewer system introduces complications and does not really address the fundamental issue, as reviewers can also suffer the same spam levels. Furthermore, it means they must be active on a daily basis, as there is no notification system in place.
    Simply increasing the price per proposal will not stop bigger ICP holders.

  2. Quality of proposals - this is a matter of competence and knowledge, and difficult to address when anyone can submit proposals. Neither the knowledge level of the proposer nor that of the reviewer can be determined.

For the two points above, I do not consider a peer-review system based on these types of incentives to be an effective long-term solution for a governance system.

I read your interesting and rigorous proposal. But as you guessed, my concern with your well-designed proposal is that, ultimately, you trust that people can think about the network's well-being before how much they get paid. Some are capable of that, but the majority will think with a greedy mind (I don't say that pejoratively), and we cannot rely on such a passionate state of mind.

You could say, "but the well-being of the network will bring big money." I agree with you, but the majority of investors don't or can't think like this, because we have a lot of shorters, some retired people, etc.

So I do love your proposal, but one of your statements is, indirectly, that the majority of investors is not greedy. But they are. And they want reward after reward, day by day, or at least week by week. So you understand now why I was pursuing a model where everybody is rewarded enough not to be corrupted.

If this is how it came across, then I wrote it badly. I mean no such thing. Even if the majority of people are greedy, spam reporting can still happen. The reporting does not need a majority to be successful (think whistleblowers).

Despite this being a somewhat one-dimensional view of people, I think you are pretty much correct, which is depressing. However, I would like to let the majority be greedy, while also finding a way for those who care about the network to have an impact.


@Roman
Apologies for the late response.

I agree with @willguest, another review-group layer falls prey to the same issue. As you said yourself, it could continue ad infinitum: who reviews the reviewers who review the reviewers, and how do we make sure the rewards we give actually reward quality reviews? I do think your initial statement holds a lot of weight, however. The true end goal of any of these concepts is that proposals are reviewed accurately without the need for too many layers or mechanics, and we definitely need that. So, with that in mind:

How about this: during the voting phase, each neuron also posts a comment explaining its decision, and after voting is done each neuron rates all the other neurons' comments. Kind of like Yelp reviews, but with randomization. This could also build a neuron's trust status so it gets chosen more often, and since neurons are chosen randomly it would be hard to game the system. I could see that solving your infinite-review dilemma. Within the same group, we are now not only reviewing the proposal for accuracy, but at the end we also review each neuron's decisions for accuracy, using comments as an identifier for bots or individuals who are not really participating. They get pushed down the totem pole and the good neurons go up. You don't know who is who, so you won't go out of your way to help a stranger if they won't help you. Social dynamics for the win.
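
A very rough sketch of how those comment ratings could feed a trust score (the blend weights and the 1-5 scale are placeholders, not a real spec):

```python
def update_trust(trust: dict, ratings: dict) -> dict:
    """Each juror posts a comment with their decision; after the vote, the other
    jurors rate every comment (say 1-5). The average rating nudges that neuron's
    trust score, which later feeds back into how often it is picked for a jury."""
    for neuron_id, scores in ratings.items():
        avg = sum(scores) / len(scores)
        current = trust.get(neuron_id, 0.5)            # new neurons start neutral
        trust[neuron_id] = 0.9 * current + 0.1 * (avg / 5.0)
    return trust
```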

@Zane I also really like the blind voting idea; that is a really good idea. It's so simple yet so powerful.

Now combine everything we have so far and, holy shit, this is very complicated, but I have no doubt it will stop spam proposals, or at the very least heavily mitigate them, lol.

I'll admit that at this point I'm confused about what the actual proposal is. Do you plan to rewrite it so the full scope can be understood in one post? My recommendation is that you edit the first post to reflect the latest and greatest version of the proposal so it is the first thing people see. It's also a good idea to add a new comment noting that you have edited the proposal. I recommend revising the proposal at least a few days before you plan to submit it to the NNS, so people know exactly what you plan to submit.

Haha. There are just too many good ideas now; my short attention span kicks in, and I really don't want to plagiarize either by cherry-picking all the good stuff. Yeah, I'll have to revise with a final version.


We don't have a regress, thanks to the core of my solution: rewards at least equal to the average daily voting rewards.

The core of my solution: reward the reviewers of the first layer much better, using at least the average daily voting rewards as the index. So the first layer is already incentivized to vote genuinely.

Once this main part is in place, the second layer is added, also composed mainly of average investors waiting for their own opportunity to join the first layer and earn such higher rewards themselves. The point is less to review EACH reviewer of the first layer, because statistically the average investors will vote the same way; the second layer is more there to shine a light on the rare reviewers who reviewed differently from the average investors just for better rewards, to prevent them from remaining eligible and eventually joining a pool of first-layer reviewers in which they would be the majority.

The key to my solution is the average, even without a second layer of reviewers. The second layer is just there to clean up the whole process over time. It allows those who would prefer spam to a genuine vote, because they have a huge amount of ICP and prefer to be insincere, to disappear little by little.

So don't forget the main part: better rewards, from the beginning: more rewards for the reviewers.

But never mind, focus on your proposal; thanks for having considered my point anyway :pray:!

I will write my own later, because mine makes it possible to spot the insincere people: those who won't be incentivized enough by an above-average reward and will choose to let spam pass.

In your solution, I don't see why anybody would not let spam pass, especially an insincere whale. I see why in mine: the average investor won't take the risk of being insincere and has no reason to, since he is "paid" enough to make a sincere review, and those who are not incentivized enough and would prefer the spam to pass will be "disabled" by the average ones.

Again, the key is the higher remuneration, enough to make the average investor incorruptible. I don't see such a mechanism in your solution: there is no reason the average investor stays incorruptible. In mine, he wants to keep earning such rewards, so he won't take the chance of being thrown out of future rounds. So, both immediately and in the longer run, it is not in his interest to be insincere. In yours, he can be insincere without taking any risk. In mine, the risk is a bad review, and the risk is not worth it, because the guy is already well paid.

In yours, the rewards are multiplied regardless of what he votes. In mine, he gets better rewards, but if he does not pay attention to what he votes, those rewards could be his last for a while, or possibly forever.

If I write it one day, I'd be happy to give you a draft of my proposal once I have written it. Until then, don't worry about me, focus on yours.