I think we all want the best work to be done, so you aren’t saying anything new there. I also want the grants to go to the best teams. So why don’t we just wait and see what the teams are in July? And no one is striving for the bare minimum. We are all striving for excellence in the IC because that is the only way it even exists.
It’s too easy to say this. It needs to be demonstrated.
It’s not demonstrated by failing to run the code you’re reviewing (nor striving to progressively build out a test suite for it, even after many months), and it’s not demonstrated by keeping fund management, VP and neuron control centralised (controlled by one person - CodeGov is NOT a real team neuron). The amount of funding being allocated to this demands something much better.
Voters don’t know what they’re getting unless someone is willing to discuss this openly.
Wenzel is just doing it the way the foundation has been doing it since Genesis. It follows the exact same process. Wenzel is a bit old school in that way. Battle proven. Then again, it has only been 4 years since Genesis. Your experiment in governance is nice but yet to be proven.
Also don’t forget to add ALL the members.
What way is that - the foundation transfers ICP to an address? CO.DELTA is a recipient of the same grants programme.
Where’s the experiment (what needs proving)? In our case the address belongs to a canister, and that canister is controlled by consensus. The grants programme is agnostic to the infrastructure managing the received funds.
CodeGov’s problem is that one person maintains the power to relay those funds as they see fit (not to mention unilaterally override the CodeGov team vote). That’s not what any of us are here for. You’re paid, replaceable, and overruled at that one person’s discretion (someone who rewrites their own rules as they go, as they see fit). They hold your purse strings on an ongoing basis, so it’s no wonder you feel motivated to come to their defence.
The CO.DELTA members who currently have threshold voting rights are those who are currently recipients of funds from the grants programme. Don’t worry, we’ve not forgotten what we’re doing here.
This is going off topic. There’s no excuse for not doing a proper job (else bugs will just keep slipping through the net, and someday maybe something worse). This is important. Stop passing the buck and take some responsibility if you’re going to keep this going.
Firstly, the “purse-strings” suggestion is a joke. If you really think $4,000 a month—even after Uncle Sam takes his cut—could buy my voice, you’re not a true champion of freedom. Freedom of thought and expression is foundational, especially in crypto. Trust me: if I ever disagreed with Wenzel, I’d say so.
What you fail to understand is that everything already relies on social consensus. People can unfollow CodeGov today, tomorrow, or whenever—which immediately diminishes its influence, just as kicking someone off a team removes their power. The same dynamics will play out in co.delta. You’ll likely mount campaigns against members you don’t like—just as you tried in CodeGov—because it all comes down to social consensus.
And yes I understand your little program. Where is the neuron stored?
Also you might want to increase your test coverage.
You have an oversimplified view. Most followers don’t keep tabs on the neuron they’re following, other than that it votes. In any case, you’re off topic. This is about how limited funding is being utilised (or not utilised).
What are you talking about? In any case, any social consensus you think there is in CodeGov is fragile and artificial. It exists at Wenzel’s discretion. He’s “willing to listen”, but don’t go “mistaking that for decentralized management of CodeGov” (Wenzel’s own words).
Are you asking me to explain how the NNS Governance stores neurons, or how canister controlled neurons work? I’ve cross-posted your question to the appropriate thread and answered your question there.
To summarise the salient points in this thread: CodeGov does not locally run the high-impact Protocol Canister Management code that they’re paid to review on an ongoing basis (for significant sums of ICP). There has been no attempt to build out any sort of test suite. CodeGov was elected into this position by the community. Concerns that have been raised in this thread have been met with abusive and dismissive messages.
The IC community can do so much better. It needs to do better. It will do better. A re-election cannot come soon enough.
You haven’t even built a test suite for your own 10-line program. I don’t speak for @ZackDS, but there were things done to him to try and ruin his life, and in many cases they succeeded. So it is more personal for him than me. For me it is just about pointing out your own hypocrisy.
Jefri, this is getting so far off track. The community needs to be able to raise concerns about NNS governance without being met with these sorts of responses. They’re either rude or completely off point.
That sounds bad. I have no idea what you’re talking about though or how it’s related. I’m tempted to ask what things, and by who, but I think the thread is already painfully off topic.
I agree that the community should be able to raise concerns, but so far, I’ve only heard from you. If you’re the entire community, then I must be in a bad dream. The project is on track. You highlighted oversights by Dfinity and CodeGov, and now I’m pointing out yours. You claim to prioritize testing, yet you don’t even test your own code.
Maybe you should reach out to @ZackDS and tone down the rhetoric on CodeGov.
Of course, and no, I’m not highlighting oversights by DFINITY. DFINITY’s doing a great job. It’s not the occasional presence of bugs that’s the issue. I’ve commented on that in the past.
DFINITY has historically also been great at publishing post-mortems and retrospectives. I’d be interested to know what measures are being taken to avoid similar bugs in the future (I believe this is also a responsibility that falls to paid NNS governance reviewers, and the responses on this thread have so far fallen incredibly short of the mark).
I believe it’s a mistake to rely solely on paid NNS governance members for decision-making. This responsibility should fall to anyone who cares enough to contribute. I’ve dedicated a significant amount of my time to the IC as an unpaid volunteer, driven by passion. Expecting payment for something you claim to care about raises questions about your commitment. In my view, genuine care motivates action without the expectation of compensation, but everyone has their own perspective.
This is great! So have I and so do I on an ongoing basis, as well as many other people. I hope you’ll continue to peruse code upgrades at your leisure in the future.
I agree. I don’t think this changes what’s being discussed though, does it?
The amount of effort required to do a diligent enough job does require incentive. Standard voting incentives don’t work (given that it doesn’t matter which button you click), and the well-documented diffusion of responsibility means that long-term stake implications (protecting the network) don’t work very effectively either (for the vast majority of voters, at least).
Hi all - a few people have asked about why this wasn’t caught during testing. This is a great question, and of course one that immediately came up on the team as well.
The basic answer is that there wasn’t an existing test in place and the same mistake that led to the error also resulted in not having a test.
The data migration got missed due to human error. Existing tests couldn’t have caught a problem with the new field that was introduced (because the field didn’t exist before this change). New tests ensured the new field was populated and relied upon in every case moving forward.
But because the data migration was missed, the need to test the migration was also missed.
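To make that distinction concrete, here’s a minimal sketch (the `Proposal` struct and `topic` field are hypothetical stand-ins, not the actual NNS Governance types): the added tests cover records created after the change, while the missing test would have asserted that records which existed before the upgrade were also migrated.

```rust
/// Hypothetical record type; `topic` stands in for the newly introduced field.
#[derive(Debug)]
struct Proposal {
    id: u64,
    topic: Option<u32>,
}

/// The kind of check the new tests did cover: records created after the
/// change always have the new field populated.
fn check_new_records(new_records: &[Proposal]) -> Result<(), String> {
    for p in new_records {
        if p.topic.is_none() {
            return Err(format!("newly created proposal {} is missing `topic`", p.id));
        }
    }
    Ok(())
}

/// The check that was missing: after the upgrade, records that existed
/// *before* the change must also have the new field populated by the
/// data migration.
fn check_migrated_records(pre_existing: &[Proposal]) -> Result<(), String> {
    for p in pre_existing {
        if p.topic.is_none() {
            return Err(format!("pre-existing proposal {} was not migrated", p.id));
        }
    }
    Ok(())
}

fn main() {
    // Records created after the change pass the check that did exist.
    let created_after_change = vec![Proposal { id: 2, topic: Some(7) }];
    check_new_records(&created_after_change).expect("new records populate the field");

    // With the migration missing, this is the check that would have failed.
    let pre_existing = vec![Proposal { id: 1, topic: None }];
    assert!(check_migrated_records(&pre_existing).is_err());
}
```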
So of course then we asked:
- How do we make it less likely we forget things in the future?
- Is there any useful test that could catch something without that something being well-defined?
- How could we make the eventually-inevitable errors that do happen less impactful?
We started addressing these internally and have other avenues we’re exploring, but so far we have agreed on the following responses to the above questions:
- To address the first point, we are creating checklists to make sure we remember to ask all the right questions when planning out the work as well as doing reviews.
- We thought of a test we could run against metrics on canisters with data, to simply check for large deviations before and after upgrades in our state machine tests. This will likely catch large anomalies (such as many proposals getting deleted); a rough sketch of the idea follows this list.
- We are working to prioritize canister snapshot support for NNS-controlled canisters along with our already-planned move of proposals into stable memory so we can stop garbage collecting them.
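To illustrate the second point, here’s a minimal sketch of what that deviation check could look like. This isn’t the actual state machine test harness; the `Metrics` type, metric names, and the threshold are assumptions, and in practice the before/after snapshots would come from scraping the canister’s metrics around an upgrade.

```rust
use std::collections::HashMap;

/// One snapshot of canister metrics, keyed by metric name
/// (e.g. "total_proposals", "total_neurons"). Hypothetical format.
type Metrics = HashMap<String, u64>;

/// Flags any metric whose relative change across an upgrade exceeds
/// `max_relative_change` (e.g. 0.1 for 10%).
fn find_large_deviations(
    before: &Metrics,
    after: &Metrics,
    max_relative_change: f64,
) -> Vec<String> {
    let mut anomalies = Vec::new();
    for (name, &old) in before {
        let new = *after.get(name).unwrap_or(&0);
        if old == 0 {
            continue; // nothing meaningful to compare against
        }
        let change = (new as f64 - old as f64).abs() / old as f64;
        if change > max_relative_change {
            anomalies.push(format!(
                "{name}: {old} -> {new} ({:.1}% change)",
                change * 100.0
            ));
        }
    }
    anomalies
}

fn main() {
    // Hypothetical numbers standing in for metrics captured before and
    // after an upgrade of a data-bearing canister in a state machine test.
    let before: Metrics = [("total_proposals".to_string(), 50_000)].into();
    let after: Metrics = [("total_proposals".to_string(), 1_200)].into();

    let anomalies = find_large_deviations(&before, &after, 0.10);
    assert!(!anomalies.is_empty(), "expected the proposal drop to be flagged");
    println!("{anomalies:#?}");
}
```

The idea is simply that a metric like the total proposal count dropping by an order of magnitude across an upgrade should fail the test, even when no field-specific assertion exists for it.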
We do our best to avoid bugs, but in any system of sufficient complexity, they will happen. We will keep working to make them less frequent, and more importantly, less impactful when they happen.
We’re sorry this happened, but we’ll use it to make the IC even better moving forward.
Thanks for this explanation @msumme. It makes sense.
Thanks @msumme
This sounds good. I’ve been considering setting up something similar: checking for any deviation in data returned from public methods, mostly as informational insight (a deviation isn’t necessarily an error). Deviations are then something that can be checked and justified, or else used to pinpoint unexpected side effects.
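For what it’s worth, here’s a minimal sketch of the kind of check I have in mind (the snapshot format and method name are hypothetical; the responses would be captured by calling each public query method before and after the upgrade):

```rust
use std::collections::BTreeMap;

/// A snapshot of responses from a canister's public query methods, keyed by
/// method name, with each response kept as its textual representation for
/// easy comparison. Hypothetical format.
type MethodSnapshot = BTreeMap<String, String>;

/// Reports every method whose response changed across an upgrade.
/// A difference is not necessarily an error; each entry is something a
/// reviewer can inspect and either justify or flag as an unexpected side
/// effect.
fn report_deviations(before: &MethodSnapshot, after: &MethodSnapshot) -> Vec<String> {
    let mut report = Vec::new();
    for (method, old) in before {
        match after.get(method) {
            Some(new) if new != old => report.push(format!(
                "{method}: response changed\n  before: {old}\n  after:  {new}"
            )),
            None => report.push(format!("{method}: no longer answered after upgrade")),
            _ => {} // unchanged
        }
    }
    report
}

fn main() {
    // Hypothetical snapshots captured before and after an upgrade.
    let before: MethodSnapshot =
        [("get_proposal_count".to_string(), "(50_000 : nat64)".to_string())].into();
    let after: MethodSnapshot =
        [("get_proposal_count".to_string(), "(1_200 : nat64)".to_string())].into();

    for line in report_deviations(&before, &after) {
        println!("{line}");
    }
}
```

Treating each difference as something to review and justify, rather than as a hard failure, keeps it informational in the way described above.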