I’ve put together a set of ICRCs that together form a wasm registry capable of deploying and upgrading canisters via governance (a sort of formalization of what the SNS does, plus some other helpful things). One of the items specifically describes a framework for having third parties verify builds. The actual process, incentives, and security of doing so were purposely left out so that various approaches can be tried against the same interface.
Specifically, ICRC-26 lays out the interface and the associated ICRC-3 blocks.
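For anyone who hasn’t opened the draft, here’s a rough sketch (in Rust with ic-cdk) of the kind of attestation data a verification endpoint could record. To be clear, the type and method names below are invented for illustration; this is not the actual ICRC-26 interface, and the draft is what defines the real method names and ICRC-3 block schemas.

```rust
// Hypothetical illustration only; the real ICRC-26 interface may differ.
use candid::{CandidType, Principal};
use serde::Deserialize;
use std::cell::RefCell;

#[derive(CandidType, Deserialize, Clone)]
struct BuildAttestation {
    wasm_hash: Vec<u8>,   // sha256 of the wasm module being attested
    verifier: Principal,  // the party that reproduced the build
    repo: String,         // source repository the build was run from
    commit: String,       // git commit that was built
    reproduced: bool,     // did the local build match wasm_hash?
    timestamp_ns: u64,    // when the attestation was submitted
}

thread_local! {
    static ATTESTATIONS: RefCell<Vec<BuildAttestation>> = RefCell::new(Vec::new());
}

// Record an attestation; in a real registry this would also write an ICRC-3 block.
#[ic_cdk::update]
fn attest_build(mut att: BuildAttestation) -> u64 {
    att.verifier = ic_cdk::api::caller();
    att.timestamp_ns = ic_cdk::api::time();
    ATTESTATIONS.with(|a| {
        let mut v = a.borrow_mut();
        v.push(att);
        (v.len() - 1) as u64
    })
}

// Return all attestations recorded for a given wasm hash.
#[ic_cdk::query]
fn get_attestations(wasm_hash: Vec<u8>) -> Vec<BuildAttestation> {
    ATTESTATIONS.with(|a| {
        a.borrow()
            .iter()
            .filter(|att| att.wasm_hash == wasm_hash)
            .cloned()
            .collect()
    })
}
```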
As part of this I reserved some space for ICRCs for the actual build process:
I’d love viewpoints (and maybe contributions?) from @icpp for C++, @timo for Motoko, @lastmjs for Azle and Kybra, and I’ll let you Rust folks hash it out. :) I don’t know that there is a huge hurry for this, but I thought I’d reserve the space since I assume we’ll eventually want to put something down on digital paper. I’ll probably take a swing at formalizing what Timo’s done on the Motoko side if Timo doesn’t do it first, closer to when we actually release something, as many of the initial modules we’ll put into the system will be Motoko.
The Grants for Voting Neurons program has demonstrated that developers are willing to perform this work when appropriate incentives are offered. It’s specialized work that nobody will do for free (especially not for every proposal), but our current tokenomics model doesn’t carve out any incentives to support the developers who are willing to do it. Verifying build reproducibility is a key deliverable (though not the only one) of what these reviewers have been doing for a while now. The program needs to be expanded so more reviewers can participate, and at some point it needs to move from DFINITY-funded grants to some other incentive structure sourced from the NNS (and SNS). However, several NNS (and SNS) framework improvements are still needed to make these kinds of incentives effective.
I believe at least two features in the upcoming Nucleon milestone on the roadmap will help the situation: incentivizing exceptional voting, and NNS known and named neurons. Hopefully what will happen (there are several forum discussions about this) is that incentives attract many credible and reliable reviewers for each proposal topic, and known/public neuron registration lets them signal which proposal topics they commit to reviewing. The NNS dApp could then display the reviewers who specialize in each topic instead of listing all known neurons, which would let people know who can be followed on each topic.
These changes alone don’t fully solve the problem, because they don’t provide any motivation to choose a followee other than DFINITY; that said, DFINITY does listen to these reviewers and takes their opinions into account before casting its own vote. It won’t be perfect at first, but it’s a start, and it lets neuron owners begin recognizing names that are committed to specializing on specific topics. There are actually a lot of people who would be willing to perform this work, but the governance framework needs significant change to align incentives properly. It would be awesome to create opportunities for developers building on the IC to earn supplemental income for work that directly helps protect and decentralize the network.
Thank you for your patient reply. I do have a small question, though. If we want to use incentives to drive this initiative, and I claim that something is “verified,” how can others determine whether I actually completed the verification? This seems crucial for distributing incentives fairly.
In the current work process, a reviewer is required to write a review and post it on the forum in the same thread where the proposer announces the proposal. This includes evidence of their build (a screen capture) as well as a description of their observations and their opinion of the proposal. That enables others who are skilled in the art to evaluate whether a reviewer is credible and to ask questions, and the reviewer then has to defend their review.

I think there is a role for auditors here as well: people who are incentivized to hold reviewers accountable and keep their reviews honest. There is already a lot of peer pressure to that effect if you look at the banter between various reviewers and others in the community, but formally, with the current grant program, DFINITY acts as an auditor. They require all reviewers to post links to their reviews in a spreadsheet so that once a month someone from DFINITY can verify the work was actually performed. All of this can be improved, of course, and I expect it to be improved over time.
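To make the mechanical part of that concrete: the core of a reproducibility review is rebuilding the wasm from the published source and comparing its hash to the hash in the proposal. A minimal sketch of that comparison in Rust (assuming the sha2 and hex crates; the path and expected hash are placeholders, not real values) could look like the following. The posting, defending, and auditing described above is the human part around it.

```rust
// Minimal sketch of the mechanical part of a reproducibility review:
// hash the locally built wasm and compare it to the hash in the proposal.
// Assumes the `sha2` and `hex` crates; the path and expected hash are placeholders.
use sha2::{Digest, Sha256};
use std::fs;

fn main() -> std::io::Result<()> {
    // Wasm produced by following the project's documented build steps locally.
    let wasm = fs::read("target/local-build/canister.wasm")?;

    // Hash published in (or derived from) the upgrade proposal.
    let expected = "0123...abcd"; // placeholder, not a real hash

    let actual = hex::encode(Sha256::digest(&wasm));
    if actual == expected {
        println!("Build reproduced: {actual}");
    } else {
        println!("MISMATCH: local {actual} vs proposal {expected}");
    }
    Ok(())
}
```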
It sounds quite rigorous, but I’m concerned that it might become ambiguous in practice. Perhaps we also need to standardize this process. Additionally, it seems somewhat cumbersome and tedious, even though there are incentives involved.
@wpb Through our discussion I have reflected on my idea, and its ultimate goal can be summarized as automating the review process through a smart contract (canister). When a developer proposes an update to a piece of software, they would pay a certain amount of tokens (the incentive we discussed) to what we might call a builder canister. Other users can then view the canister’s execution results. The builder canister (or its developers) receives the incentive, the developer gains trust, and users get a sense of security.
This also somewhat resembles the DecideAI verifier, which assigns a “unique person” label to an account. Here, the builder canister verifies the software and could similarly attach a small “built by canister” label to it.
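To make this concrete, here is a rough sketch of what such a builder canister’s entry points might look like in Rust. Everything below is hypothetical: the fee is just recorded as a number (a real implementation would verify an actual ICRC-1/ICRC-2 transfer), and the rebuild itself is assumed to happen elsewhere (an off-chain worker or build service that reports back), since actually compiling source inside a canister is beyond this sketch.

```rust
// Hypothetical sketch of a "builder canister"; names and flow are invented
// for illustration, not a specification.
use candid::{CandidType, Principal};
use serde::Deserialize;
use std::cell::RefCell;
use std::collections::BTreeMap;

#[derive(CandidType, Deserialize, Clone)]
struct BuildRequest {
    developer: Principal,  // who is proposing the upgrade
    repo: String,          // source repository to build from
    commit: String,        // commit expected to produce the wasm
    claimed_hash: Vec<u8>, // wasm hash the developer claims
    fee_paid: u128,        // fee attached to the request (verifying the real
                           // ICRC-1/ICRC-2 transfer is omitted here)
}

#[derive(CandidType, Deserialize, Clone)]
enum BuildStatus {
    Pending,
    Verified, // rebuilt hash matched claimed_hash -> "built by canister" label
    Mismatch, // rebuilt hash differed from claimed_hash
}

thread_local! {
    static REQUESTS: RefCell<BTreeMap<u64, (BuildRequest, BuildStatus)>> =
        RefCell::new(BTreeMap::new());
    static NEXT_ID: RefCell<u64> = RefCell::new(0);
}

// Developer submits a build request along with the fee.
#[ic_cdk::update]
fn submit_build_request(req: BuildRequest) -> u64 {
    let id = NEXT_ID.with(|n| {
        let mut n = n.borrow_mut();
        let id = *n;
        *n += 1;
        id
    });
    REQUESTS.with(|r| r.borrow_mut().insert(id, (req, BuildStatus::Pending)));
    id
}

// Whatever performs the rebuild (off-chain worker, build service, etc.)
// reports the hash it produced; the canister records the resulting label.
#[ic_cdk::update]
fn report_build_result(id: u64, rebuilt_hash: Vec<u8>) {
    REQUESTS.with(|r| {
        if let Some((req, status)) = r.borrow_mut().get_mut(&id) {
            *status = if rebuilt_hash == req.claimed_hash {
                BuildStatus::Verified
            } else {
                BuildStatus::Mismatch
            };
        }
    });
}

// Anyone can check whether a request carries the "built by canister" label.
#[ic_cdk::query]
fn get_status(id: u64) -> Option<BuildStatus> {
    REQUESTS.with(|r| r.borrow().get(&id).map(|(_, s)| s.clone()))
}
```

The open question in this sketch is who is trusted to call the reporting method: if it is a single off-chain worker, that worker becomes the thing you have to trust, which is the same trust problem the human reviewer process is meant to distribute.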