Is my 5-year dream of ICP and DFINITY shattered? Is there still a future? SNS-1 Decentralization Sale questions

@lei I truly appreciate your extensive feedback. As @Manu writes above, we plan to have a first response/summary about the lessons learned from the SNS-1 launch posted later today, addressing most of your SNS concerns. But your feedback goes beyond that. We will not be able to answer all questions within a day, but I will make sure we answer the rest in the coming days.

23 Likes

On this matter, and given what we have experienced over the last few days with SNS-1, I posted my worries about SNS-1 and the BTC integration on Discover 8 days ago.

The ease with which ‘bots’ could make use of the contract before launch made me question the BTC integration:

  • Are we fully certain this is attack-proof in any way?
  • Has it been audited by any external party?

Especially when it comes to something as big as the BTC integration, we as a community cannot afford any failure. The way the Dfinity team evaluated the SNS-1 product upon launch makes me question the safety of all products… Sadly (and probably not even fair), that doubt is now present.

I think I have summarized the community’s technical requests in the second part above, some of which have been backlogged for a long time.

1 Like

Obviously, there is no such thing as 100% security but the Bitcoin integration has undergone a significant amount of testing and verification, on the theoretical as well as the implementation level, both internally and using an external party.

6 Likes

Why do we call the botting of SNS-1 an attack when the “attacker” played within the rules?

2 Likes

I think this is a comprehensive summation of community feedback and smart investor diligence. Kudos, @lei! I appreciate the responses from each of the respective DF team members.

Due to the broad reach of the feedback, I think this post would be a great one for Dom to respond to. I’d like to know his responses, from his top-down perspective. I respect him as a world class cryptographer, but I’d like to respect him more as a CEO of the most innovative application of blockchain technology on the internet. If that is not the role he is MORE interested in filling, maybe he should hire a dedicated helmsman, and stick to the crypto?

10 Likes

I suggested that Dfinity create a CEO position a few months ago, but the community didn’t want to entertain a discussion. Some even threatened to quit ICP if that were to happen. It was seen as a move to kick Dom out of his project, which is far from the truth.

2 Likes

Great post, we need more voices like this from the community.

2 Likes

I believe they used Trail of Bits for the BTC integration audit and another unnamed auditor (potentially independent).

It’s not so much a challenge with II as a challenge with missing functionality. The Internet Identity system is designed to prevent a user from being tracked across dapps in default usage. Each dapp sees a different pseudonym, not the original II anchor. The way to solve this is with II “capabilities,” which also solve for smart contracts that run at different security levels. For example, imagine a Web3 gaming smart contract tagged “Game” that is being run on a single-node subnet (i.e. at the mercy of the operator, with no security, but running with high efficiency). How could that be trusted to send an instruction to the game’s “bank” smart contract that has been tagged “Fiduciary” (i.e. that is running on a 34X or higher subnet, which also has a bunch of features to make it very secure, but which increases cost beyond what would be acceptable for in-game actions)? The answer is that it would ask the user to sign a capability using II, which would be human readable (e.g. “Scope: Game XYZ Bank\n Action: Transfer\n What: BTC\n Amount: 100,000 satoshis\n From: Shoan\n To: Dom”). The game smart contract would call the bank smart contract instructing it to make the transfer, passing the capability in a parameter, and the bank smart contract would validate the capability before executing the transfer. There are a number of advantages with capabilities, including that when a user interacts with Service A, and Service A instructs Service B, they do not have to trust that Service A will properly instruct Service B. There are a huge number of problems with the fixed principal model used on Ethereum and other blockchains, and they extend far beyond privacy into security (Googling “security capabilities” will go through them in depth). Adding the capability signing feature to II should not be difficult and is urgently underway on my instruction.
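
To make the validation step concrete, here is a minimal, self-contained Rust sketch of the pattern described above. The `Capability` struct, its field names, and the stubbed signature check are illustrative assumptions rather than the actual Internet Identity interface; the point is only that the “Fiduciary” bank canister validates the user-signed capability itself instead of trusting whatever the low-security “Game” canister tells it.

```rust
// Hypothetical sketch of the capability flow described above; the struct
// fields, names, and the signature check are illustrative assumptions,
// not the actual Internet Identity API.

/// A human-readable capability the user signs with Internet Identity.
#[derive(Debug, Clone)]
struct Capability {
    scope: String,      // e.g. "Game XYZ Bank"
    action: String,     // e.g. "Transfer"
    asset: String,      // e.g. "BTC"
    amount_sats: u64,   // e.g. 100_000
    from: String,
    to: String,
    signature: Vec<u8>, // placeholder for the user's II signature
}

/// What the "Fiduciary" bank canister would do on receiving an instruction
/// from the low-security "Game" canister: validate the capability itself,
/// never trusting the caller.
fn execute_transfer(cap: &Capability, requested_amount: u64) -> Result<(), String> {
    // 1. Verify the user's signature over the capability (stubbed here;
    //    a real implementation would verify the II-derived key/delegation).
    if cap.signature.is_empty() {
        return Err("missing or invalid user signature".to_string());
    }
    // 2. Check that the instruction stays within the signed scope.
    if cap.scope != "Game XYZ Bank" || cap.action != "Transfer" {
        return Err("capability scope/action mismatch".to_string());
    }
    if requested_amount > cap.amount_sats {
        return Err("requested amount exceeds signed capability".to_string());
    }
    // 3. Only now perform the transfer (omitted).
    println!(
        "transfer {} sats of {} from {} to {}",
        requested_amount, cap.asset, cap.from, cap.to
    );
    Ok(())
}

fn main() {
    let cap = Capability {
        scope: "Game XYZ Bank".to_string(),
        action: "Transfer".to_string(),
        asset: "BTC".to_string(),
        amount_sats: 100_000,
        from: "Shoan".to_string(),
        to: "Dom".to_string(),
        signature: vec![0xAA], // pretend II signature
    };
    // The game canister forwards the capability; the bank validates it.
    assert!(execute_transfer(&cap, 100_000).is_ok());
    assert!(execute_transfer(&cap, 200_000).is_err());
}
```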

20 Likes

Proof of Personhood via People Parties is the ultimate solution for secure airdrops (or for boosting neuron voting power on a per-anonymous-human basis, and things like that), and people could even prove personhood specifically to participate in something like an airdrop, albeit there may be other solutions that trade security for less friction. It’s a shame that People Parties kinda got deprioritized, and there’s a lesson there. One of the reasons this happened is that in some quarters of the community there were a lot of loud voices that said we shouldn’t work on Internet Identity, the SNS framework, or the People Parties framework, among other things, and that only the community should work on such infrastructure. People Parties were partly deprioritized for that reason as a concession. In hindsight, I think we can all see that was a mistake. Lots of projects really need that functionality, and it hasn’t just magically appeared. We need the DFINITY Foundation to develop and contribute these foundational components, to guarantee that they are available for use by everyone. If people want to build alternative systems, or extend what DFINITY contributes, fine, but DFINITY should not step back because the ecosystem is still nascent. Also, I hear you that there ARE community solutions out there, and I think that if they are working and sound, they should have been used in the SNS-1 launch. There needs to be an evaluation of whether they can be incorporated into the next SNS launch and decentralization sale (worth remembering that SNS-1 was conceived as a test that would reveal problems, which it certainly has, so the next will be different…)

18 Likes

We need to wait for the forthcoming postmortem to understand what happened here. This shouldn’t happen in any system of exchange, let alone a crypto one. My guess is that it relates to smart contract asynchrony, and requests being made more than once. For example, the user sends a transfer, and then sends another transfer before the first has been received/processed. The result is that the first transfer is exchanged for governance tokens, and the others are returned to the sender. If this is the case, we need to look into design patterns that can prevent this. But I will wait until the postmortem before assuming what the cause is.
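
Until the postmortem is out this is only a guess, but if overlapping transfers really are the cause, one common design pattern is to make participation idempotent, so a repeated request can never be exchanged twice. A minimal sketch of that idea (the names and structure are hypothetical and are not the actual SNS swap canister code):

```rust
// One possible pattern (purely illustrative, not the SNS swap code): make the
// participation call idempotent with a caller-supplied request id, so that a
// second transfer sent before the first is processed cannot be double-counted.
use std::collections::HashMap;

#[derive(Default)]
struct SwapState {
    // request_id -> amount already accepted for that request
    processed: HashMap<u64, u64>,
}

impl SwapState {
    /// Accept a participation request exactly once per (caller-chosen) id.
    /// Repeats of the same id are rejected instead of being exchanged again.
    fn participate(&mut self, request_id: u64, amount: u64) -> Result<u64, String> {
        if let Some(&already) = self.processed.get(&request_id) {
            return Err(format!(
                "request {request_id} already processed for {already} tokens; refund the duplicate"
            ));
        }
        self.processed.insert(request_id, amount);
        Ok(amount) // amount exchanged for governance tokens
    }
}

fn main() {
    let mut state = SwapState::default();
    assert_eq!(state.participate(1, 500), Ok(500));
    // A second transfer arriving before/after the first completes is rejected
    // (and would be returned to the sender) rather than counted twice.
    assert!(state.participate(1, 500).is_err());
}
```

In a real canister the deduplication record would live in stable memory and the duplicate amount would be queued for refund rather than simply rejected, but the principle is the same.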

10 Likes

A question: if the game smart contract is controlled by a single node, how do we prevent a censorship attack? That is, how do we make sure the game smart contract will actually send the signed capability to the bank smart contract?

As someone who liked the people party idea a lot I’d like to share my point of view:

The idea was proposed at a time when there seemed to be a lot of feature creep on Dfinity’s mind: People Parties, Badlands, Endorphin, plus all the proposed protocol improvements.
This gave many the impression Dfinity was spreading its resources too thin across multiple projects. People Parties in particular were criticized because:

  • The proposed design had, and still has, some flaws: some could be used to game the system and are fixable, while others have to do with usability and can’t be mitigated due to how PPs work, e.g. people have to perform a tedious action every once in a while at a scheduled time.

  • Except for the NNS integration for governance rewards, they can be implemented by the community.

So why hasn’t someone built them? I think there are multiple reasons: some are skeptical about their design, others don’t like the idea and others thought Dfinity was already building them.

You gotta admit communication has been quite lacking on this front. Up until May/January they were on the roadmap and scheduled to release in summer. I had planned to participate in Supernova with a dApp based on the PP PoP, thinking it would be released around the end of the hackathon! It was never communicated that they had been dropped until I asked on Twitter!

Not many have the resources to compete with Dfinity, so if we know you guys are already working on something, it doesn’t make sense to waste resources and time only to be outpaced by a better team. Recently I’ve been working on a variation of your PP design in my free time, and the recent talks on the subject and the uncertainty over whether they’ll be reprioritized on Dfinity’s own roadmap have got me thinking about switching projects.

The same probably happened to other teams which have decided to focus on other use cases, e.g @modclub

Maybe it’s not that nothing gets done if you don’t work on it, but if you announce you’ll work on something, it’s less likely the community will start working on it, simple as that.

14 Likes

Curious: if SNS-1 is a test run, why was there no public testnet for end users to participate in?

Regarding transactions, we should recognize that a single transaction on the Internet Computer usually involves thousands of times more compute than a transaction on, say, Ethereum. Nonetheless, we need to understand what caused the congestion. Unfortunately other commitments have prevented me from digging deeply into what happened yet, but here is my initial understanding.

  1. The subnet was actually processing vast numbers of Update TX. The problem was that the SNS website uses Update TX to pull the data into its pages for security reasons. As congestion grew, and people were refreshing their pages, more and more Update TX were being created! Now, obviously, every Update TX has to pass through consensus, and involves a lot of cryptography, which is why the user experience should primarily rely on Query TX to obtain data (more on why that wasn’t done in a moment). Therefore, if we want to stick with the current SNS design, which is security optimized, then we need to make the processing of Update TX much more efficient. Since this is a good thing, and some of the other fixes will take longer, this is what we plan on doing…

  2. As it turns out, the bottleneck with Update TX was inefficient cryptography under load. Specifically, when there are large numbers of Update TX, the node machines are verifying the signatures on the TX individually. This is completely unnecessary. There is now work underway to do batch verification of the signatures on Update TX, and perform other optimizations, which will reduce the expense 100X or something (ask Jan C for exact…). We will have this in place before future SNS launches, albeit, given continuing ecosystem growth, I think it is credible that the load on future SNS launches and decentralization sales will be even greater. Therefore, once we have optimized Update TX efficiency, we will move on to adding new Query TX functionality, so they are used rather than Update TX in creating a web page (this was always what was intended)…

  3. Currently, all canister smart contracts export a Merkle root, which represents internal data that they might wish to return to those issuing Query TX. The smart contract developer only has to add the data structures inside their smart contract’s persistent memory pages to an internal Merkle tree, whose root is exported, and their data and objects will be automatically “certified”. This is because every 1-2 seconds, the blockchain signs the root of a Merkle tree built over the roots exported by the canisters, using a subnet chain key that is itself signed by the ICP master (NNS) chain key. This allows data returned to those making a Query TX to carry a cryptographic certification inside an HTTP header; the certification is essentially a Merkle path to a chain key signature. This can then be transparently validated by a service worker running in the web browser (current), or better, a browser that is modified and does the validation itself because it knows the master chain key. The beautiful thing about this system is that it scales without limit. Certified Query TX responses can be multiplied by caching on boundary nodes processing Query TX, say. The developer is pre-finalizing the results of query call transactions with predictable results, or results that have some temporal stability, allowing for vast transaction scaling. However… while this works great for things like HTML, JS, WASM, images etc that are served into the browser to create the user experience – and typically developers use an asset canister that does all of this Merkle stuff for them – it is more of a pain to do with dynamic data, which is why the SNS developers were using Update TX to securely pull dynamic data into web pages. Here are some exciting solutions being worked on:
    3.1 Special Data Construct That Makes Query TX Certification Easy. To certify data such as user account balances, which websites wish to display, the developer has to certify nearly everything by placing it into a Merkle tree whose root is exported for signing. That’s a pain to code up - which, again, is why the SNS developers chose to pull dynamic data into the web pages using Update TX. One of the solutions being worked on is a special kind of ICP-native data construct, which allows any data structure to be formed, and any data set to be maintained, and which automatically does the Merklization on behalf of the developer. Developers who do not want to mess with Merklization can then just build all their data structures using this special construct, and everything will be done for them. Someone is already working on that (shout out Johan). A minimal sketch of this certified-data pattern follows after this list.
    3.2 Transparent Network Certification Of Arbitrary Query TX. Even the solution above does not solve for certifying/pre-finalizing all types of Query TX. For example, what if a query call asks for the hash of a submitted value! There is no way to predict the values that users will ask to be hashed, so there is no way to pre-certify/pre-finalize such results, even using the special aforementioned data construct. Moreover, many developers don’t want to be forced to use a special construct to build their data structures. They want absolute freedom!! So what to do!? The solution again is fairly simple, and will arrive next year. Essentially, when a Query TX is made, the boundary node that receives it will first submit it to one replica, and obtain the signed result, which will also specify the block height at which the request was processed. The boundary node will then submit the hash of the result, and ask other replicas, which are randomly selected (not by the boundary node), to threshold sign that they obtain the same result when they run the query call at the same block height (this is possible because the Internet Computer can cache recently finalized versions of the persistent memory pages belonging to canister smart contracts). If enough replicas sign in agreement, the threshold signature shares are combined to sign the query call result. Obviously, if a subnet replica is faulty/cheats, a proof can be constructed and it can be slashed too, providing an incentive not to do that. In practice, most people will be happy with the security obtained by having the boundary node only query 2-3 subnet replicas, rather than a deterministically secure quorum of f+1 (which will contain at least one correct node, guaranteeing detection of faulty results), reducing the latency added to the query. This method has the wonderful benefit of allowing the results of completely arbitrary Query TX to be securely signed/certified, including even a query call that asks for the hash of a random value, while also freeing the developer from the need to do anything special in their code at all (no need to Merklize, no need to use a special library etc etc, just complete freedom). The latency added would be minimal, but the existing method of certifying/pre-finalizing query calls that return web assets like HTML and JS would still typically be used as it adds no latency at all to results.
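
As a rough illustration of 3.1, here is a sketch of the certified-data pattern a canister can already use today. It assumes the ic-cdk certified-data functions (set_certified_data / data_certificate), the candid derive macro, and the sha2 crate; the single hashed value stands in for the Merkle root of a real data structure. A production version would maintain a proper hash tree (e.g. via a certified-map style library) and return a witness alongside the certificate so the client can verify the specific value against the signed root.

```rust
// Sketch of the "certified query" pattern from point 3, assuming the ic-cdk
// certified-data API (set_certified_data / data_certificate) and the sha2
// crate; the single-balance "Merkle tree" here is deliberately trivial.
use std::cell::RefCell;

use candid::CandidType;
use sha2::{Digest, Sha256};

thread_local! {
    static BALANCE: RefCell<u64> = RefCell::new(0);
}

#[derive(CandidType)]
struct CertifiedBalance {
    balance: u64,
    // Certificate produced by the subnet (Merkle path + chain-key signature);
    // the browser/service worker validates it against the NNS public key.
    certificate: Option<Vec<u8>>,
}

/// Update call: mutates state and re-certifies the (here: single-leaf) root.
#[ic_cdk::update]
fn set_balance(new_balance: u64) {
    BALANCE.with(|b| *b.borrow_mut() = new_balance);
    let root = Sha256::digest(new_balance.to_le_bytes());
    ic_cdk::api::set_certified_data(&root[..]); // at most 32 bytes
}

/// Query call: cheap to serve, yet the response carries a certificate the
/// client can verify, so the website no longer needs an update call per
/// page refresh just to fetch trustworthy data.
#[ic_cdk::query]
fn get_balance() -> CertifiedBalance {
    CertifiedBalance {
        balance: BALANCE.with(|b| *b.borrow()),
        certificate: ic_cdk::api::data_certificate(),
    }
}
```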

20 Likes

What an epic string of posts, @lei. Your passion and painful pleas for the success of the IC are undeniable. No one can possibly accuse you and your criticisms of just spreading FUD.

At a high level, I think you would probably agree that most of your sections 1 and 2 regarding technical issues can probably be ironed out eventually, and there seem to be some positive reactions from Dom and others on those points. However, like me, I think what you are most deeply concerned about are the systemic issues in section 3 and in your Final Words post.

These issues are not only part of the root cause regarding the delays and critical fixes in sections 1 and 2, but they are also likely at the heart of why you see your dream being unavoidably shattered. In my view, all of these issues reflect a huge disconnect and power conflict over prioritization. Perhaps the biggest indicator of this conflict is how community-led priorities already approved by the NNS have been consistently delayed and put on the backburner in favor of DFINITY-led priorities.

The root cause of all this conflict comes back once again to what I have been calling the “collective prioritization chasm”, which is derived from a longstanding problem in social choice theory. It ultimately reflects the complete inability of the NNS to aggregate many competing priorities into a consistently rational list of collective priorities. To be fair, this is not just a problem for the NNS or DFINITY. It is a problem that every DAO faces and no large organization (let alone DAO) has ever solved to date.

I have offered to dedicate my time to experiment with solutions to this decentralization problem as part of my upcoming PhD, but DFINITY has declined to accept my assistance. This does not make me hopeful that this systemic conflict between the community and DFINITY will ever be solved or sufficiently reduced. In fact, it will likely only get far worse as the stakes become higher with sizable SNS token economies running on the IC. See my post here for some additional elaboration (and then perhaps we should talk further offline):

11 Likes

Is there a way for the community to solve this issue or is it a core protocol problem?

1 Like

What you say about DFINITY employees not being crypto natives is true and not true. DFINITY employs a lot of researchers with backgrounds in places like Google Research, and from academic backgrounds. Sometimes they lack some blockchain thinking. However, there are lots of other advantages. For example, they can apply more sophisticated engineering techniques that they draw from e.g. Google, which has allowed DFINITY to operate R&D at scale. It is a simple fact that no other crypto project on earth could deliver the kind of complicated blockchain technology that DFINITY has. Moreover, there are people within the organization who have deep crypto expertise, including me! Ethereum 2.0 looks very similar to one of my 2015 DFINITY designs, which were meant for Ethereum, with a few mods. Everyone now uses BLS, but I was using BLS to generate random numbers from early 2015, and DFINITY employed one of its creators from early 2017, while other blockchain projects took years to start using it. Everyone is now finally looking at using the WASM virtual machine to create their smart contract execution environments, but DFINITY employed its co-creator from early 2017 and has been using it forever. Everybody is now looking at using random numbers to drive blockchains, which was something I pioneered in 2014/15. Aptos is now using a design that runs transactions in parallel, which was something the Internet Computer was doing in 2018. In summary, a lot of cutting-edge blockchain thinking and tech actually comes from the DFINITY Foundation, and the Internet Computer is one of the oldest projects in the crypto industry. Any analysis of the technical history of blockchain shows that we are one of its most important innovators, pioneers and leaders. Albeit there are sometimes gaps in crypto knowledge amongst those working on the project, overall, in terms of our general design philosophy, we know exactly what we are doing.

You also mention decentralization and being permissionless, which is something we also care about, and where we have achieved an enormous amount relative to other blockchain projects. Because this is what we care about, the Internet Computer blockchain runs under the full control of a permissionless governance DAO/the Network Nervous System, which is unique in the industry (every other blockchain relies on a company that orchestrates network configurations and node upgrades behind the scenes). That’s why we created a Proof-of-Useful-Work blockchain that runs on a sovereign network of dedicated hardware/node machines, rather than a Proof-of-Stake network hosted by “validators” that are often just software instances running on Big Tech’s cloud, which are created at zero cost by running a script, and can be switched off by Big Tech at any moment (as per Solana losing 40% of its network when Hetzner switched off its validators only a couple of weeks ago) and are therefore not decentralized. That’s why we built a blockchain that can host Web3 and DeFi services that run end-to-end on the blockchain. In every other blockchain ecosystem, the web experience, and nearly all the data and data processing, lives on Big Tech’s cloud, or other centralized traditional tech. The Internet Computer pioneers unprecedented levels of decentralization and permissionless governance. So I think your view there is not fair either.

Regarding the idea that we don’t understand “composability,” I think this idea derives from thinking that arose in parts of the community some months ago. The thinking was that each user needs to have a single principal, so that when they are interacting with a service composed of lots of other services, all the component parts can see who the user is (i.e. the whole thing can be linked up). This approach unfortunately leads to the same issue that currently exists on other blockchains, namely that all of a user’s activity can be attributed to them. For example, if I get Bob’s EA (Ethereum account) principal, I can see everything that Bob has done on Ethereum. The people propounding this idea claimed that because Ethereum works this way, this was the “crypto way” of doing it. But this belief is entirely mistaken. Crypto does not want to make it easy for users to be tracked to the extent that everything they ever do can be attributed to them. Crypto is concerned about privacy. Ethereum only works this way because there was no time to do anything different while it was being architected in 2014-15, i.e. it’s a flaw, not a feature! The solution to composability on the Internet Computer is to use Internet Identity security capabilities, which I have described in an earlier reply in this thread. This allows for composability, while preventing a user’s activity across different Web3 and DeFi services from being doxxed.

33 Likes

If I didn’t strongly believe there is a solution to this seemingly intractable problem, or at least a way to reduce its negative impact dramatically, I wouldn’t be dedicating the next five years of my life to exploring it more formally. I have already been exploring it less formally for many years, so my confidence is not unfounded.

I’m not even suggesting that we bake collective prioritization capabilities into the core protocol anytime soon, if ever. An offline experimental process could simply be used to inform and guide DFINITY (and the community) towards the path of highest collective welfare when making its own prioritization decisions, or in prioritizing proposals already approved by the NNS. Ultimately, DFINITY could simply choose to ignore this guidance, which is their prerogative as an independent organization. Only if the collective priorities from this process consistently resonate with the community and dramatically reduce prioritization conflicts would it ever make sense to incorporate it more formally into the NNS. It would likely take years to run this experiment sufficiently and to build the necessary community confidence, which is why we should start experimenting with solutions now.

3 Likes