Let's Review the AO Whitepaper's Characterization of ICP and AO's Security Model

Right, sure you can use more CUs for one task. But how many? What is the security level and how is it calculated? What happens when CUs misbehave? Who is going to coordinate all these tasks? How does staking take place? Who governs staking and how is this process decentralized? Which token is used? How does this token relate to the security of my system?

I can hardly call AO a trustless/decentralized system before all these questions are answered.
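To make just one of those questions concrete, "what is the security level and how is it calculated": in committee-based systems this usually means the probability that a randomly sampled committee of CUs crosses the dishonest threshold. The whitepaper fixes none of these parameters, so every number below is hypothetical; this is only a sketch of the kind of calculation one would expect to see.

```python
from math import comb

def p_compromised(pool: int, bad: int, committee: int, threshold: int) -> float:
    """Probability that a committee sampled uniformly from the pool
    contains at least `threshold` malicious members (hypergeometric tail).
    Pool size, malicious fraction, and threshold are all assumptions here;
    the AO whitepaper specifies none of them."""
    total = comb(pool, committee)
    return sum(
        comb(bad, k) * comb(pool - bad, committee - k)
        for k in range(threshold, committee + 1)
    ) / total

# e.g. 1000 CUs, 200 of them malicious, committee of 13, 10 needed to forge
print(f"{p_compromised(1000, 200, 13, 10):.2e}")
```

Without published values for the pool size, the stake requirements, and the sampling mechanism, this probability is simply not computable, which is the point of the questions above.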


Right, sure you can use more CUs for one task. But how many? What is the security level and how is it calculated? What happens when CUs misbehave? Who is going to coordinate all these tasks? How does staking take place? Who governs staking and how is this process decentralized? Which token is used? How does this token relate to the security of my system?

Just curious, but aren’t these questions relatively easy to solve? Like, just by using Proof of Authority or something similar, by each set of actors/subnets interested.

Show me the code in production?

It’s a new project, I don’t think we should be this hard on it :slightly_smiling_face:

Basically it has nothing to do with “blockchain”

Especially taking into account that ICP has its own set of “weird blockchain design choices”, as a blockchain OG recently told me (not Vitalik, but close enough).

In my opinion, the idea of providing maximum flexibility has its merits, and we should be looking at how to cooperate rather than attack them.


Your OG forgot that his favorite blockchain can be shut down by a single cloud provider operating in a single jurisdiction. So his smart contracts can disappear far more easily.

As already happened with Solana recently: not a hypothetical, but a real fact.

I think the AWS issue is overblown and ICP might be more vulnerable in this aspect despite having 100% independent nodes.
In a free market everyone goes for the path of least resistance, which in crypto’s case means running nodes on big-tech clouds. But nothing forces them to, and these protocols are not developed with a set of assumptions that relies on AWS to function properly.
If tomorrow ETH nodes were banned from GCP or AWS, there’d be a temporary outage at worst; then nodes would come back online with improved decentralization for the network.

On the other hand, if an international institution like the IMF were to decide that anything DeFi-related is illegal, we’d only have two options: delete the canisters or move them to a subnet with nodes run in non-compliant countries, which could be very few and would severely impact the user experience and decentralization of those subnets.


You are describing the whole value proposition of ICP, haha. That’s exactly why I’m betting here; nothing new. The point is being able to move things between jurisdictions and avoid censorship.

This is a bit naive. AWS has a central authority: if the US told them to shut down a specific service worldwide or face restrictions at home, they would comply.

If tomorrow, cloud providers ban all crypto nodes, sharded blockchains will be screwed and encounter data loss since data is not replicated on all nodes.

If governments want to ban crypto, they could easily do it by imposing a 10-year jail sentence for people running nodes, forcing ISPs to filter out the traffic, and thereby causing trust in crypto to plummet. It could only remain as a niche thing. This could happen if only the US and EU agree to ban crypto, leading to exchanges shutting down and prosecuting those who allow US/EU citizens to trade.

As for ICP, the NNS control is what makes it great. It’s not a dark web. People often confuse a full-stack blockchain with existing token databases. Imagine running child trafficking services and other illegal content without the ability to prevent that; governments would hold all node providers accountable. We saw Bitcoin and Ethereum transactions being censored, and non-US citizens prosecuted.

The ability for ICP to enforce compliance through NNS gives developers and enterprises assurance that it has a global aim and would not be stopped.


I think you are misguided here. The replication factor is NOT the whole story.

Suppose 10/13 is a good threshold, and there are indeed 13 unique and independent CUs. Now 10 of them have computed output message B from input message A. Can you now trust message B and take your own actions based on B (e.g. buy or sell on another market)?

No, you cannot because it is unsafe. You don’t know whether A was computed correctly in the first place. These 10 CUs are not responsible for verifying A’s correctness either, and their stake won’t be slashed even if A turns out to be wrong, because they computed from A to B correctly.
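The recursion in this argument can be sketched directly. In the toy model below (names, the 10-of-13 quorum, and the attestation counts are all illustrative, not from the whitepaper), a naive check that only looks at the last hop accepts B, while an honest check that follows the causal history rejects it because A was never verified:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """Toy model of AO-style message lineage (all names illustrative)."""
    name: str
    attestations: int                  # CUs that signed this hop's result
    parents: list["Message"] = field(default_factory=list)

QUORUM = 10  # hypothetical 10-of-13 threshold from the example above

def last_hop_ok(msg: Message) -> bool:
    # What the 10 CUs actually vouch for: only the A -> B computation.
    return msg.attestations >= QUORUM

def trusted(msg: Message) -> bool:
    # B is only as trustworthy as every message in its causal history.
    return last_hop_ok(msg) and all(trusted(p) for p in msg.parents)

a = Message("A", attestations=3)            # A never reached quorum
b = Message("B", attestations=12, parents=[a])

print(last_hop_ok(b))  # True: 10+ of 13 signed the A -> B step
print(trusted(b))      # False: A itself was never verified
```

The gap between those two answers is exactly the unsafe window: acting on `last_hop_ok` alone means acting on an output whose input may be wrong, with no one's stake on the line for it.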

The only sensible decision in an optimistic setting is to wait until message B finalizes, which usually implies that A has also finalized. I’ll just quote myself again.


We can also compare this to optimistic ETH rollups with a synchronous messaging model, like Arbitrum.

Arbitrum has a 7-day challenge window, and each bonding party (both the party submitting the L2 state and the challenger) needs to put down 3600 ETH. Note that the state submitter is responsible for ALL state, not just a tiny part of it as in AO.

AO’s whitepaper does not mention concrete numbers, and I fail to see how any setting could realistically work. Too short a challenge window or too little stake would be insecure, but you also can’t demand too much stake from each individual unit, because no single unit is responsible for ALL state.
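The economic bind can be sketched as a simple expected-value calculation. All numbers below are illustrative assumptions (only the 3600 ETH bond and 7-day window come from the Arbitrum comparison above; the ETH price, detection probability, and extractable value are made up):

```python
def attack_ev(stake: float, p_caught: float, extractable: float) -> float:
    """Expected value of submitting a fraudulent state update.
    stake: bond slashed if a challenge succeeds within the window.
    p_caught: chance an honest challenger appears before the window closes.
    Illustrative model only; the AO whitepaper gives no such numbers."""
    return (1 - p_caught) * extractable - p_caught * stake

# Arbitrum-style: one huge bond over ALL state (3600 ETH at an assumed
# price) makes fraud deeply unprofitable even for large extractable value.
print(attack_ev(stake=3600 * 3000, p_caught=0.99, extractable=5_000_000))

# Per-process bonds: a small stake guarding one process, with a weaker
# chance of being challenged, flips the sign in the attacker's favor.
print(attack_ev(stake=1_000, p_caught=0.5, extractable=100_000))
```

The first case comes out strongly negative and the second positive, which is the dilemma stated above: per-unit stakes small enough to be practical are too small to deter fraud on any individual process.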


Isn’t the idea of this “chain” to be flexible? Each dapp can choose its level of security by adjusting parameters like stake, challenge window, and the set of units. Some dapps might prefer a minimal level of security and execute immediately, while others might require higher security, with a longer window and a fixed set of units with a high stake. And I’d guess these levels of security won’t get mixed, etc.
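One way to picture that “choose your own security level” idea (every field name and number here is hypothetical, not from the whitepaper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityPolicy:
    """Per-dapp parameters as imagined above; all fields hypothetical."""
    min_stake: int         # tokens each unit must bond
    challenge_window: int  # seconds allowed to dispute a result
    committee_size: int    # units attesting each result

def at_least(actual: SecurityPolicy, required: SecurityPolicy) -> bool:
    # A path of messages is only as strong as its weakest hop's policy.
    return (actual.min_stake >= required.min_stake
            and actual.challenge_window >= required.challenge_window
            and actual.committee_size >= required.committee_size)

fast = SecurityPolicy(min_stake=100, challenge_window=60, committee_size=3)
strict = SecurityPolicy(min_stake=10_000, challenge_window=7 * 24 * 3600,
                        committee_size=13)

print(at_least(fast, strict))  # False: one weak upstream hop breaks it
```

The catch, of course, is enforcement: a strict dapp can state `strict` as its requirement, but it can only apply `at_least` to hops it actually knows about, which is where the mixing of levels becomes hard to rule out.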


Message A must have been the result of correct computation yes. On ICP the security of message A is backed by the replication factor of the nodes that computed the message and signed it (and you just follow everything back to the first message, each hop is checking the signatures from the subnet that supposedly executed and signed it). That’s what I’m saying about replication factor on ICP, all of ICP’s security is rooted in the size of the subnets involved in computing those messages.

Is this understanding incorrect?
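That hop-by-hop check can be sketched like this. To be clear, this is NOT the real IC API; the dict layout, key table, and stand-in “signatures” are all invented for illustration. On the real IC each hop would carry a subnet threshold signature whose security is the replication factor of that subnet:

```python
# Toy sketch of the argument above: trust in a message reduces to
# checking each hop's subnet signature back to the origin.

def verify_chain(msg, subnet_pubkeys, verify_sig) -> bool:
    """msg: dict with 'payload', 'subnet_id', 'signature', 'parent' (or None).
    verify_sig: callable(pubkey, payload, signature) -> bool.
    Walks the lineage; every hop must carry a valid signature from the
    subnet that supposedly executed it."""
    while msg is not None:
        key = subnet_pubkeys[msg["subnet_id"]]
        if not verify_sig(key, msg["payload"], msg["signature"]):
            return False
        msg = msg.get("parent")
    return True

# Stand-in "threshold signature": valid iff it equals key + payload.
fake_verify = lambda key, payload, sig: sig == key + payload
keys = {"subnet-1": "K1:", "subnet-2": "K2:"}
a = {"payload": "A", "subnet_id": "subnet-1", "signature": "K1:A", "parent": None}
b = {"payload": "B", "subnet_id": "subnet-2", "signature": "K2:B", "parent": a}

print(verify_chain(b, keys, fake_verify))  # True: every hop checks out
```

The structural difference from the optimistic setting is that each hop here is backed by a signature that already finalized, so the check terminates without waiting on any challenge window.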

If you read my last reply to @lastmjs, it was a concrete example where a dapp can’t choose the security parameters of its upstream’s upstream. The whole security-setting approach is flawed due to the lack of finality.

Isn’t it easy to verify that A was computed correctly, and inductively everything else? For instance, if we assume that everything is computed and attested by the same 13 (staked) CUs (or whatever actors the whole process needs), isn’t finality something that should and can be built on top of the protocol they offer? I feel you are taking the draft very literally, whereas they just gave a light overview of how things can work at the base, leaving the rest to be built on top.

Realistically, no one will build any of those things. The world always takes the option that solves everything by default and removes complexity; if your product adds complexity instead of removing it, it will fail. Why would an enterprise choose the harder, less secure path over one proven by pure cryptography rather than merely “optimistic” guarantees?

No, it is actually very difficult, because in an async world no one has the global state. If someone does, the system is basically reduced to a single chain, which is no longer as scalable.

It is clearly not AO’s vision to let everything be computed by the same set of CUs. You are making an assumption that this whitepaper does not make.

I mean, not everything on AO, just everything from the interacting set of processes. I would guess it’s possible to build something like the IC on top, and then much simpler systems. I see the draft paper as “we have decentralized permanent storage, so here is our vision to build smart contracts on top as flexible as possible”, not as something super polished or final. Perhaps you have a better idea on how to do it given these constraints?

You will find yourself building an entire blockchain, but it’s OK, it seems like you love overcomplicating your life.

It literally took 4 years for the Ethereum Foundation to design and develop the PoS version of Ethereum, from 2018 to 2022.

I wonder why everything seems so easy to solve at your side? Maybe you can share your code to make the ao really trustless and decentralized now?


I wonder why everything seems so easy to solve at your side? Maybe you can share your code to make the ao really trustless and decentralized now?

I’m not involved in AO and not planning to work on it. I think they can rely on Arweave when needed, since they have BFT consensus there. Or any other blockchain, I guess.