Let's Review the AO Whitepaper's Characterization of ICP and AO's security model

AO is a general-purpose computing platform coming from the Arweave ecosystem. In my opinion it is the most promising project in the Web3 ecosystem, and the one most similar to ICP.

I would like to ensure that AO characterizes ICP correctly (I can reach out to the team during this draft process). I would also like to know if AO is a serious alternative to ICP, and to push ICP to improve where necessary.

These goals may also be broadly interesting and useful to DFINITY and the ICP community.

AO now has a whitepaper. Here’s a link to a recent draft: https://arweave.net/7n6ySzBAkzD4KZoTviHtskVlbdab_yylEQuuy1BvHqc

I would like to invite all interested and especially knowledgeable engineers and researchers from DFINITY to comment on the characterization of ICP in the whitepaper.

I also invite critique into AO’s security model, as a continuation of the discussion here: Let's solve these crucial protocol weaknesses

Tagging some people I would love to have look at the paper’s ICP section and security model in general: @Manu @ulan @free @bjoern @PaulLiu @timo @victorshoup

P.S. Would be very neat to get a paper or other analysis like Proof of history: what is it good for? but for AO.


I haven’t taken a deep look, but searching the quoted whitepaper draft, the word “verification” turns up 2 counts, and “verify” only 1.


We have discussed ao with tons of professionals in this industry; the conclusion is simple:

ao = Arweave Ordinal.
ao processes = Ordinal indexers that can exchange messages

That’s it.

It replaces the Bitcoin network with Arweave for inscription storage and carries out computation off-chain. Sure, it has “unbounded resources” for computation, because it has no consensus mechanism. Basically, it has nothing to do with “blockchain”.


I’m interested in this as a developer using an ICP + ArWeave stack. Not as a protocol-level expert, but I’ll add some observations based on experience.

The whitepaper introduces ICP as having inherently limited scalability because it requires consensus on the results, not the inputs, of computations. Meanwhile, every other major blockchain performs consensus on results, and ICP is arguably the most scalable of them. I think this language is unfair until it specifies what compute/scalability limitations are being referenced.

Everything else seems a fair characterization, albeit without mention of the tradeoffs involved in having no fixed node incentives, a governance-heavy approach, or one-size-fits-all security.

While we wait for AO, I think ICP folks should be more open to the idea of using other chains in their stack. The common rhetoric here that trips up newcomers is ‘store everything on ICP’. Then, after searching for or building an orthogonally persisted database (@lastmjs, you remember pseudograph), you find something like Arweave that gives you GraphQL over unlimited data in a few lines of code. In the meantime, using ICP this way removes most protocol weaknesses. For those who love the Actor Model, this will likely remain the perfect combo.


Really interested in what DFINITY team members have to say: how can AO claim they are able to train LLMs on chain? As I remember, they went from off-chain smart contracts in their docs to “train AI on chain”, which is very strange.


No, they cannot. I’m not yet aware of any LLM training process that is completely deterministic, which is a prerequisite to running it either “on chain” or “off chain but verifiable later”, as in the case of AR. So either they have made a significant breakthrough, or these are not LLMs.
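To illustrate why determinism is the prerequisite: a verifier can only catch misbehavior if honest re-execution reproduces byte-identical output. A minimal sketch (the function names are illustrative, not from any AO or ICP code):

```python
# Sketch: "verify later" means re-running a computation and comparing digests.
# Any nondeterminism (unseeded RNG, thread ordering, etc.) makes an honest
# node's result indistinguishable from a cheater's.
import hashlib
import random


def digest(output: bytes) -> str:
    """Fingerprint of a computation's output."""
    return hashlib.sha256(output).hexdigest()


def deterministic_run(inputs: list[int]) -> bytes:
    # Same inputs always produce the same bytes.
    return ",".join(str(x * x) for x in inputs).encode()


def nondeterministic_run(inputs: list[int]) -> bytes:
    # Mixes in randomness, as e.g. stochastic training steps would.
    return ",".join(str(x * random.random()) for x in inputs).encode()


inputs = [1, 2, 3]
# Honest re-execution of a deterministic computation verifies:
assert digest(deterministic_run(inputs)) == digest(deterministic_run(inputs))
# A nondeterministic computation fails verification even when run honestly
# (the two digests differ with overwhelming probability):
print(digest(nondeterministic_run(inputs)) == digest(nondeterministic_run(inputs)))
```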


I’m not sure they said training, if I remember correctly they’re talking about inference. But the question is if AO’s computational model can be considered on chain.


Hey there, this actually is possible to do on AO and was done live yesterday. Here’s the video: https://www.youtube.com/watch?v=e8uSxTXnlsw

Here’s a breakdown on that video as well published by some from our team (building on AO): AI on AO: Key Highlights - Community Labs Blog

Hope this is helpful. Interested in contributing to the discussion not to position AO as superior, but to make sure both technologies have correct representations of the facts 🙂

Still a fan of ICP!


Just to share some random comments after having a cursory look at the whitepaper.

I think the two most significant pieces in this whitepaper are “AO-Sec Origin” and “SIV” (for which, strangely, I cannot find any definition, so I don’t know what it stands for). Both of them would resort to more traditional consensus mechanisms. For example, here is a snippet:

This is quite understandable, because without consensus you would never be able to verify whether a challenge is sound, and subsequently determine whether a stake should be slashed. However, the paper does not explain how many nodes are required to run the consensus protocol for “AO-Sec Origin”, what exactly these nodes have to do in order to verify a challenge, or how this consensus protocol can ensure security on its own.

For example, consider a challenge that claims a CU has misbehaved. Does an “AO-Sec Origin” node re-run the computation itself in order to verify? Does it delegate the re-run to randomly chosen other CUs? I think more explanation is needed here.

Another recurring theme in this paper is that everything, including security, is “customizable”, as if that were a good thing. I disagree. As anyone familiar with information-flow security analysis will tell you, “high” security does not compose with “low” security to become “mid-level” security.


I’ll also add that this whitepaper is definitely a step towards providing more substance behind the buzzwords, but I’d maintain my previous conclusion:


Thanks for the links. They are helpful for people who want the latest updates, but they are also “not helpful” as a reply to my comment, because I don’t think training was mentioned (at least not in the blog article; I don’t have time to watch the full two-hour video).

So maybe the correct conclusion is “AO can run deterministic LLM inference computation using Wasm64, but cannot yet run LLM training”.


I’ll link some of my previous AO thoughts below:

Note: I have not kept up with AO since mid-March, so I’m not sure if anything has changed.

Note 2: Since mid-March I’ve been a couple of days of programming away from having an operational AO CU running on the IC. I’m sorry, I’ve been busy. 🙂 The goal after that was to make an SU. A CU + SU on the IC is probably the most secure, straightforward, and well-architected AO configuration at the moment (unless things have changed since March).

Note 3: My general feeling is that AO does a better job at mandating data permanence, but that is doable on the IC. There may be some things that are outside the IC performance-wise, but as Paul mentions, the faster you go, the harder it is to prove what you’ve done and for others to confirm it.


This is exactly the point. Who runs the “AO-Sec Origin”? Who governs it? How can it determine the correctness of other processes? Re-execution? ZK? Where is the code for that function?

I bet that after the AO team actually builds the consensus mechanism behind AO-Sec Origin, they’ll find they have reinvented the wheel of Ethereum PoS, or (less likely) ICP.


I send one image to two ao processes with the same AI model.

One tells me it is a cat. The other says it is a dog.

Which one should I believe?

Please don’t ask me to run my own AI model to verify it myself. If I could do that, I wouldn’t need ao in the first place.


Straight from AO’s whitepaper, summarized using Gemini AI:

AO’s approach to consensus is quite unique and differs significantly from traditional blockchain models:

Lazy Evaluation and Holographic State

AO doesn’t directly reach consensus on the state of computations (the outcome or results). Instead, it focuses on consensus on the input data (the messages) and their order. This is achieved through:

  1. Scheduler Units (SUs): They assign a unique, incremental number (a slot) to each message received for a process. This ensures an agreed-upon order of messages.

  2. Arweave Persistence: The assigned message and its slot number are then permanently stored on Arweave. This creates an immutable log of messages and their order, forming the basis for consensus.

This combination forms what AO calls a “holographic state.” The actual state of a process (its current memory and data) isn’t constantly calculated by all nodes. Instead, it’s implied by the message log on Arweave. When needed, a Compute Unit (CU) can calculate the state by replaying the message log from the beginning.
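The “holographic state” idea can be made concrete with a small sketch. This is my own illustrative code, not AO’s implementation; the counter process and the `assign_slots` / `replay` names are assumptions:

```python
# Sketch of "holographic state": state is never agreed on directly, only
# implied by an ordered, persisted message log. Any CU can recompute it by
# deterministically folding the log from the beginning.

def assign_slots(messages):
    """Conceptually a Scheduler Unit's job: give each message a unique,
    incremental slot number, fixing an agreed-upon order."""
    return list(enumerate(messages))  # [(slot, message), ...]


def replay(slotted_log, transition, initial_state):
    """Conceptually a Compute Unit's job: deterministically replay the
    ordered log to materialize the process state on demand."""
    state = initial_state
    for _slot, msg in sorted(slotted_log):
        state = transition(state, msg)
    return state


# Example process: a counter reacting to "inc"/"dec" messages.
def counter(state, msg):
    return state + (1 if msg == "inc" else -1)


log = assign_slots(["inc", "inc", "dec", "inc"])
print(replay(log, counter, 0))  # deterministic: always 2
```

Because `replay` is deterministic, every honest CU that folds the same log reaches the same state; that is the sense in which the log on Arweave “is” the state.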

Decentralized Computation

Unlike traditional blockchains where every node does the same computation, AO delegates computation to specialized Compute Units (CUs). These CUs compete to offer their services, and users or messenger units choose which CU to use based on factors like price and performance.

Trustless Verification

While CUs perform the computations, the results are verifiable because:

  • Deterministic Execution: The execution environment (the virtual machine) is deterministic, meaning the same inputs will always produce the same outputs.
  • Message Log on Arweave: The entire message history is available on Arweave, so anyone can verify the results by replaying the log.

Benefits of AO’s Consensus Mechanism

  • Scalability: By not requiring every node to compute every state, AO can scale to support a massive number of processes.
  • Efficiency: Computation is only done when needed, saving resources.
  • Trustlessness: The results are verifiable by anyone due to the deterministic execution and the immutable message log.

Key Differences from Other Blockchains

  • Ethereum, Bitcoin, Solana, etc.: Rely on on-chain computation, where all nodes participate in every calculation. This limits scalability.
  • Akash: Provides a decentralized marketplace for computation but lacks the trustless guarantees of AO’s verifiable results.

AO and the Internet Computer Protocol (ICP) share a common inspiration: the Actor Model of computation. However, their approaches to consensus and computation differ significantly, leading to distinct advantages and trade-offs.


  • Actor Model: Both AO and ICP are built around the idea of “actors” (processes in AO) that communicate through messages. This provides a natural framework for concurrent and distributed computation.
  • Focus on Scalability: Both aim to address the scalability limitations of traditional blockchains.
  • WebAssembly (WASM): Both utilize WASM as a virtual machine for executing code, offering flexibility and performance.


| Feature | AO Protocol | Internet Computer Protocol (ICP) |
| --- | --- | --- |
| Consensus Mechanism | Lazy evaluation of state. Consensus on message logs stored on Arweave, not on the computed state. | Chain-key cryptography and Probabilistic Slot Consensus (PSC) for fast finality. |
| Computation | Delegated to Compute Units (CUs). Users choose CUs based on price and performance. | Performed by replicas within subnets. |
| State Verification | Through deterministic execution and the immutable message log on Arweave. | Through consensus among replicas. |
| Scaling Approach | Horizontal scaling by adding more CUs. No inherent limit on the number of processes. | Vertical and horizontal scaling by adding more powerful nodes and creating more subnets. However, there are practical limits on the number of subnets due to the need for cross-subnet communication. |
| Trust Model | Trustless due to verifiable computation results. | Requires trust in the correct implementation and operation of the protocol and the honesty of a majority of nodes. |
| Smart Contract Integration | Easy integration with existing Arweave smart contract platforms (Warp, Ever, etc.) through the unified message-passing layer. | Smart contracts are natively supported within the ICP ecosystem. |
| Development Experience | Familiar to developers experienced with message-passing and actor-based systems. | Requires learning ICP-specific concepts and tools. |
| Current Status | Active development, testnet launched with basic features. | Mainnet live, but has faced challenges with scalability and adoption. |

Which is Better?

There’s no one-size-fits-all answer. The best choice depends on your specific use case and priorities:

  • AO: If you prioritize trustlessness, verifiable computation, and the flexibility to integrate with existing Arweave smart contracts, AO might be a better fit.
  • ICP: If you need high throughput and fast finality, and are comfortable with the trade-offs of a more complex system and a higher degree of trust, ICP might be a better choice.

Reading @lastmjs’s tweets, I feel that @lastmjs is beginning to lose confidence in ICP, just like many other developers. Does @dfinity need to reflect on why so many developers have lost confidence in ICP and left the community?

By the way, I am one of the developers.


AO’s approach doesn’t look like it can solve cybersecurity issues for enterprises; the data that powers AI doesn’t seem like it will be safe and private (encrypted) by default. AO, in my view, is something that could easily be implemented on top of ICP, with ICP as a base layer providing security BY DEFAULT.

@PaulLiu, can you correct me please: ICP’s value proposition at this point in time, according to Dominic’s vision, is that AI systems will be unhackable, tamper-proof, and unstoppable, and that all data deployed to ICP via canisters gets the properties that only a blockchain provides. Will these properties be enabled by default with AO’s approach, or is this value proposition lost on the AO network? Thanks.

What happens in AO when you replay all the input data (the messages) and get a different result? To my current understanding, AO does not have an answer to this fundamental question.


Why are you losing confidence in ICP? Is it related to the tech stack or the price?

You would not have achieved consensus; you’d just compare the results, right?

Maybe use more than two CUs: 2-of-3, 10-of-13, etc., depending on the level of security you want.
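The N-of-M idea above is easy to sketch: query several CUs and accept a result only if a configurable quorum agrees. This is my own illustration of the suggestion, not anything AO specifies:

```python
# Sketch: accept a CU result only when enough independent CUs agree on it.
from collections import Counter


def quorum_result(results, threshold):
    """Return the agreed-upon result if at least `threshold` CUs reported
    it, else None (no quorum; escalate, e.g. re-query or raise a challenge).
    """
    value, count = Counter(results).most_common(1)[0]
    return value if count >= threshold else None


# 2-of-3 example from the cat/dog scenario above:
print(quorum_result(["cat", "cat", "dog"], threshold=2))  # cat
print(quorum_result(["cat", "dog"], threshold=2))         # None
```

Note this only raises the cost of cheating; unlike ICP’s subnet consensus, nothing here forces the CUs you queried to be independent or staked, which is exactly the question about AO-Sec Origin raised earlier in the thread.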