Internet Computer Roadmap — Year 4 And Beyond

Hey @ajismyid, there’s a new idea that @massimoalbarello published a few days ago which is related to Solid and hence Inrupt:

You guys should move the discussion to that thread!

cc @skilesare


I agree, will reach out in DM

Glad to see that t-EdDSA signatures are on their way. Can you give us a more specific timeline? When can we use them in a production environment? We could really use them now :laughing:

Plus, is there any timeline for improved t-ECDSA latency? Currently we are seeing more than 10 seconds of delay when using it for signatures. Shorter signing times would definitely help us a lot.

Hi @CoolPineapple!

Isn’t the shared security paradigm as inherent to ICP as it is to IaaS cloud providers? In my opinion it is: ICP provides you a secure platform, while a dapp developer is responsible for building a secure dapp on top of this platform.

Do you share this view that ICP already features shared security today? Or what do you think is missing on the roadmap to get to where you think ICP should be w.r.t. shared security? Many thanks for your thoughts!

Yes, indeed! There are roadmap items called “Public specification for GPU-enabled nodes” and “AI-specialized subnets with GPU-enabled nodes” in the Decentralized AI theme that are intended to take care of this. Current thinking is that ICP will start with one AI subnet and then, based on demand, create more. What exactly those nodes will look like needs to be decided by the community in the end, but they will likely offer multiple high-end AI accelerator cards per node.


Hi @w3tester!

Threshold EdDSA is part of the Helium milestone, which has no firm arrival date yet. However, it is planned to be worked on once Deuterium has been finalized by July 25. The work on Schnorr signatures in Deuterium is prerequisite work that prepares the implementation architecture for threshold EdDSA as well: threshold Schnorr is structurally very similar to threshold EdDSA but operates over a different algebraic structure, so Schnorr already anticipates much of the implementation that threshold EdDSA requires. The performance improvements currently being made for threshold ECDSA also apply to threshold Schnorr and EdDSA. Considering that threshold EdDSA is not a huge item once Schnorr has been done, I’d wager that Q3 this year might be doable for threshold EdDSA.
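To illustrate the structural similarity: both Schnorr and EdDSA signatures satisfy the same equation s = r + c·x, with EdDSA essentially being Schnorr instantiated over the edwards25519 group with a deterministic nonce. Below is a toy, deliberately insecure single-signer sketch (tiny parameters, and no relation to ICP's actual threshold protocol):

```python
import hashlib
import secrets

# Toy single-signer Schnorr signature over a small multiplicative group.
# Deliberately insecure (tiny parameters) and unrelated to ICP's actual
# threshold protocol; it only illustrates the shared structure s = r + c*x.
Q = 1019   # prime order of the subgroup
P = 2039   # p = 2q + 1, also prime
G = 4      # generator of the order-Q subgroup

def _challenge(R: int, pub: int, msg: bytes) -> int:
    # Fiat-Shamir challenge c = H(R || pub || msg) mod Q
    h = hashlib.sha256(f"{R}|{pub}|".encode() + msg).digest()
    return int.from_bytes(h, "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # secret key in [1, Q-1]
    return x, pow(G, x, P)             # (secret, public = g^x)

def sign(x: int, pub: int, msg: bytes):
    r = secrets.randbelow(Q - 1) + 1   # per-signature nonce
    R = pow(G, r, P)                   # commitment R = g^r
    c = _challenge(R, pub, msg)
    s = (r + c * x) % Q                # response: same shape as in EdDSA
    return R, s

def verify(pub: int, msg: bytes, sig) -> bool:
    R, s = sig
    c = _challenge(R, pub, msg)
    # g^s == R * pub^c holds iff s == r + c*x (mod Q)
    return pow(G, s, P) == (R * pow(pub, c, P)) % P
```

In the threshold setting, x is secret-shared among the subnet's nodes and s is assembled from partial signatures, but the verification equation stays the same, which is why much of the Schnorr implementation carries over to threshold EdDSA.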

Threshold ECDSA latency improvements are currently being worked on, so they should not be far from production.


Hi @dieter.sommer

What I mean by shared security is that the security of a canister executing on a subnet should not be at risk if 1/3 of the nodes on that subnet (typically comprising 13 nodes) are compromised, but rather should be secured by the network as a whole (559 nodes) in some sense.

Scaling while ensuring shared security is what Ethereum is trying to achieve with rollups, Polkadot is trying to achieve with the relay chain/parachain model, and Near is trying to achieve by randomly and frequently reassigning validators to different shards. This can be distinguished from the Cosmos/ICP approach where each subnet is essentially independent and can be separately compromised.

Possible approaches:

  1. Node shuffling. Although this doesn’t increase the number of nodes that need to be compromised, frequent shuffling makes it harder for an attacker to target a particular subnet via time-consuming bribery or hacks, since the nodes involved can change unpredictably.

  2. Subnets as optimistic rollups: Each subnet acts like an Arbitrum-style rollup: in the happy case execution proceeds as it does now, but if one node within a subnet disputes, a fraud proof is initiated. Since ICP lacks a canonical subnet that can act as an “L1”-style computation court, I suggest either reserving a high-replication subnet for this purpose or simply picking a random subnet to handle the dispute, since this would be unpredictable to the attacker.

  3. High-security re-validation: only canisters marked as having a high security demand (for example, a ledger canister that holds asset balances), rather than the whole state, are re-validated by additional nodes or subnets as an additional assurance. For example:

    • By additional randomly chosen subnet(s).
    • Using Dominic’s validation towers, where a random beacon and staking system give additional assurance that the state of a particular canister has advanced correctly.
  4. Some crazy ZK thing.
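For what it's worth, idea 1 (node shuffling) can be sketched as a deterministic reassignment driven by a shared random beacon. All names and parameters below are hypothetical, not part of any actual ICP design:

```python
import hashlib

# Hypothetical sketch of node shuffling: every epoch, a shared random
# beacon deterministically reassigns nodes to subnets, so an attacker
# cannot predict far in advance which nodes will serve a target subnet.
def shuffle_assignment(node_ids, num_subnets, beacon: bytes):
    """Derive a subnet assignment from the beacon; same inputs, same output."""
    def rank(node_id):
        # Beacon-dependent pseudorandom rank per node.
        return hashlib.sha256(beacon + node_id.encode()).digest()
    ordered = sorted(node_ids, key=rank)      # beacon-dependent permutation
    # Deal nodes round-robin so subnets stay equally sized.
    subnets = [[] for _ in range(num_subnets)]
    for i, node in enumerate(ordered):
        subnets[i % num_subnets].append(node)
    return subnets
```

Every node derives the same assignment from the same beacon, and each new beacon reshuffles unpredictably. Note that this changes *who* serves a subnet over time but not *how many* nodes must be compromised at any instant.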

These are just ideas. The point is more that there should be some kind of program looking at shared security as an objective since this is one of the main criticisms people make of ICP, and while it is not necessary for decentralised social media it is necessary for ICP to be trusted for DeFi and asset applications.


Hi @CoolPineapple, thanks for clarifying what you meant by shared security. At least node shuffling has been discussed, AFAIK, but it also has downsides: for example, all potential attackers get rotated into and out of each subnet over time, which may even weaken the model (e.g., an attacker could extract the state of the subnet, which is easier now and will be much harder once SEV-SNP is available).

Overall, I agree that this is something to have on the roadmap, but I think it’s just not clear what the best approach, if any, is to achieve shared security in ICP’s model, so there is no roadmap item (yet) for this. Not all approaches applicable to other chains will work in ICP’s model of subnets. We’ll keep this in mind and discuss it in the Foundation. Once there is a clearly preferable approach to choose, there should be a roadmap item for it. We might also add a roadmap item now about exploring shared security, just to keep it in our official “backlog” of things to be done for ICP.

By the way, are you aware of concrete options having been discussed in the community (and links to the discussions, if applicable)?

Thank you again for your input and thoughts about this topic!

Update: The roadmap items about settling the hash of a canister’s state on the Bitcoin or Ethereum network are somewhat related to this, but fulfill only part of the goals. It is similar to an L2 settling on some L1, except that in this case only a hash is settled, to guarantee the integrity of the data.
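Conceptually, such settlement could look like the following sketch, where the external chain stores only integrity checkpoints (the class and function names are illustrative, not the roadmap item's actual design):

```python
import hashlib

# Illustrative sketch of "settling a hash on an L1": a subnet periodically
# publishes the hash of a canister's certified state to an external chain;
# anyone can later check a claimed state against that anchor.
def state_hash(state: bytes) -> bytes:
    return hashlib.sha256(state).digest()

class ExternalAnchor:
    """Stands in for a contract on Bitcoin/Ethereum storing checkpoints."""
    def __init__(self):
        self.checkpoints = []                  # append-only hash log

    def settle(self, h: bytes):
        self.checkpoints.append(h)

    def verify(self, claimed_state: bytes) -> bool:
        # Integrity only: confirms the bytes match some settled checkpoint.
        return state_hash(claimed_state) in self.checkpoints
```

This makes tampering with settled state detectable, but, as noted, it fulfills only part of the shared-security goals: the external chain cannot re-execute or correct the canister's computation.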


I agree. Wow, how I wish we could have shared security on ICP. I’ve long hoped that node shuffling could be a promising solution to this problem. There is a lot of discussion publicly available on this forum on the subject.

And as for my simple definition of shared security, or how we know when the property of shared security has been implemented: individual canister security becomes a network effect of ICP.

By network effect, I mean that as the ICP network grows in subnets and/or nodes, the security of individual canisters improves as an emergent property.

More subnets/nodes = more security for all canisters

There is no current security network effect like this on ICP unfortunately. Adding more subnets makes no direct security difference to other subnets. And adding more nodes to an existing subnet makes no difference to other subnets.

Also, subnet node membership is static. This makes me uncomfortable over long periods of time.

Also, the fact that node operators know which subnets they’re in and can find out which canisters they are hosting seems less than ideal.


Can someone shed light on the privacy theme?

Most of the truly interesting and impactful features for privacy aren’t even scoped into milestones. The only privacy milestone focuses on vetKeys, and I am starting to doubt how impactful vetKeys will be alone, considering that end-to-end encryption greatly limits computability over data, and that users can manage their own private keys relatively easily outside of browser applications. Many apps have end-to-end encryption now without using any kind of decentralized blockchain protocol.

The truly useful innovations will be a combination of (maybe vetKeys) secure enclaves, FHE, and MPC. But they are only listed as future features.

I understand how incredibly difficult FHE and MPC will be to implement efficiently enough to be useful.

But what is taking so long for SEV? Why is it not being prioritized more?

I remember an early version of this being ready before genesis, and there was some public discussion about whether or not to enable it at launch.

There is no technical privacy protection right now for canisters…SEV is possibly the most impactful low-hanging fruit here.


The main reason that for now only the vetKeys milestone has been scoped is that we had to start somewhere, and it made the most sense to start with the milestones that are most likely to be finished first. VetKeys builds on many existing protocol components that have been used in production for 3+ years, and the remaining work is fairly well planned at this point.

I agree both with your view that SEV will deliver important value and with your frustration that we have not been able to keep our intended timeline. The reason is simple: while SEV is already by itself a fairly complex piece of technology, operating it in a decentralized high-availability environment, where administrative control over the nodes is exercised via a governance system rather than direct access to the nodes, makes all steps significantly harder. Two examples:

  • When upgrading replica software, there has to be a hand-over between the old and new version that preserves the integrity, using bilateral attestation between the two versions. (Right now, the nodes just install a new guest OS image and give it access to the same data partitions.)
  • Subnet recovery becomes inherently difficult, since the current recovery process requires accessing the data. This is of course tricky with SEV subnets, where canister data would be expected to be confidential. Many recoveries in the past could have been done without any access to the data; still, this means we need more elaborate procedures and tools for recovery as well.

The path we take for SEV is now as follows: We first start with a use case that will provide us high impact with lower operational complexity, namely HTTP gateways. Upon gaining operational experience, we will step by step move toward the more complex use cases. The most complex one is then arguably running a full replica with SEV support.

A more detailed timeline with milestones will follow on the roadmap, it just isn’t in the appropriate shape yet.


Thank you for the detailed explanation :+1::+1::+1:


Regarding the privacy part, we’d be very much interested to see whether ZK proofs can help. ZK enables a paradigm in which computation on user data takes place on the client side, not the server side. This automatically gives the strongest privacy assumption: your data is most secure when you are the only one holding it.

With zCloak building a ZK verification layer on ICP, we’d love to see real-world use cases that require privacy-preserving computation. It would be great if people could post their requirements here, and we will discuss how we can help solve their issues with ZK.


ZK proofs indeed play a strong role for privacy. I propose to create a dedicated forum topic for this and continue the discussion there to have it in one place and make it easier to follow. Looking forward to the thoughts on this! Please link the new discussion topic from here if you want to create one!


Great idea! I have created a separate post to discuss it.


I have a question about the Cyclotron roadmap in the AI theme.

“Allow smart contracts to run inference using AI models with millions of parameters fully on chain.”

Notice it says millions and not billions. Why is this the explicit goal? Models with millions of parameters are, I believe, much less capable than models with billions of parameters, like the most capable Llama models.

I would love insight into whether the end goal will eventually be billions of parameters.


Hi @lastmjs!

The Cyclotron milestone talks about “millions” of parameters, and not “billions”, because it still uses CPU cores for inference. Inference on large LLMs with billions of parameters on the CPU would just take too long per token to be practical.

Large LLMs with billions of parameters require GPU support, which is the key part of the Gyrotron milestone. Once a deterministic API for smart contracts to use GPUs, as well as Wasm64, is available, large LLMs with billions of parameters should also be practically feasible on the Internet Computer.
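A rough back-of-envelope, using assumed bandwidth figures, shows why the CPU/GPU distinction maps to millions vs. billions of parameters: autoregressive decoding streams essentially all weights once per token, so memory bandwidth caps tokens per second (and on-chain replication and Wasm overheads come on top of this):

```python
# Back-of-envelope only: the bandwidth numbers are illustrative assumptions
# (typical DDR vs. HBM), and real throughput will be lower once replication,
# determinism, and Wasm overheads of on-chain execution are included.
def max_tokens_per_sec(params: float, bytes_per_param: float, bandwidth_gbs: float) -> float:
    """Upper bound: one full pass over the weights per generated token."""
    weight_bytes = params * bytes_per_param
    return bandwidth_gbs * 1e9 / weight_bytes

cpu_bw, gpu_bw = 50.0, 2000.0                     # GB/s, assumed figures
small_cpu = max_tokens_per_sec(100e6, 2, cpu_bw)  # 100M params, fp16, CPU: ~250 tok/s
large_cpu = max_tokens_per_sec(70e9, 2, cpu_bw)   # 70B params, fp16, CPU: well under 1 tok/s
large_gpu = max_tokens_per_sec(70e9, 2, gpu_bw)   # 70B params, fp16, HBM GPU: ~14 tok/s
```

Under these assumptions, million-parameter models remain interactive on CPUs, while billion-parameter models only become practical with GPU-class memory bandwidth.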


Do you have any insight into why a deterministic API for GPUs is difficult to achieve? Or is it difficult to achieve?

In the STELLARATOR milestone, the tile “Improved consensus throughput and latency” says this:

“Improved consensus throughput and latency by better, and less bursty, node bandwidth use. Achieved through not including full messages, but only their hashes and other metadata, in blocks.”
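As I understand the quoted tile, blocks would carry message hashes plus metadata while the full messages are gossiped separately, roughly like this toy sketch (not the replica's actual data structures):

```python
import hashlib

# Toy model of hashed block payloads: a block references messages by hash;
# the payloads themselves travel through a separate dissemination layer, and
# a node resolves a block against its local message pool.
def h(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

def make_block(messages):
    return [h(m) for m in messages]          # hashes only, not full payloads

def reconstruct(block, pool):
    """Resolve a block's hashes against already-gossiped messages."""
    missing = [d for d in block if d not in pool]
    if missing:
        return None, missing                 # would be fetched from peers
    return [pool[d] for d in block], []
```

The block itself then stays small regardless of how large the referenced messages are, decoupling payload dissemination from block-making.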

Will this help to increase the message size limit? More discussion here: Hashed Block Payloads