Let's address these crucial protocol weaknesses.

  1. Instruction limits

My understanding is that Deterministic Time Slicing is already in production. Does that not already cover this issue?

Do they just need to increase/remove the limit or is something else required?

  2. Memory limits

Do you mean WASM memory limits and the ease of sharing memory across multiple canisters?

  3. High latencies

Queries can be cached, but for updates I really can’t see how this can be solved given the need for BFT. The consensus latency is about as low as it can go, barring some novel cryptography or completely rearchitecting as rollups. And from my POV, subnet replication factors are about as low as is acceptable anyway.

What it sounds like you are suggesting, though, is that there is a need for more flexibility to choose very low replication factors for low-risk Dapps, and I recall a game dev saying something similar. But at some point that just becomes a single server without any replication or correctness guarantees.

One left-field idea might be to allow pass-through P2P communication, so that applications can work with low latency by exchanging operational transforms/CRDTs directly, and only update the consensus state to save snapshots or where trust is important.

Thus, for example, a game would proceed with players sharing signed messages, updating their game state locally, and bypassing the IC consensus nodes most of the time, with the IC saving a snapshot at random intervals. Play would usually happen with very low latency, but in case of dispute you would replay, say, the last 30 seconds of signed messages since the last snapshot and let the canister decide the state.

A variant would have pass-through nodes just notarise the messages as seen, without updating state.
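
To make that concrete, here is a minimal sketch of the canister-side replay in Rust; the types, the trivial `verify` stub, and `apply_move` are all hypothetical placeholders, not an existing IC API:

```rust
// Sketch: replay signed player messages on top of the last consensus
// snapshot to derive the authoritative state. All types and the
// signature check are hypothetical placeholders.

#[derive(Clone)]
struct GameState {
    tick: u64,        // logical time of the state
    scores: Vec<u64>, // one entry per player
}

struct SignedMessage {
    player: usize,      // index of the sending player
    tick: u64,          // logical time the move applies to
    payload: Vec<u8>,   // the move itself, application-defined
    signature: Vec<u8>, // signature over (player, tick, payload)
}

// Placeholder: a real canister would verify the sender's registered
// public key over the message bytes.
fn verify(_msg: &SignedMessage) -> bool {
    true
}

// Placeholder: apply one move to the state (application-defined).
fn apply_move(state: &mut GameState, msg: &SignedMessage) {
    state.tick = state.tick.max(msg.tick);
    if let Some(score) = state.scores.get_mut(msg.player) {
        *score += 1;
    }
}

// On dispute, replay every signed message since the last snapshot in
// tick order; the result is the authoritative state.
fn resolve_dispute(snapshot: &GameState, mut log: Vec<SignedMessage>) -> GameState {
    log.sort_by_key(|m| m.tick);
    let mut state = snapshot.clone();
    for msg in &log {
        // Messages with bad signatures are simply dropped; a stricter
        // variant could penalise the sender instead.
        if verify(msg) {
            apply_move(&mut state, msg);
        }
    }
    state
}
```

The point of the design is that the canister never sees individual moves in the happy path; it only arbitrates when players disagree, so consensus latency is paid once per snapshot or dispute rather than once per move.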

  4. Message size limits

I think (from your Twitter thread) you are specifically referring to message payload size here, with reference to file uploads. Chunking is already implemented for canister WASM uploads, so extending that to general file uploads and providing grants for tooling around it would seem to fix a lot of the issues, except it would still be slow. Some further thoughts (a sketch of the chunked-upload pattern follows after the list):

  • Perhaps there is a way to upload and download in parallel using multiple boundary nodes, or perhaps specialised large-file server nodes.
  • I worry about DoS attacks if the size is increased unless some metering is applied.
  • Perhaps there needs to be a separate pure storage system.
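
For illustration, a minimal sketch of what a chunked-upload interface on a storage canister might look like (Rust with ic_cdk; the method names, storage layout, and lack of access control are hypothetical, not the existing WASM-chunking API):

```rust
// Sketch of a chunked-upload interface: the client splits a file into
// pieces that fit under the message size limit and uploads them one
// call at a time. Names and layout are hypothetical.
use std::cell::RefCell;
use std::collections::BTreeMap;

thread_local! {
    // (file_id, chunk_index) -> chunk bytes
    static CHUNKS: RefCell<BTreeMap<(u64, u32), Vec<u8>>> =
        RefCell::new(BTreeMap::new());
}

#[ic_cdk::update]
fn upload_chunk(file_id: u64, index: u32, data: Vec<u8>) {
    // Per-caller metering and size checks would go here to address the
    // DoS concern above.
    CHUNKS.with(|c| c.borrow_mut().insert((file_id, index), data));
}

#[ic_cdk::query]
fn get_chunk(file_id: u64, index: u32) -> Option<Vec<u8>> {
    CHUNKS.with(|c| c.borrow().get(&(file_id, index)).cloned())
}
```

Since each chunk is an independent call, a client could fire several uploads in parallel, possibly via different boundary nodes, to mitigate the slowness; the index gives the reassembly order.
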
  5. Storage limits

I think something could be done about making file storage a transparent service across multiple subnets, but again this seems to speak to the need for specialised file storage. I would note, however, that Filecoin and Arweave are both subsidising storage with issuance, so I’m not really convinced their model is sustainable.

Perhaps the play here is not for the IC itself to provide large-scale storage, but to integrate deeply with existing file storage networks like Filecoin, Arweave and Ethswarm.

BTW, it might be an interesting play for @dfinity to team up with Ethswarm: they are almost completely overlooked and underused, so in theory they should welcome both funds and collaboration. Though perhaps they are politically too cypherpunk and Ethereum-orientated to accept a deal.

  6. High costs

Elephants in the room here are:

  • Most networks subsidise costs via issuance. ICP doesn’t, but it has unnecessarily high NNS rewards.
  • Costs have to be multiples of a single server’s due to replication: on a standard 13-node subnet, every computation runs 13 times.
  • Subnets are not actually net burners of ICP, so if anything devs are being undercharged given node rewards. (There are potential pricing models which square this circle, but they mean more uncertainty about costs.)
  • The dev-pays model means costs fall on devs, and also that we don’t benefit from MEV burn.
  7. Rigid network architecture (static subnets, canisters unable to flexibly choose replication/security, unable to move between replication factors, homogeneous hardware required)

This is true, but nothing stops more flexible systems being built on top of the IC. I think there is potential for incentivised service workers and storage services to be built on top. That this isn’t happening speaks to a cultural problem: the IC is positioned as a one-stop full stack, and this discourages infrastructure investment by parties other than Dfinity.

  8. Centralizing DAO governance (one entity able to gain write access to the provisioned protocol, lack of checks and balances)

This is huge and fundamental, and solving it would require not just technical breakthroughs but a change in philosophical direction. It would also mean confronting the moral issues around building an actually uncensorable network.
