What do you need from ICP in 2024?

Trusted Execution Environments/Secure Enclaves

Finally at a high probability of shipping! I think this is it: this is the year that we get a major boost in private computation from TEEs/SEs.

If you aren’t familiar with the benefits here, consider this: right now all canister data is stored in plaintext on subnets made up of machines run by various node providers. There’s not much stopping a node provider from reading all of that data if they were motivated to do it.

Thus we are cutting out a large number of use cases because the privacy story is just too poor right now.

TEEs/SEs increase privacy by providing a secure hardware environment for private computation. It’s not perfect, but it’s an improvement over the status quo.

See this thread for more info: https://x.com/lastmjs/status/1736088865541103617


Verifiably Encrypted Threshold Key Derivation (VetKeys)

This is a similar technology to TEEs/SEs (kind of), in that it also helps to improve privacy on ICP. VetKeys are focused on end-to-end encryption though, not general-purpose private computation.

Still, VetKeys will open up various use cases, and in combination with TEEs/SEs may provide some compelling privacy capabilities that are hard to find elsewhere, even in traditional centralized cloud environments.

You can check out this same thread for more info: https://twitter.com/lastmjs/status/1736088865541103617

Medium probability of shipping, because there seems to be some debate about its priority relative to Threshold Schnorr.


Threshold EdDSA

If you’re not already aware, ICP currently has a threshold ECDSA signature scheme that allows it to custody Bitcoin and Ethereum keys in a more decentralized manner. This is one of ICP’s most useful/unique features IMO.

This allows ICP to provide general-purpose decentralized application capabilities to other chains that embrace ECDSA as their signature scheme.
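To make the capability concrete, here is a minimal sketch of a canister asking its subnet for a threshold-ECDSA signature. It assumes the Rust ic-cdk management-canister bindings; the derivation path and the "key_1" key name are illustrative only, and exact type names can differ between ic-cdk versions.

```rust
use ic_cdk::api::management_canister::ecdsa::{
    sign_with_ecdsa, EcdsaCurve, EcdsaKeyId, SignWithEcdsaArgument,
};

// Ask the subnet to produce a threshold-ECDSA signature over a 32-byte digest.
// No single node ever holds the full private key; each holds only a share.
#[ic_cdk::update]
async fn sign_message_hash(message_hash: Vec<u8>) -> Vec<u8> {
    let arg = SignWithEcdsaArgument {
        message_hash, // caller-supplied 32-byte hash of the transaction/message
        derivation_path: vec![b"example".to_vec()], // hypothetical derivation path
        key_id: EcdsaKeyId {
            curve: EcdsaCurve::Secp256k1,
            name: "key_1".to_string(), // example key name; varies by environment
        },
    };
    let (response,) = sign_with_ecdsa(arg)
        .await
        .expect("sign_with_ecdsa call failed");
    response.signature
}
```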

Unfortunately (or rather, it’s just the way things are), ECDSA is not the signature scheme of every blockchain that might matter.

Some use EdDSA, for example: Solana, Algorand, Cardano, Stellar, Elrond, Waves, and others (according to this handy chart: http://ethanfast.com/top-crypto.html)

ICP as a platform for decentralizing the blockchain infrastructure of other chains is, IMO, one of the most powerful value propositions of ICP in the short-to-medium term.

Extending its capabilities to sign for some of these other chains may be warranted.

Threshold Schnorr

Schnorr is another digital signature scheme used by various blockchains, like ECDSA and EdDSA, but it might be especially relevant to ICP because of its utility in providing decentralized infrastructure to Bitcoin itself: Bitcoin’s Taproot upgrade (BIP-340) uses Schnorr signatures.

I’m not too deep into the benefits that Schnorr would provide, perhaps @BobBodily or others can chime in.

But basically, adding EdDSA and Schnorr to ICP’s threshold signature suite could increase its capability, and potentially its dominance, as a premier solution for decentralized compute across many different blockchains.


Robust decentralized identity/KYC solution

You might not like giving up your personal information, you might not like being surveilled by companies and governments, you might not like complying with what can seem like arbitrary or harmful laws…but sorry, we live in a society, and the rule of law is pretty important for its functioning.

That doesn’t mean we have to put up with the same low-quality and harmful way of doing things. Decentralized compliance solutions could provide a very nice compromise, where our privacy risks are minimized and our freedom and autonomy are maximized.

Decentralized identity and KYC are very promising technologies that have been gaining momentum in the last year or so.

Gitcoin Passport, Coinbase Verifications, the Ethereum Attestation Service, etc. are all examples of this.

@dfinity has started work on incorporating verifiable credentials into Internet Identity.

With some combination of all of these technologies, I’m feeling rather confident we’ll have some compelling solutions in place by the end of 2024.

You can read a tiny bit about Internet Identity’s plans for verifiable credentials here: https://forum.dfinity.org/t/verifiable-credentials-in-icp/24966

P.S. it’s also not all about compliance: proving unique personhood (Sybil resistance) is just plain important in general.


ckUSDC (ckERC20)

Do we need to go over again how important stablecoins are? I’m highly confident we will get bridged stablecoins on ICP in 2024. I imagine that ckUSDC will be prioritized, but others will probably follow relatively easily.

Once we have ckUSDC, we’re on a path to native issuance of USDC from @circle. Of course it’s up to them in the end, but having a thriving ckUSDC ecosystem on ICP will help persuade them to eventually issue USDC natively, which is the best we can hope for from @circle and USDC.

Canister logging and monitoring

Canisters pushed to the production network have been somewhat difficult to monitor for errors, bugs, memory leaks, etc.

If we want ICP to be a compelling alternative to centralized cloud environments, we’re really going to need to be able to debug complex issues that may crop up in production.

At least for canister logging, I am very confident we’ll see some solutions out this year.

And as for monitoring, I haven’t thought too deeply about what is needed here and what currently exists to meet those needs, but monitoring live metrics like cycle usage and memory consumption seems pretty important.

We also need to figure out automatic top-up of canisters that are running low on cycles, using credit cards or other traditional payment rails.
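As a small illustration of the monitoring/top-up point, here is a minimal sketch of a canister watching its own cycle balance on a timer. It assumes the Rust ic-cdk and ic-cdk-timers crates; the threshold and the hourly interval are arbitrary example values, and a real top-up flow would notify something off-chain.

```rust
use std::time::Duration;

// Hypothetical threshold; pick something appropriate for your canister's burn rate.
const LOW_CYCLES_THRESHOLD: u128 = 1_000_000_000_000; // roughly 1T cycles

#[ic_cdk::init]
fn init() {
    // Check the cycle balance once per hour and log when it runs low.
    ic_cdk_timers::set_timer_interval(Duration::from_secs(60 * 60), || {
        let balance = ic_cdk::api::canister_balance128();
        if balance < LOW_CYCLES_THRESHOLD {
            // A real setup might notify an off-chain service that performs the top-up
            // via a credit card or other traditional payment rail.
            ic_cdk::print(format!("cycles running low: {balance}"));
        }
    });
}
```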


Canister backup and restore

It’s pretty scary to deploy and upgrade canisters right now…every time you upgrade to new code, the heap is wiped and the canister is essentially restarted from scratch. Stable memory must be used to carry data across these upgrades.

Stable structures (and Motoko’s stable variables) help with this process…but what if something goes wrong? What if you try to migrate a stable structure or your stable memory and you mess up?
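For context, this is roughly the pattern in question: a minimal Rust sketch, assuming ic-cdk’s stable_save/stable_restore helpers, that serializes heap state into stable memory around an upgrade. The State type here is hypothetical.

```rust
use candid::CandidType;
use serde::Deserialize;
use std::cell::RefCell;

#[derive(CandidType, Deserialize, Clone, Default)]
struct State {
    notes: Vec<String>, // stand-in for whatever the canister keeps on the heap
}

thread_local! {
    static STATE: RefCell<State> = RefCell::default();
}

// Serialize heap state into stable memory right before the upgrade wipes the heap...
#[ic_cdk::pre_upgrade]
fn pre_upgrade() {
    STATE.with(|s| {
        ic_cdk::storage::stable_save((s.borrow().clone(),)).expect("stable_save failed")
    });
}

// ...and decode it back afterwards. If decoding traps, the upgrade is aborted.
#[ic_cdk::post_upgrade]
fn post_upgrade() {
    let (state,): (State,) = ic_cdk::storage::stable_restore().expect("stable_restore failed");
    STATE.with(|s| *s.borrow_mut() = state);
}
```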

Canister backup and restore would help to alleviate these concerns. You can read up on the latest proposal for this here; it seems to be in the design and discussion phase right now: https://forum.dfinity.org/t/canister-backup-and-restore-community-consideration/22597


Increase all instruction limits

ICP imposes a number of limits on how many Wasm instructions can execute, depending on the type of call. You can see these limits here: https://internetcomputer.org/docs/current/developer-docs/production/resource-limits

These limits are often too low! We hit them regularly, and we can foresee hitting them much more often, especially once we get into complex database queries that rely on query calls for low latency.

Query calls currently have a 5 billion instruction limit, which is about 1 second of computation (very rough heuristic). Update calls are limited to 20 billion instructions, which is about 4 seconds of computation.

I’m telling you it’s not enough, it’s not enough for all of the many use cases we want to enable on ICP.

If we don’t raise or remove these limits, we’ll have to reimplement algorithms to chunk their work across multiple calls, and the dream of pip/npm installing any package and having it just work will be very hard if not impossible to achieve.
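To show the kind of chunking this forces on developers today, here is a minimal sketch of splitting a long-running job across multiple messages so each slice stays under the instruction limit. It assumes the Rust ic-cdk and ic-cdk-timers crates; the item counts and the work loop are placeholders.

```rust
use std::cell::RefCell;
use std::time::Duration;

const ITEMS_PER_ROUND: u64 = 100_000; // tune so one slice fits the instruction budget
const TOTAL_ITEMS: u64 = 10_000_000;

thread_local! {
    // Progress kept across messages so each individual message stays under the limit.
    static PROGRESS: RefCell<u64> = RefCell::new(0);
}

fn process_one_slice() {
    let finished = PROGRESS.with(|p| {
        let mut done = p.borrow_mut();
        let end = (*done + ITEMS_PER_ROUND).min(TOTAL_ITEMS);
        for _item in *done..end {
            // ...the actual per-item work goes here...
        }
        *done = end;
        *done >= TOTAL_ITEMS
    });
    if !finished {
        // Schedule the next slice in a fresh message, which gets a fresh instruction budget.
        ic_cdk_timers::set_timer(Duration::from_secs(0), process_one_slice);
    }
}

#[ic_cdk::update]
fn start_long_job() {
    process_one_slice();
}
```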


Reduce call latencies (query calls, update calls, cross-canister calls, HTTP outcalls, etc.)

This is similar to the instruction limits: these latencies are a general drain on the use case potential of ICP. They are sometimes just too high, making for a slow and sluggish experience that is not as good as the Web2 alternative.

We really need to optimize these latencies…I have hope that we can cut them in half, but what’s scary is what comes after that. ICC (Internet Computer Consensus), like other consensus mechanisms, relies on a two-phase commit with multiple rounds of communication to achieve consensus. This is difficult to optimize; there is an obvious floor on the latency.

We may need to embrace certain optimistic trade-offs to reduce latencies; the current numbers are just not acceptable for what ICP is trying to achieve.


Increase or abstract away all message size limits

Similar to instruction limits and call latencies, message size limits are a pain to deal with. Depending on what you’re doing, the limit is somewhere in the low MiBs…so if you try to upload anything of a reasonable size, like a 1 GiB file, you’re stuck implementing custom chunking logic in your canister.
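For a sense of what that custom chunking looks like, here is a minimal sketch of a chunked upload endpoint. It assumes the Rust ic-cdk; the method names and the in-memory HashMap are illustrative only, and the roughly 2 MiB figure for ingress messages is the usual rough ceiling.

```rust
use std::cell::RefCell;
use std::collections::HashMap;

thread_local! {
    // file id -> accumulated bytes; a real implementation would also track chunk
    // order, limits, and authorization, and would likely store into stable memory.
    static UPLOADS: RefCell<HashMap<u64, Vec<u8>>> = RefCell::default();
}

// Clients split a large file into chunks that each fit under the per-message limit
// (roughly 2 MiB for ingress messages) and call this repeatedly.
#[ic_cdk::update]
fn upload_chunk(file_id: u64, chunk: Vec<u8>) {
    UPLOADS.with(|u| u.borrow_mut().entry(file_id).or_default().extend(chunk));
}

// Report how many bytes were assembled for a given file id.
#[ic_cdk::query]
fn uploaded_size(file_id: u64) -> u64 {
    UPLOADS.with(|u| u.borrow().get(&file_id).map(|b| b.len() as u64).unwrap_or(0))
}
```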

Asset canisters have some nice abstractions, as will dfx deploy soon for the Wasm binary, but this problem needs a general solution. We either need to get rid of the message size limits (difficult if not impossible), or provide more generalized abstractions.


Full Inter-Canister Query Calls

Right now a canister is allowed to query another canister with low latency, subject to certain limitations. This opens the door for some use cases, but the limitations are not ideal.

It would be wonderful to have full, unrestricted Inter-Canister Query Calls, making cross-canister queries a first-class communication method just like update calls.
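For reference, this is what a cross-canister call looks like today from an update context; the wish here is essentially for the same ergonomics to also be available in the low-latency query context. A minimal Rust sketch, assuming ic-cdk; the shard canisters and their "get_count" method are hypothetical.

```rust
use candid::Principal;

// Fan out to several "shard" canisters and aggregate their counts.
#[ic_cdk::update]
async fn total_across_shards(shards: Vec<Principal>) -> u64 {
    let mut total: u64 = 0;
    for shard in shards {
        // "get_count" is a hypothetical method exposed by each shard canister.
        let (count,): (u64,) = ic_cdk::call(shard, "get_count", ())
            .await
            .expect("inter-canister call failed");
        total += count;
    }
    total
}
```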

P.S. full Inter-Canister Query Calls may be incredibly important for implementing extremely scalable databases in the future, as these DBs must be able to provide complex querying across multiple canisters for the ultimate scaling to be possible.

Atomic cross-canister calls

This one might be…impossible?

Just imagine if cross-canister calls were atomic: if anything went wrong at any point in a chain of calls, all of the state changes across all canisters would automatically be rolled back.

This would help enormously in implementing a scalable multi-canister database, and would probably solve a number of other issues and use cases as well.

The fundamental issue, I believe, is how to do this without sacrificing ICP’s scalability…but perhaps we can work towards it using techniques similar to what Solana and other chains are doing (from my brief listening-in), where state is separated out into isolated threads of execution, limiting the performance penalty in that way.
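To make the pain concrete, here is a minimal Rust sketch (assuming ic-cdk; the ledger canister and its "debit" method are hypothetical) of the manual compensation a canister has to do today, because state mutated before an await is already committed even if the downstream call fails.

```rust
use candid::Principal;
use std::cell::RefCell;

thread_local! {
    static BALANCE: RefCell<u64> = RefCell::new(1_000);
}

// State mutated before an `await` is committed when this call suspends, even if the
// downstream call later fails, so today the canister has to compensate by hand.
#[ic_cdk::update]
async fn transfer_out(ledger: Principal, amount: u64) -> Result<(), String> {
    BALANCE.with(|b| *b.borrow_mut() -= amount); // committed at the await below

    // "debit" is a hypothetical method on another (ledger-like) canister.
    let result: Result<(), _> = ic_cdk::call(ledger, "debit", (amount,)).await;

    if result.is_err() {
        // Manual compensation; atomic cross-canister calls would make this unnecessary,
        // and this version still misses cases such as concurrent calls interleaving.
        BALANCE.with(|b| *b.borrow_mut() += amount);
        return Err("downstream call failed; change rolled back by hand".to_string());
    }
    Ok(())
}
```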


Ensure ICP is credibly neutral

Look, I feel that one of the main weaknesses of ICP, from a blockchain-focused angle, is its lack of credible neutrality.

ICP is definitely on a journey towards decentralization. Various components are currently in various stages of decentralization.

But what is clear is that @dfinity has write access to the protocol, essentially allowing them to change anything about the protocol at will.

The process is very transparent though, many ICP holders follow @dfinity (though this is debated because of genesis neurons and automatic following), and @dfinity has a reputation and a lot of reason not to act maliciously.

But that’s not really the point; it’s not about @dfinity or whoever else is currently the majority followee…to be credibly neutral, I believe the protocol should never allow any single entity to have write access to it.

ICP lacks checks and balances. It uses a simple liquid democracy that allows for massive centralization of power without much of a check on that power.

I would like to see various groups gain more power in the protocol, more stringent decentralization checks on proposals before they pass, and…really just checks and balances: multiple independent groups with power. We shouldn’t be able to vote in an all-powerful entity.

I would love to debate this more to get to some good solutions. I used to engage heavily on this topic, but it seemed unproductive, so now I’m focused more on just building and getting adoption for ICP.

I do think the issue of credible neutrality needs to be addressed before certain major projects will be willing to trust themselves to ICP, as the NNS is always the ultimate risk at the end of the day.


The reason why the heap can’t survive a code upgrade isn’t that the memory image could not technically be kept alive, the reason is that the memory image usually is incompatible with the new code – even the smallest seemingly innocent program change, or just a compiler update, can easily invalidate the complete memory layout.

Backup and restore cannot solve this problem, it is unsolvable in general.

Expect no magic here, the only adequate solution is that applications move towards a more traditional separation of short-term vs persistent data. That is, make sure that all permanent data is explicitly written to stable memory, like you would write to a file or database on a conventional platform. For that, we need higher-level libraries for managing data storage in stable memory. (Developing the right patterns for this might also go a long way towards dealing with non-atomic message rounds and explicit commits.)
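As one example of what such a higher-level library can look like, here is a minimal sketch using the ic-stable-structures crate, where a map lives directly in stable memory. Exact trait bounds and type names vary between crate versions, so treat this as illustrative rather than exact.

```rust
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, StableBTreeMap};
use std::cell::RefCell;

type Memory = VirtualMemory<DefaultMemoryImpl>;

thread_local! {
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    // The map lives directly in stable memory, so it survives upgrades without a
    // pre_upgrade/post_upgrade serialization step.
    static COUNTERS: RefCell<StableBTreeMap<u64, u64, Memory>> = RefCell::new(
        StableBTreeMap::init(MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0)))),
    );
}

#[ic_cdk::update]
fn bump(key: u64) -> u64 {
    COUNTERS.with(|c| {
        let mut map = c.borrow_mut();
        let next = map.get(&key).unwrap_or(0) + 1;
        map.insert(key, next);
        next
    })
}
```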


But if something goes wrong in the slightest, as you’re saying, and it’s detected after a deploy, wouldn’t it help to be able to restore everything back to a previous stable memory/heap and Wasm binary?

Perhaps, but when and how do you expect to discover that?

If you’re checking post-upgrade integrity systematically, through some kind of validation, then you should probably do that in the post-upgrade hook – in that case, failure can automatically trigger a roll-back to the pre-upgrade state. Nothing new is needed.
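A minimal sketch of that idea, continuing the hypothetical State/STATE definitions from the earlier upgrade example and assuming ic-cdk: trapping inside post_upgrade makes the upgrade fail, so the canister keeps its previous code and state.

```rust
// Continuing the hypothetical State/STATE definitions from the earlier upgrade sketch.
#[ic_cdk::post_upgrade]
fn post_upgrade() {
    let (state,): (State,) =
        ic_cdk::storage::stable_restore().expect("cannot decode pre-upgrade state");

    // `is_consistent` is a hypothetical application-specific invariant check.
    if !state.is_consistent() {
        // Trapping here makes the upgrade fail, so the canister keeps running the
        // previous Wasm module and the previous state: an automatic roll-back.
        ic_cdk::trap("post-upgrade validation failed");
    }

    STATE.with(|s| *s.borrow_mut() = state);
}
```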

If, on the other hand, you discover a problem only incidentally, then the likelihood is that it happens too late for a clean revert, since some state changes may already have happened in the meantime. True, it could still serve as an additional safety net for worst-case scenarios, but I suspect that in practice it would be rather difficult to recover even with that.

(Of course, you should also take all the help you can get from the compiler, like Motoko’s warnings about lossy changes to stable variable types.)


Yes, please meaningfully enter the race to be the technology behind CBDCs, through private blockchain tech / subnet rental tech.

Perhaps they can work with Swiss banks to launch a stablecoin using subnet rental.

See here; we’re missing a major opportunity by not throwing our hat in the ring. Big governments are spending big money.


Here is my wish list for ICP in 2024, after working with the stack for about a year (not trading, actually developing on it), first in Motoko and next in Rust.

The IC wants to become the crypto cloud; amazing. Let DFINITY do this in 2024, please:

  1. Bring REST to the IC, and modify Candid accordingly.
  2. Bring Actix, one of the most powerful Rust web frameworks, to the IC. We should not be rebuilding this every time we build a dapp.
  3. Give us a working file system. We should not have to emulate one or build it from scratch. This is, by the way, a requirement for #4 below.
  4. Give us a way to upload files to the IC with 10 lines of code, no excuses. Python can do this easily.
  5. Port a major database to the IC and make it work first for a single canister, and later for multi-canister apps.

That would make me quite happy. I hope at least some of this gets done.

Happy New Year everyone!


@dfinity, let’s make this happen. If not, what are the technical reasons preventing these features?
