Quick update: we’ll have the first community conversation next Tuesday, November 29th (8 AM PT / 5 PM CET) - registration is here!
This will be the first of (at least) two community conversations. Here, the plan is to give a conceptual overview of what we’d like to do, provide some context and use cases, and clarify any questions you may have. Think of it as an opener to the topic in general.
We’ll then schedule another conversation later to discuss a design proposal.
So come one, come all - with questions, queries, wishes, or just to hear a little crypto chat. Can’t wait!
Is it possible this could provide an alternative to tECDSA? If a single ECDSA private key can be threshold-encrypted, stored on the IC, and only decrypted on a client device, then couldn’t users store any kind of keypair on the IC? We wouldn’t even need to wait for EdDSA or any other schemes… not sure this would work for all use cases, but am I thinking of this correctly?
Changing a principal ID after a secret is leaked isn’t easy. The only way seems to be asking the user to register a new II account. On the other hand, if an arbitrary secret can be stored / updated, it would be less hassle.
That’s a very good point, you can use vetKD to threshold-encrypt any data on the IC, including an ECDSA private key. However, the security of doing so is not exactly the same as for threshold ECDSA.
You’re right that the security level against a key recovery attack based on the data “at rest” is similar for tECDSA and vetKD-encrypted ECDSA: you need to corrupt a threshold number of nodes to recover the private key. An important difference, however, is that in the vetKD-based solution, the full ECDSA private key would have to be reconstructed in the user’s browser, where it may get exposed to malware etc. running on the user’s machine. With threshold ECDSA, the full private key is never reconstructed on any machine.
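To make the “never reconstructed on any machine” point concrete, here’s a toy sketch. It uses additive n-of-n secret sharing as a stand-in for the real protocol (actual tECDSA uses t-of-n Shamir sharing, and the vetKD flow decrypts a ciphertext rather than summing shares); the modulus and share count are arbitrary choices for the illustration.

```python
# Toy contrast (illustration only): where does the full key ever exist?
import secrets

P = 2**255 - 19  # any large prime modulus works for this toy

def share(secret: int, n: int) -> list[int]:
    # Additive n-of-n sharing: each node holds one random-looking share;
    # the secret is the sum of all shares mod P.
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

key = secrets.randbelow(P)
shares = share(key, 13)  # 13 nodes, each holding one share

# tECDSA-style: no individual node's share reveals the key, and signing
# protocols work on shares so the sum is never formed in one place...
assert all(s != key for s in shares)

# ...whereas decrypting a vetKD-encrypted ECDSA key in the browser is
# analogous to assembling everything on one machine:
assert sum(shares) % P == key
```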
The identity in (master_key_identity, identity) does not have to be the user’s principal or anything related to it. It can be any string; it’s up to the canister to decide which user/principal gets access to the decryption key of which identity string.
In the scenario that I sketched above where a user encrypts a user-chosen secret with a derived key, if the secret is leaked, the user can simply encrypt a new secret under the same derived key (i.e., for the same identity string). If the derived key was also leaked, the user could switch to a different derived key (i.e., for a different identity string). But in neither case would the user have to change his principal; that would only be needed if the secret key to the principal also leaked.
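The two rotation scenarios above can be sketched in a few lines. This is not the vetKD protocol: HMAC-SHA256 only mimics the derive-key-from-identity-string interface, the “master key” is a local secret rather than a threshold-shared one, and the XOR cipher is for illustration only.

```python
# Toy sketch (NOT the actual vetKD protocol): per-identity derived keys.
import hashlib
import hmac
import secrets

def derive_key(master_key: bytes, identity: str) -> bytes:
    # In vetKD, derivation happens on the subnet under a threshold
    # assumption; HMAC-SHA256 here only mimics the interface.
    return hmac.new(master_key, identity.encode(), hashlib.sha256).digest()

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with an HMAC-derived keystream -- illustration only, NOT secure
    # for real use (no nonce, no authentication).
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        block = counter.to_bytes(4, "big")
        stream += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, stream))

decrypt = encrypt  # XOR is its own inverse

master_key = secrets.token_bytes(32)

# Scenario 1: the secret leaks -> encrypt a fresh secret under the
# SAME derived key (same identity string).
k1 = derive_key(master_key, "alice/notes")
fresh_secret = b"fresh secret!"
assert decrypt(k1, encrypt(k1, fresh_secret)) == fresh_secret

# Scenario 2: the derived key leaks -> switch to a DIFFERENT identity
# string; the user's principal never has to change.
k2 = derive_key(master_key, "alice/notes/v2")
assert k1 != k2
```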
Unrelated, but not sure where else to ask – is there an ETA on the secure logins? I’ve lost my notes plenty of times due to the lack of this kind of feature. It would be amazing to have.
@gregory@ais - Thank you for the excellent community conversation! I just watched the replay, and I wanted to say that it clears up a LOT more than just reading this post.
One quick question - how do you envision on-chain encryption in a world with AMD SEV nodes? Do they solve for the same problem, i.e. “defense in depth”? Or do they actually solve for different problems?
I am struggling to think of a use case where on-chain encryption is still needed when subnet nodes run AMD SEV.
For example, is it accurate to say that AMD SEV only encrypts a node’s memory but doesn’t encrypt data at rest on disk?
If a user stores a confidential file unencrypted in the stable memory of a canister running on an AMD SEV-enabled subnet, the node can’t read the canister’s memory contents, but perhaps it can read the on-disk snapshots generated from that memory? Is that where on-chain encryption can help?
Sorry, one last question: I am curious how on-chain encryption compares and contrasts with the Signal Protocol for E2E encryption. It seems more flexible and more efficient, but I would like to hear your thoughts on it. Thanks!
AMD SEV doesn’t encrypt data on disk by default, but it does have the option to do so. Obviously, if the IC is to use SEV in the future, it will make use of the disk encryption feature.
In that sense, AMD SEV indeed solves the same use case as threshold key derivation, even in a more powerful sense because SEV nodes can perform computations on the cleartext data.
The big drawback of SEV is its trust & attacker model. First, SEV requires you to trust the centralized entity of AMD. If AMD decides to implement a covert channel into its chips, or if AMD’s root key leaks, all bets are off. Second, the security offered by SEV isn’t all that great. It doesn’t protect against attackers with physical access to the machine, for example, as has been demonstrated by a cheap and practical attack.
With threshold key derivation, you don’t have to trust any single entity, you just have to trust the threshold assumption on the subnet where the master key is hosted. And it does protect against attackers with physical access to the machine, because they can only see encrypted ciphertexts.
That’s a very good question. You can indeed implement Signal-style E2E encryption on the IC already right now. That would involve managing decryption keys in user devices though, e.g., by scanning QR codes to admit new devices. It’s possible, but threshold key derivation makes that job much easier, as it can piggy-back on the device management that is already included in Internet Identity (or any other IC identity provider), so that the only logic that the canister has to implement is to decide which principal is given access to which derived key.
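To illustrate that last point, here’s a toy sketch of the only state and logic the canister would need: a mapping from identity strings to the principals allowed to obtain the corresponding derived key. All names here are made up for illustration; a real canister would implement this in its own language and gate the actual key-derivation call with the check.

```python
# Toy sketch of the canister-side access logic described above.
from collections import defaultdict

class KeyAccessRegistry:
    def __init__(self) -> None:
        # identity string -> set of principals allowed to get its key
        self._allowed: defaultdict[str, set[str]] = defaultdict(set)

    def grant(self, identity: str, principal: str) -> None:
        self._allowed[identity].add(principal)

    def may_derive(self, identity: str, principal: str) -> bool:
        # In a real canister, this check would gate the call that asks
        # the subnet to derive (and encrypt to the caller) the key.
        return principal in self._allowed[identity]

reg = KeyAccessRegistry()
reg.grant("chat:alice-bob", "alice-principal")
reg.grant("chat:alice-bob", "bob-principal")

assert reg.may_derive("chat:alice-bob", "bob-principal")
assert not reg.may_derive("chat:alice-bob", "eve-principal")
```

Device management never appears here: because the identity provider already binds a user’s devices to one principal, granting access per principal covers all of that user’s devices.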
On top of that, Signal-style protocols can’t solve those use cases where the canister itself derives a key to decrypt data, e.g., time-lock encryption, secret-bid auctions, or MEV prevention.
The problem with any secure enclave technology as I understand it is that they are, as far as we know, currently fundamentally broken by being susceptible to side-channel attacks.
At best they just make it harder to decrypt the data.
@gregory is any of this AMD-SEV stuff even worth doing in light of that paper you referenced? Does it improve the confidentiality of canister data materially?
That’s a good question. My personal take on AMD-SEV is that it can add a “best-we-can-do” level of security to those features that we have no idea how to (efficiently) solve otherwise, in particular by means of cryptography. Large-scale computing on confidential data could actually be one of those features. Fully homomorphic encryption is definitely more secure, but will never scale as well as AMD-SEV does.
But you’re right that AMD-SEV doesn’t add all that much security, so I would also recommend against relying on it for critical features. Note that if we did, we could probably get rid of most of our consensus protocol.
No timeline decided yet. We’re still drafting a first design to include in a motion proposal; if that gets through, things will depend on prioritization. It should be considerably less work than threshold ECDSA, though, because (1) the vetKD protocol is much simpler and (2) we can piggy-back on our experience from tECDSA.