It clearly states that private keys are never reconstructed.
I assume that the private key that would have been generated in the first stage, before being distributed as secret shares, no longer exists, but how is it destroyed? I assume it is securely deleted without ever being reconstructed or restored, but I am curious.
I read the White Paper thinking that this applies not only to threshold ECDSA signatures but also to threshold BLS signatures, yet there is no mention of it, so I am asking.
It clearly states that private keys are never reconstructed.
The private key never lives in one place, not even temporarily, hence it does not need to be destroyed. There is a process called distributed key generation (DKG) which ensures that the key is generated in a distributed way; in other words, it is “born distributed”. That applies to both ECDSA and BLS threshold signature keys.
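To make the “born distributed” idea concrete, here is a toy sketch using simple additive sharing. This is not the IC’s actual protocol (which uses a verifiable DKG over elliptic curves with complaint rounds); it only illustrates that a joint secret can be well-defined without any single machine ever computing it. The modulus and party count are illustrative assumptions.

```python
# Toy additive-sharing sketch of "born distributed" key generation.
# NOT the IC's real DKG; illustrative only.
import secrets

Q = 2**255 - 19   # example prime modulus (assumed, illustrative)
n_parties = 4

# Each party independently samples its own secret share and never
# sends it anywhere in the clear.
shares = [secrets.randbelow(Q) for _ in range(n_parties)]

# The "private key" is *defined* as the sum of all shares mod Q.
# In a real protocol no step ever computes this sum on one machine;
# parties instead combine partial signatures. We compute it here
# only to show the key exists as a mathematical object.
implied_key = sum(shares) % Q

print(len(shares), 0 <= implied_key < Q)
```

Since the full key is only ever defined implicitly, there is nothing to “destroy” after key generation, which is the point of the reply above.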
5 doesn’t help much with the collusion attack vector, which is what worries me most. The Bitcoin integration is live, and ckBTC is about to be. Assuming the tECDSA subnet has 34 nodes (can anyone confirm?), then 12 (11?) node operators can collude to steal everything built on top of that master tECDSA key (right?).
12 known parties, who know each other and do not rotate their membership on a regular basis, can do this.
We’re going to try storing $millions or more here?
Update: I’m trying to identify the tECDSA subnet. What is its id? Looking through the subnets on the dashboard, there are only a few higher-replication subnets to choose from: the NNS, the one fiduciary subnet, and one other system subnet that is identified as the II subnet.
The unmarked system subnet (w4rem-dv5e3-widiz-wbpea-kbttk-mnzfm-tzrc7-svcj3-kbxyb-zamch-hqe on the IC Dashboard) only has 13 nodes, but it seems most likely to be the tECDSA subnet. Which is it?
I’ll update this new thread with the relevant information once known: the tECDSA subnet id and the takeover threshold.
Fiduciary subnet pzp6e is the ECDSA signing subnet, currently consisting of 28 nodes.
Is that 9 or 10 independent nodes necessary for a complete takeover of the master key?
I think it’s 10: from 3f + 1 = 28 we get f = 9, and f + 1 = 10 is necessary for takeover.
Exactly, 10 required for takeover.
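The arithmetic above can be checked in a couple of lines, assuming the standard BFT bound where a subnet of n = 3f + 1 nodes tolerates f faulty nodes, so f + 1 colluders hold enough key shares for a takeover:

```python
# Takeover threshold under the standard BFT assumption n = 3f + 1:
# the subnet tolerates f faulty nodes, so f + 1 colluders suffice.
def takeover_threshold(n: int) -> int:
    f = (n - 1) // 3   # maximum tolerated faults
    return f + 1       # minimum colluders for takeover

print(takeover_threshold(28))  # fiduciary subnet pzp6e -> 10
print(takeover_threshold(34))  # the 34-node guess above -> 12
```

This also confirms the “12 (11?)” guess earlier in the thread: at 34 nodes, the answer would be 12.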
Not really any worse than Solana or BSC; pragmatic. I think this is safe enough as long as there is key refresh, and it could be made better with TEEs.
If it is just “not worse” than a traditional bridge, plus it has the added risk of a new technology never used in production, what is the advantage?
Is there something to compare this to, apples to apples? It is certainly better than one entity (Coinbase, Binance) controlling your BTC, but are there really other non-custodial solutions that we should be using for comparison?
Getting 10 doxed companies to collude in theft is a pretty damn high bar. I’d think the other options have non-doxed operators, which would be much less safe depending on the replication factor.
Obviously, on the other end is Bitcoin itself, and we won’t meet that threshold of security, but what is “enough”?
So if we had a traditional bridge secured with 10-way multi-sig security, with the 10 keys distributed among 10 doxed companies, wouldn’t that provide the same level of security without the risk of using a new technology never used in production?
Sure. The same level of risk, but not the same level of interoperability, computability, and scalability opportunity. You’ve got to use it in production at some point to overcome your argument. Are you proposing never turning it on because there are risks? Certainly, no one should move their whole stack over on day 1. Don’t risk what you’re not willing to lose and eventually the platform will have secured X million dollars of bitcoin for Y number of days and you’ll have a new production-level risk floor.
I’m trying to understand what the final benefit is once we reach the perfect production quality level.
Probably true, but it is too generic. It does not say how.
Maybe the “how this new technology helps reach the higher level of interoperability, computability, and scalability opportunity” is by eliminating the dependency on humans as key keepers? If this is correct, why not explain things by focusing on this unique key benefit?
I think the theory/protocol is very sound, but the current practical implementation limits the actual security in practice.
I am going to start feeling much more comfortable when subnet replication increases, the number of independent node operators increases, we implement node rotation, and we have secure enclaves.
The best I can think of for measuring “safe enough” for subnet size is the size of Chainlink oracle networks. Chainlink is one of the most trusted and well-known live production projects in blockchain, and AFAIK secures $billions.
I’m not sure which Chainlink oracle networks are securing what value, but if we look at their data feeds, the first two, ETH/USD and BTC/USD, are probably the most widely used. Check them out: https://data.chain.link/
They each rely on a 21/31 assumption: their networks have 31 nodes, and 21 must come to agreement before a trusted answer can be created. Seems they have a fault tolerance similar to IC subnets.
But for key theft the tECDSA subnet has essentially half the fault tolerance: roughly a third of the nodes colluding suffices, versus roughly two thirds for a Chainlink feed. So if we extend Chainlink’s security model to tECDSA, we would need a subnet of 61 nodes to ensure that it would take 21 colluding node operators to steal everything.
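A quick sanity check on that comparison, using only the assumptions stated in this thread (21-of-31 for a Chainlink feed; f + 1 colluders out of n = 3f + 1 nodes for an IC subnet):

```python
# Collusion bar comparison under the thread's assumptions.
def min_colluders(n: int) -> int:
    """Minimum colluders to take over an IC subnet of n = 3f + 1 nodes."""
    return (n - 1) // 3 + 1

chainlink_fraction = 21 / 31            # ~0.68 of the feed's signers
ic_fraction = min_colluders(28) / 28    # ~0.36, roughly half of that

# To force a 21-colluder bar on the IC, solve f + 1 = 21, n = 3*20 + 1:
print(min_colluders(61))  # -> 21
print(round(chainlink_fraction, 2), round(ic_fraction, 2))
```

So the 61-node figure follows directly from requiring f + 1 = 21 under the n = 3f + 1 assumption.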
Combine a 61 node subnet with node rotation, secure enclaves, and a healthy level of independent node operators and I think we’re reaching quite fantastic levels of security.
I wish DFINITY would put more effort into studying and attempting to quantify sufficient levels of decentralization; no one seems to have any idea what levels are required. Their old consensus paper from 2018 had math explaining probabilities of corruption from colluding attackers based on committee and total population sizes; it was really great stuff. But that seems to have been thrown away.
Hasn’t Timo shown that rotation actually increases the possibility of corruption, because the probability that some rotated-in group eventually contains enough corrupt members is higher than the probability that any one particular group becomes corrupted? I thought I saw a post on that somewhere.
That concern has been brought up multiple times, though depending on your assumptions I don’t think the concern plays out. A few others and I have dug into the math as well, and we can get it to work depending on the underlying assumptions.
I think most people, including @Manu, are on board with rotation now, but it probably doesn’t make sense until there are many more independent node operators (I would guess the assumptions included the size of the pool of node operators to choose from, the percentage honest/dishonest, and the size of subnets).
It’s this kind of math and analysis I would like to see from DFINITY, in addition to all of the other very high quality research they publish.
This thread has some of the math in it: Shuffling node memberships of subnets: an exploratory conversation
Oh wait, maybe the math didn’t show what I thought it showed… see for yourself, I suppose. I’m not convinced it’s a bad idea, and given that others whose opinions on the matter I highly regard seem to agree, I am very much still on board with node rotation.
If we have a conclusive analysis that it’s dangerous then I’ll be done with the idea, but I highly doubt such an analysis would end up that way considering the fundamental importance of shuffling in other designs e.g. Ethereum.
The benefits of reshuffling depend on the size of the pool to pick from, the size of the randomly chosen subset, and the time before reshuffling. In the original DFINITY design, which was heavily based on VRFs and consensus among random subsets of nodes, the pool was on the order of thousands, the subset around 400-500 nodes, and a reshuffle would happen every block. I believe the Eth 2.0 sharding model is similar in this aspect.
Here are DFINITY’s own estimates for the original design:
With the current design, the pool is still quite small, the subset is 30-40 nodes, and reshuffles would happen 1-3 times a day, so it might not do much to increase the subnet’s security; in fact, it could open a new attack vector, as Manu stated in the other thread.
Thus node shuffling/rotation is most likely only viable once we have 100s or 1000s of independent node operators to choose from.
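One way to make the pool-size argument concrete: if a k-node subnet is drawn uniformly at random from a pool of N operators of which m are corrupt, the chance that the draw contains at least t corrupt nodes is a hypergeometric tail, and repeated reshuffles give the attacker repeated draws. All the numbers below (10% corrupt operators, a 28-node subnet, a takeover threshold of 10, 3 reshuffles a day) are assumed purely for illustration, not taken from any DFINITY analysis:

```python
# Hypergeometric tail: P(a random k-node subnet from a pool of N
# operators, m of them corrupt, contains at least t corrupt nodes).
from math import comb

def p_corrupt_subnet(N: int, m: int, k: int, t: int) -> float:
    total = comb(N, k)
    return sum(comb(m, i) * comb(N - m, k - i)
               for i in range(t, min(m, k) + 1)) / total

# Assumed, illustrative numbers: 10% corrupt, 28-node subnet, t = 10.
small_pool = p_corrupt_subnet(N=100,  m=10,  k=28, t=10)
big_pool   = p_corrupt_subnet(N=1000, m=100, k=28, t=10)

# Over r independent reshuffles the chance that *some* drawn subnet
# crosses the threshold compounds (assuming independent draws):
r = 3 * 365  # ~3 reshuffles a day for a year (assumed)
print(small_pool, big_pool, 1 - (1 - big_pool) ** r)
```

This is exactly the trade-off discussed above: each reshuffle is another lottery ticket for the attacker, so the per-draw probability (driven by pool size and honesty rate) has to be small enough that the compounded risk over many reshuffles stays acceptable.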
Yeah, the 2018 whitepaper is what brought me to the IC; I’m not 100% sure how it changed.
Why would Link have half the IC’s fault tolerance? Shouldn’t it be the same?