In a real-world scenario, how is the dealer index assigned to each participant?
It's up to the consensus layer to determine who participates in a given instance of the IDKG protocol. At the moment we only have two cases: the participants of IDKG are either all members of the same subnet, or of two distinct subnets (i.e. the dealers from one subnet and the receivers on a different one). As part of the consensus protocol, nodes in a subnet agree on a registry version (which is included in a block) that specifies the members of all subnets at a given time. Whenever a subnet initiates an IDKG protocol, all nodes construct an IDkgTranscriptParams, which is used to agree on the set of dealers and receivers as well as other general info. A receiver's index corresponds to its position in the sorted set of all receiver node IDs. A dealer's index is only relevant when we either reshare a previous transcript or take the product of two previous transcripts. It is equal to the position each dealer had (as a receiver) in the previous transcript, i.e. the one being reshared or multiplied.
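To make the index assignment concrete, here is a minimal Rust sketch; the types and function names are my own stand-ins, not the actual ic crate API:

```rust
use std::collections::BTreeSet;

// Stand-in for the real node ID type.
type NodeId = [u8; 32];

/// A receiver's index is its position in the sorted set of receiver node IDs.
fn receiver_index(receivers: &BTreeSet<NodeId>, node: &NodeId) -> Option<usize> {
    // A BTreeSet iterates in sorted order, so the iteration position is the index.
    receivers.iter().position(|id| id == node)
}

/// For a resharing (or a product of transcripts), a dealer's index is the
/// position it had *as a receiver* in the previous transcript.
fn dealer_index(previous_receivers: &BTreeSet<NodeId>, dealer: &NodeId) -> Option<usize> {
    receiver_index(previous_receivers, dealer)
}
```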
For every dealing that has been posted on chain, P_i runs decrypt_and_check on his part
Small clarification: most of the protocol actually happens off-chain to reduce latency. E.g., dealings are not individually put on chain. What goes on chain is the final transcript, which includes the set of dealings that must be used. So a complaint can only be raised against a dealing included in a transcript. Note that the complaint phase also happens entirely off-chain.
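As a rough illustration (the names here are assumptions, not the actual ic types), the on-chain artefact pins the exact set of dealings, and a complaint is only meaningful against one of them:

```rust
use std::collections::BTreeMap;

type NodeIndex = u32;

struct Dealing; // ciphertexts + commitments, elided

/// The only artefact that goes on chain.
struct Transcript {
    // The dealings that must be used, keyed by dealer index.
    verified_dealings: BTreeMap<NodeIndex, Dealing>,
}

struct Complaint {
    dealer_index: NodeIndex, // + proof of misbehaviour, elided
}

/// A complaint can only target a dealing included in an agreed transcript.
fn complaint_in_scope(t: &Transcript, c: &Complaint) -> bool {
    t.verified_dealings.contains_key(&c.dealer_index)
}
```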
I don't see the FixBadShares implementation
It is part of the same library. The fix-bad-shares protocol requires the following sub-routines (a sketch of how they fit together follows the list):
- Issue a complaint, implemented by the generate_complaint function.
- Verify a complaint, implemented by the verify_complaint function.
- Force-open a dealing, implemented by the open_dealing function.
- Verify the opening of a (forced-open) dealing, implemented by the verify_dealing_opening function.
- Compute the secret share using the openings for bad shares, implemented by the compute_secret_shares_with_openings function.
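Here is a rough sketch of how these five functions compose; the signatures are heavily simplified placeholders (the real library functions also take the transcript, the complainer's decryption key, node indices, etc.):

```rust
struct Dealing;
struct Complaint;
struct Opening;
struct SecretShares;

// Placeholder stubs standing in for the real library functions.
fn generate_complaint(_d: &Dealing) -> Complaint { Complaint }
fn verify_complaint(_d: &Dealing, _c: &Complaint) -> bool { true }
fn open_dealing(_d: &Dealing, _c: &Complaint) -> Opening { Opening }
fn verify_dealing_opening(_d: &Dealing, _o: &Opening) -> bool { true }
fn compute_secret_shares_with_openings(_ds: &[Dealing], _os: &[Opening]) -> SecretShares {
    SecretShares
}

fn fix_bad_share(dealings: &[Dealing], bad: &Dealing) -> SecretShares {
    // 1. The receiver whose share failed to decrypt issues a complaint.
    let complaint = generate_complaint(bad);
    // 2. The other receivers verify the complaint before reacting to it.
    assert!(verify_complaint(bad, &complaint));
    // 3. Honest receivers force-open the complained-about dealing...
    let opening = open_dealing(bad, &complaint);
    // 4. ...and each collected opening is verified individually.
    assert!(verify_dealing_opening(bad, &opening));
    // 5. With enough valid openings, the complainer recomputes its share.
    compute_secret_shares_with_openings(dealings, &[opening])
}
```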
If the result matches, then we go to the ACS part
Not exactly. The complaint phase would only happen after the ACS. Before agreeing on the set of dealings to use (i.e. agreeing on a transcript), receivers only sign dealings that pass private verification; if a dealing does not pass this check, the receivers won't sign it. Even if a dealing does not privately verify for a certain receiver, it could still verify for all the others, so such a dealing could still end up in a transcript. It's only after the receivers have agreed on a transcript that they may issue complaints about specific dealings.
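A tiny sketch of this receiver-side rule, with hypothetical names (the real support-share logic lives in the consensus and crypto components):

```rust
struct Dealing;
struct SupportShare;

/// Decrypt our own ciphertext and check it against the dealing's commitment.
fn privately_verify(_d: &Dealing) -> bool {
    true // placeholder
}

fn sign_support(_d: &Dealing) -> SupportShare {
    SupportShare // placeholder for an ed25519 signature on the dealing
}

/// A receiver signs a support share only for dealings it verified privately.
fn on_dealing_received(d: &Dealing) -> Option<SupportShare> {
    if privately_verify(d) {
        Some(sign_support(d)) // broadcast the share to the other nodes
    } else {
        None // don't sign; a complaint may follow once a transcript is agreed
    }
}
```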
using a threshold bls scheme […] and post that signature on chain.
Actually the "support" signature the receivers apply to the dealings is just a standard ed25519 signature, not a BLS threshold signature. At some point we used BLS multi-signatures; however, the cost of verifying the signature shares is one of the bottlenecks of the protocol, and verifying one BLS signature is about two orders of magnitude more expensive than verifying an ed25519 one. Signatures are not individually posted on chain (again, this is to minimize latency). First they are broadcast to the other nodes; anybody who has collected enough support shares can then "aggregate" them into a certificate. Eventually, a blockmaker will have enough certified dealings to construct a transcript, which they can include in their block proposal to add it to the chain.
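Since the support signatures are plain ed25519, the "certificate" is just the collected set of individual signatures rather than a compact threshold signature. A minimal sketch, with illustrative types:

```rust
use std::collections::BTreeMap;

type NodeId = u64; // stand-in
type Ed25519Sig = [u8; 64];

/// Support certificate: enough individual signatures on one dealing.
struct Certificate {
    signatures: BTreeMap<NodeId, Ed25519Sig>,
}

/// Anybody who has collected enough support shares can aggregate them.
fn try_certify(
    shares: &BTreeMap<NodeId, Ed25519Sig>,
    support_threshold: usize,
) -> Option<Certificate> {
    if shares.len() >= support_threshold {
        Some(Certificate { signatures: shares.clone() })
    } else {
        None // keep collecting support shares
    }
}
```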
when do we know that we have enough verified dealers?
I think this is now mostly answered from the above, but let me summarize it again. Only the final transcript goes on chain; all other artefacts are just broadcast to all the participants. The nodes collect the various dealings and the signature shares, and combine these signatures into certificates. Once a blockmaker has enough dealings and certificates, they can propose a new transcript by including it in a block. The other nodes will only accept this if the dealings and certificates are valid, and if it contains enough dealings. The minimum number of dealings to be collected is defined as the collection_threshold, which can be computed, for example, from the IDkgTranscriptParams. After a transcript is included in a finalized block, the nodes will not try to collect any more dealings and will not accept a new transcript (for the same instance of the protocol).
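The blockmaker-side check then looks roughly like this (names are assumptions; the concrete collection_threshold formula depends on the transcript operation, so it is just taken as a parameter here, as if read from the IDkgTranscriptParams):

```rust
struct CertifiedDealing; // a dealing together with its support certificate

struct Transcript {
    dealings: Vec<CertifiedDealing>,
}

/// A blockmaker can propose a transcript once enough certified dealings
/// have been collected.
fn try_build_transcript(
    certified: Vec<CertifiedDealing>,
    collection_threshold: usize,
) -> Option<Transcript> {
    if certified.len() >= collection_threshold {
        Some(Transcript { dealings: certified })
    } else {
        None // keep collecting dealings and certificates
    }
}
```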
Why do we need to do this? Why can't we just run the random protocol?
This is explained, a bit informally, in the docs here. The paper describes a dedicated OpenPower sub-protocol, which is used to reveal the public key of a key generated via the Random protocol. This is needed, e.g., because during signing you need to know the public key corresponding to the secret. However, we decided not to implement such a protocol and instead to reuse the reshare_of_masked protocol. The resharing protocol reveals the public key, exactly as in the OpenPower protocol, but it also enables the resharing of the secret key. The main reason behind this was to minimize the number of sub-protocols used in IDKG and to simplify their orchestration. Note that the resharing protocol is needed anyway, e.g. in case of a membership change in the subnet.
Why relax it, if it introduces more communication rounds and less security?
We don't make any security trade-off here. The hiding property is important during the generation of the key, e.g. to avoid introducing biases in the public key (which could actually lead to some attacks). Once the key is generated it cannot be changed, and therefore we can switch the commitment scheme to an unmasked one. The only thing that is then revealed is the corresponding public key, which is exactly what would be revealed by the OpenPower protocol. In fact, the same commitment scheme is used in that sub-protocol.
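For concreteness, and if I'm reading the paper correctly: the masked scheme is a Pedersen commitment and the unmasked one is a Feldman-style commitment. Writing $p(x) = \sum_j a_j x^j$ for the dealer's sharing polynomial and $r(x) = \sum_j b_j x^j$ for the masking polynomial, with independent generators $g, h$:

$$\text{masked:}\quad C_j = g^{a_j} h^{b_j} \qquad\qquad \text{unmasked:}\quad C_j = g^{a_j}$$

The masked commitments are perfectly hiding, while the unmasked ones reveal $C_0 = g^{a_0}$, which is exactly the public key.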
How do we choose k? There are also other optimizations that I have not understood yet, particularly the DispersedLedger AVID optimization. Where is that part implemented in the repo?
I think the paper specifies what is, asymptotically, the best choice of k. In practice, we would need to evaluate the concrete latency/throughput trade-offs to set the value of k. However, note that these optimizations have not been implemented yet. We are still in the process of evaluating the best strategy to improve the concrete efficiency of the protocol.