It means the entire protocol, defined as:
- A single participant’s proving times
- A single participant’s verification times for all proofs by all participants
- The actual calculations over the signature
all complete in that time, on a single 2GHz core (which also had a myriad of architectural optimizations disabled).
It should be further reducible (~0.5-0.8s), yet I'm quite happy with that time given the real-world performance of a contemporary protocol for a much smaller signing set. With a few threads and the boost clock enabled, I'd expect it to hit a quarter second of computational time and only be beaten by OT protocols (though none have been posited with linear complexity).
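For a sense of why a few threads help so much: the per-participant verifications are independent of one another, so they split cleanly across cores. A minimal sketch of that shape (the `verify_one` stand-in and types are hypothetical, not from the actual codebase):

```rust
use std::thread;

// Hypothetical stand-in for verifying one participant's proof; the real
// work would be the elliptic-curve checks discussed above.
fn verify_one(participant: u32) -> bool {
    participant != u32::MAX
}

// Each participant's proof verifies independently, so the work splits
// cleanly across cores: one scoped thread per chunk, then join.
fn verify_all(participants: &[u32], threads: usize) -> bool {
    let threads = threads.max(1);
    let chunk = ((participants.len() + threads - 1) / threads).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = participants
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().all(|&p| verify_one(p))))
            .collect();
        handles.into_iter().all(|h| h.join().unwrap())
    })
}
```

With verification dominating the per-signer cost, wall-clock time shrinks roughly linearly in the number of cores until the (unparallelized) proving step dominates.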
The two-round protocol I sketched should be much faster, as it only has a single proof in the first round, and the second round's proofs don't need verification if the signature is valid (a much cheaper check). Those proofs only serve to identify misbehaving participants. Even if that requires a hash commitment in an additional round, hashes are of negligible performance cost in this discussion.
(And yes, this protocol's third-round proofs can be similarly delayed. The 3s benchmark included them regardless.)
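The deferral itself is just control flow: check the cheap aggregated signature first, and only pay for proof verification when it fails, solely to assign blame. A minimal sketch, with hypothetical types and boolean stand-ins for the real cryptographic checks:

```rust
// Hypothetical second-round message: the share contributes to the
// signature, the proof exists only to assign blame on failure.
struct RoundTwo {
    participant: u16,
    proof_valid: bool, // stand-in for actually verifying the proof
    share_valid: bool, // stand-in for the share's correctness
}

/// Cheap happy path: if the combined signature verifies, skip every
/// proof. Expensive path: verify proofs solely to identify culprits.
fn finalize(messages: &[RoundTwo]) -> Result<(), Vec<u16>> {
    // Stand-in for verifying the aggregated signature (a single check).
    let signature_valid = messages.iter().all(|m| m.share_valid);
    if signature_valid {
        return Ok(()); // proofs are never touched on this path
    }
    // Signature failed: now (and only now) pay for proof verification.
    Err(messages
        .iter()
        .filter(|m| !m.proof_valid)
        .map(|m| m.participant)
        .collect())
}
```

On the happy path (honest signers), the per-participant proofs are never verified at all, which is where the claimed savings over the benchmarked three-round flow come from.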
You're welcome to run the code for yourself to have the times printed out; `cargo test --release` should be sufficient.
EDIT: I was a bit nervous at having to justify 3s, as 3s is quite atrocious when you forget how atrocious tECDSA itself is. Per *Proposal: 50x Price reduction for chain-key ECDSA signing / threshold ECDSA signing*, however, the baseline is the current throughput of 0.5 signatures/second.
Considering this should jump 4x just by using gmp instead of malachite, that'd put it at 1.25 signatures/second without any of the techniques proposed to further accelerate the existing mechanism (parallelism, so far only run as an experiment and not yet productionized). These numbers likely aren't directly comparable though: the reference hardware is inconsistent, and my number is purely computational, ignoring bandwidth/latency. I believe the protocol is superior on latency regardless (I'd have to double-check bandwidth, yet would presume it as well).
Edit 2: I just realized the 0.5 sigs/second number is likely for the current subnet, which is <1/5th the size of the set my number was evaluated for. There are definitely too many variables in play to do any off-hand comparison, yet I'd love to see a proper one.