Thank you for looking into this!
If quadratic is already optimistic, then it’s understandable.
However, regarding the expected throughput requirements, I can give our estimated best/average/worst-case scenarios.
Please keep in mind that I am referring to peak requirements during high-demand periods, which we could easily face. Also keep in mind that I may still have a fuzzy understanding of how exactly this works on ICP. The below is based on my current observation of canister behavior vs. signatures:
We are implementing a journal within our canister that signs messages as well as unspent outputs (UTXOs) for transactions. The journal length depends on the number of Bitcoin transactions being processed:
best: 100
avg: 250
worst: 500
Based on our expectations, we will need to be able to process 283 journal entries on average.
Each journal entry may process either a single signature or multiple ones, signing n unspents for Bitcoin transactions.
For simplicity, let's assume we'd only have to process 1 signature per journal entry and that our canister is the only one across ICP requesting signatures. We also leave out the UTXO consolidation the canister will need to handle.
Then, at roughly one signature per second, this would take ~283 seconds to process (at best).
Since we are operating on Bitcoin blocks, which have an average block time of 10 minutes, this would fit in theory.
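To make the arithmetic above explicit, here is a quick sketch. The ~1 signature/second rate is my assumption derived from the ~283 s figure; the other numbers are the estimates from this post:

```python
# Back-of-envelope capacity check for a single canister.
# SIGNATURE_RATE is an assumed throughput (~1 signature/second);
# the other constants are the estimates from this post.
SIGNATURE_RATE = 1.0      # assumed signatures processed per second
ENTRIES_PER_BLOCK = 283   # average journal entries per Bitcoin block
BLOCK_TIME = 600          # average Bitcoin block time in seconds

processing_time = ENTRIES_PER_BLOCK / SIGNATURE_RATE
fits = processing_time <= BLOCK_TIME

print(f"{processing_time:.0f}s to process vs. {BLOCK_TIME}s per block"
      f" -> keeps up: {fits}")
# -> 283s to process vs. 600s per block -> keeps up: True
```

So with those assumptions we'd keep up, but with less than half the block time to spare.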
Now imagine there are 2-3 popular canisters on ICP with similar throughput requirements: as I understand it, this would mean our canister could no longer keep up with the Bitcoin blocks we are processing in a consumer-friendly manner.
The risk is that at peak times the journal takes hours or even days to catch up, while users expect their signatures and Bitcoin transactions to be processed close to the current Bitcoin block height.
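To illustrate where the "hours to catch up" fear comes from: if three canisters with similar load share an assumed ~1 signature/second of total capacity evenly (a simplifying assumption on my part), each effectively gets a third of it, and our backlog grows every block instead of clearing:

```python
# Hypothetical sketch of backlog growth under contention.
# Assumes a shared total rate of ~1 signature/second split evenly
# across competing canisters; the per-block numbers are from this post.
SHARED_RATE = 1.0        # assumed total signatures per second on the subnet
CANISTERS = 3            # competing canisters with similar load
ENTRIES_PER_BLOCK = 283  # our average journal entries per Bitcoin block
BLOCK_TIME = 600         # seconds per Bitcoin block
BLOCKS_PER_DAY = 144     # ~one day of Bitcoin blocks

our_rate = SHARED_RATE / CANISTERS            # ~0.33 entries/s for us
processed_per_block = our_rate * BLOCK_TIME   # ~200 entries per block
growth_per_block = ENTRIES_PER_BLOCK - processed_per_block  # ~83 entries

backlog = growth_per_block * BLOCKS_PER_DAY   # entries after one day
hours_behind = backlog / our_rate / 3600      # time to clear it alone

print(f"Backlog after one day: ~{backlog:.0f} entries"
      f" (~{hours_behind:.0f} hours behind)")
# -> Backlog after one day: ~11952 entries (~10 hours behind)
```

Under those assumptions the backlog compounds by ~83 entries per block, so after a single day we'd already be roughly ten hours behind the chain tip.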
The same would apply to anything Ethereum-related (at least when it comes to signatures), except that the block times are significantly shorter than Bitcoin's.
Hope that helps in understanding what the canister is supposed to do and why I feel this could cause bottlenecks.