Question about Chain Fusion Signature Limitations - Will This Become a Bottleneck?

I’ve been following the amazing Chain Fusion developments and trying to understand how all these integrations work. From what I’m reading, ICP uses threshold signatures for Bitcoin (ECDSA), Ethereum (ECDSA), and now Solana (EdDSA) integrations, but I’m seeing some concerning numbers that I’d like to understand better.

What I Think I Understand

From various forum posts and documentation, it seems like:

  • ICP can process about 1 signature per second for cross-chain transactions
  • There’s a 20 signature queue limit per subnet
  • Every Bitcoin transaction needs 1 signature per input
  • Every Ethereum transaction needs 1 signature
  • Every Solana transaction needs 1 signature

My Concerns as a User

With all these exciting developments happening:

  • KongSwap is integrating Bitcoin, Ethereum, and Solana
  • 1sec bridge and other cross-chain applications are launching
  • More Bitcoin ecosystem apps are being built
  • ckETH and ckERC-20 integrations are expanding

I’m wondering if we’re heading toward a situation where these applications will compete for the same limited signing capacity?

Simple Math That Worries Me

If I’m understanding correctly:

  • 1 signature/second = only 86,400 cross-chain transactions per day maximum
  • If KongSwap gets popular and processes even 1,000 cross-chain swaps daily, that’s already consuming a significant portion
  • Add in bridging activity, Bitcoin apps, and other integrations…
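The arithmetic above can be checked in a few lines. This sketch assumes the 1 signature/second figure quoted earlier (the answer below gives the actual per-scheme mainnet rates, which differ):

```python
# Back-of-the-envelope capacity check, assuming ~1 signature/second
# (the figure quoted above; real mainnet rates vary by scheme).

SIGS_PER_SECOND = 1
SECONDS_PER_DAY = 24 * 60 * 60

daily_capacity = SIGS_PER_SECOND * SECONDS_PER_DAY
print(daily_capacity)  # 86400 signatures/day

# If one app alone processed 1,000 cross-chain swaps a day:
app_daily_swaps = 1_000
share = app_daily_swaps / daily_capacity
print(f"{share:.1%}")  # 1.2% of the daily ceiling
```

So a single app doing 1,000 swaps a day would use on the order of 1% of that ceiling; the question is how quickly many such apps add up.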

Am I missing something here? Will I end up waiting minutes or hours for my cross-chain transactions to process if these apps become popular?

Questions I Have

  1. Is this actually a limitation or am I misunderstanding how it works?

  2. Are there plans to increase this capacity? I saw mentions of 10x improvements but no clear timeline.

  3. How will this affect user experience as more Chain Fusion apps launch?

  4. Should I be concerned about transaction delays in the future?

  5. Are there workarounds that applications can use to avoid this bottleneck?

I also saw something about an IKA.xyz team mentioning 10,000 TPS threshold signatures - is that something that could help ICP?

Just Want to Understand

I’m really excited about Chain Fusion and what it means for the ecosystem, but I want to understand if there are infrastructure challenges I should be aware of as a user.

Are my concerns valid, or am I overthinking this? Would love to hear from anyone who understands this better than I do.


I would imagine the bottleneck could be resolved with more subnets?

1 subnet per 1 signature? That doesn't seem like a good way to scale to me.

Not 1 signature per subnet… just more subnets. Atm I believe there are 1 or 2 subnets generating keys, correct me if I am wrong… So add and utilize more subnets dedicated to key generation. It's like any cloud provider: computation is sharded across multiple networks communicating with each other.

TL;DR: Your concerns are valid, but the system has headroom, and improvements are already in the pipeline. We’re nowhere near the ceiling yet, and the infrastructure is designed to grow as demand does.

Are my concerns valid, or am I overthinking this? Would love to hear from anyone who understands this better than I do.

You’re absolutely right to be thinking about this. Right now, the signing subnet on ICP handles roughly 0.55 ECDSA sigs/sec and about 1.1 sigs/sec for BIP340 and EdDSA (each). That’s the baseline today, but there’s a lot of flexibility built in.
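Plugging those measured rates into the same daily-capacity math as the question gives a rough per-scheme ceiling (rates taken from the figures just quoted; these are approximations, not guarantees):

```python
SECONDS_PER_DAY = 86_400

# Approximate current mainnet signing rates quoted above (sigs/sec)
rates = {"ECDSA": 0.55, "BIP340": 1.1, "EdDSA": 1.1}

for scheme, rate in rates.items():
    print(f"{scheme}: ~{rate * SECONDS_PER_DAY:,.0f} sigs/day")
# ECDSA: ~47,520 sigs/day
# BIP340: ~95,040 sigs/day
# EdDSA: ~95,040 sigs/day
```

Note these are per-scheme rates, so the schemes don't compete with each other for the same 0.55/s budget.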

Is this actually a limitation or am I misunderstanding how it works?

The system isn't capped in a rigid way. There are various subnet configuration parameters that could already be tuned today to increase performance, but so far there hasn't been much need, since current demand hasn't pushed the limits. Moreover, if usage spikes, keys could be reshared to multiple subnets and requests could be load-balanced across all signing subnets. That's one of the benefits of ICP's horizontal scaling.

There’s a 20 signature queue limit per subnet

This is not a hard constraint and could be easily increased. The reason we initially proposed a low number is to provide a better user experience. Ultimately this affects how long you may need to wait for your signature, so it may be preferable to immediately receive an error so that you can retry later, rather than keep waiting for a long time.
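This fail-fast behavior can be illustrated with a toy model. The queue size matches the limit of 20 discussed above, but the class and error message are made up for illustration, not the actual replica API:

```python
from collections import deque

QUEUE_LIMIT = 20  # per-subnet signature queue cap mentioned above


class SignatureQueue:
    """Toy model of a bounded signing queue (illustrative only)."""

    def __init__(self, limit=QUEUE_LIMIT):
        self.limit = limit
        self.pending = deque()

    def submit(self, request):
        # Fail fast instead of letting callers wait indefinitely:
        # a full queue returns an error so the caller can retry later.
        if len(self.pending) >= self.limit:
            raise RuntimeError("queue full, retry later")
        self.pending.append(request)


q = SignatureQueue()
for i in range(QUEUE_LIMIT):
    q.submit(f"req-{i}")

# The 21st request is rejected immediately rather than queued:
try:
    q.submit("req-20")
except RuntimeError as e:
    print(e)  # queue full, retry later
```

The design trade-off is exactly the one described above: a small cap means callers learn quickly that they should retry, instead of sitting in a long queue.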

Are there plans to increase this capacity?

There are concrete plans to improve capacity! Work on this was paused for a bit to focus on launching vetKeys, but now that it’s out, we’re picking it back up. We’re targeting two main improvements: boosting maximum throughput, and improving how well the system handles bursty demand. While raw throughput is important, in practice the service tends to sit idle most of the time and then gets hit with short bursts of requests. Right now, it doesn’t handle those bursts as efficiently as it could, so there’s a lot of room for improvement there.

Just to give a bit more insight into what we’re working on. These threshold protocols are composed of two phases:

  • Offline precomputation phase: This is where nodes generate pre-signatures. It’s the computationally expensive part of the protocol, but it can be done ahead of time, without knowing who’s signing or what the message is.
  • Online signing phase: This happens when the actual request comes in. If a pre-signature is already available, this step is much faster than computing one from scratch.

Ideally, when the system is idle, nodes would build up a stash of pre-signatures so they’re ready to handle bursts of incoming requests quickly. But once that stash runs out, nodes have to go through both phases in real time, which slows things down significantly.

Right now, there’s a limitation: pre-signatures are stored in consensus blocks, which restricts how many we can stash, currently capped at 5 per threshold key. This isn’t a hard limit and it could be increased, but doing so would eventually affect the finalization rate, since larger blocks take longer to process. As a result, in a burst scenario, only the first 5 signatures are handled quickly using the precomputed stash, while the rest have to wait for new pre-signatures to be generated in real time. The current maximum throughput numbers on mainnet reflect this scenario when no pre-signatures are available.
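The burst behavior described above can be sketched as a small simulation. The stash cap of 5 matches the text; the latency numbers are made-up placeholders purely to show the shape of the effect, not measured values:

```python
PRESIG_STASH_CAP = 5   # current cap per threshold key, per the text above
ONLINE_MS = 1          # illustrative: fast path, pre-signature ready
OFFLINE_MS = 10        # illustrative: pre-signature must be generated first


def burst_latencies(n_requests, stash=PRESIG_STASH_CAP):
    """Latency (in made-up ms units) for each request in a burst."""
    latencies = []
    for _ in range(n_requests):
        if stash > 0:
            stash -= 1                                # consume a stashed pre-signature
            latencies.append(ONLINE_MS)               # online phase only
        else:
            latencies.append(OFFLINE_MS + ONLINE_MS)  # both phases in real time
    return latencies


print(burst_latencies(8))  # [1, 1, 1, 1, 1, 11, 11, 11]
```

The first 5 requests in a burst ride the stash and only pay the online cost; everything after that pays for both phases, which is why increasing the stash (or moving pre-signatures out of consensus blocks) helps bursty workloads.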

I also saw something about an IKA.xyz team mentioning 10,000 TPS threshold signatures - is that something that could help ICP?

My understanding is that those numbers are theoretical results. If you look at this paper, which proposes heavy optimizations to the same threshold Schnorr protocol used by ICP, it claims up to 50k sigs/s with a large committee. We plan to eventually integrate some of those optimizations, but making them practical may require some work. Moreover, the IKA model seems quite different from the one we have. See also this thread for some comparison.
