What is the theoretical number for txns per second on the Internet Computer right now?

How would these numbers translate to an individual canister? I’m trying to figure out the practical performance limits in queries/sec and updates/sec for individual canisters.

Hi @lastmjs,

This question is hard to answer, as it depends mostly on your canister code.

In the experiments we have been running, there is actually only a single canister per subnetwork. So the numbers we have reported can be achieved with just a single canister.

The reason for this is that the bottleneck in our experiment is the consensus throughput, which doesn’t depend on where ingress messages go.

Depending on your canister, the bottleneck might also be the canister code itself. Only one update call can execute on a given canister at a time, so the per-call execution time naturally gives an upper bound on the number of updates/s.

For example, if a single update call takes 10ms to execute, you will not be able to execute more than 100 updates/s on that canister (100 * 10ms = 1s).

I just checked our internal metrics and it appears that the large majority of update calls during the last couple of days have completed within 1 ms, suggesting that for many canisters it should be possible to achieve close to the above-mentioned 900 updates/s when the subnetwork is otherwise idle.
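To make the arithmetic concrete, here is a back-of-envelope sketch in Python (the per-call durations are purely illustrative, and the ~900 updates/s consensus figure is the one discussed in this thread):

```python
# Back-of-envelope bound: updates on a single canister execute sequentially,
# so the canister-level ceiling is 1 / (per-call execution time), and the
# subnet-wide consensus throughput caps it further.

CONSENSUS_LIMIT_PER_S = 900  # example subnet-wide ingress figure from this thread

def max_updates_per_s(exec_time_s: float) -> float:
    per_canister_limit = 1.0 / exec_time_s
    return min(per_canister_limit, CONSENSUS_LIMIT_PER_S)

print(max_updates_per_s(0.010))  # 10 ms per call -> 100 updates/s (canister-bound)
print(max_updates_per_s(0.001))  #  1 ms per call -> 900 updates/s (consensus-bound)
```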

Hope that helps!

2 Likes

What is the time for a no-op update message, i.e. how long does it take just to spin up the wasm VM?

I have done tests on mainnet and gotten 450 tps through to a single canister. It's tricky, though, to submit them at this high rate, at a constant rate, and to multiple boundary nodes in parallel. Above that rate I saw timeout errors, but at 450 tps they were all processed with no errors. The updates were short though, maybe around 10k cycles each.

And they were anonymous calls. I couldn’t sign fast enough to make signed requests on the fly.
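For reference, pacing submissions at a constant rate can be sketched roughly like this (Python asyncio; `submit_update` is a hypothetical stand-in for the actual anonymous call to a boundary node, not the exact tooling I used):

```python
import asyncio
import time

TARGET_TPS = 450   # desired constant submission rate
DURATION_S = 10    # how long to sustain it

async def submit_update(i: int) -> None:
    """Placeholder: replace with a real (anonymous) update call via your agent."""
    await asyncio.sleep(0)  # stand-in for the HTTP request to a boundary node

async def run() -> None:
    interval = 1.0 / TARGET_TPS
    start = time.monotonic()
    tasks = []
    for i in range(TARGET_TPS * DURATION_S):
        # Pace submissions against the wall clock so the rate stays constant
        # even if individual requests are slow.
        target = start + i * interval
        delay = target - time.monotonic()
        if delay > 0:
            await asyncio.sleep(delay)
        tasks.append(asyncio.create_task(submit_update(i)))
    await asyncio.gather(*tasks)

asyncio.run(run())
```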

3 Likes

I cannot give you precise numbers right now, but we do see a lot of update calls with a duration of less than 100 µs on mainnet.

I know this is not exactly what you asked, but that should be an indication that system overheads are rather low.

1 Like

Re creating load at a given request rate, you might want to look at:

A lot of what you want to achieve should be possible out of the box. The only (somewhat) difficult part is to run against multiple boundary nodes, as you would need to use multiple workload generators and hope that they get different IP addresses for the boundary node DNS entries, which is only really the case if you deploy them in different geographic locations.
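As a quick way to check which boundary node addresses a given workload generator machine would actually hit, something like this works (Python; the hostname is just an example, substitute whatever your deployment resolves against):

```python
import socket

# Example boundary-node hostname; substitute whatever your setup uses.
HOSTNAME = "ic0.app"

# Each workload-generator machine can run this to see which boundary node
# address(es) its resolver hands out; generators in different regions will
# typically get different IPs, which is what spreads the load.
addrs = sorted({info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)})
for addr in addrs:
    print(addr)
```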

Happy to help you with concrete benchmarking requirements :slight_smile:

2 Likes

Realistically, by how many orders of magnitude do you expect that number to increase? While 900 updates/s per subnet might currently be considered performant in the blockchain world, it really isn't that much, especially taking into account the tradeoffs made to achieve it.

If the IC truly aims to be a viable alternative to the centralized cloud and eventually lead to a “blockchain singularity”, it needs to provide much higher throughput. While higher aggregate throughput can already be achieved by spreading the load across different subnets, scaling horizontally is limited by data dependencies and cross-subnet messaging latency, and it comes with an increased cognitive load for devs, which is the opposite of what DFINITY initially advertised the IC would do.

Financial exchanges process hundreds of thousands of updates/s per trading pair due to HFTs and arbitrageurs; that is more than the tps of all existing subnets combined! Am I missing something, or are there improvements planned which will substantially increase throughput?

3 Likes

One obvious thing to do is to have subnets that are more localized.

The rate of ingress messages that can be processed largely depends on how fast we can create blocks, and that in turn depends on the geographic distance between nodes (i.e., latency). That's because we need to give nodes enough time (multiple round trips) before we can be sure all nodes have seen all artifacts that go into the block.

If we were to spin up subnets that are, say, in the US only or in Europe only, we could probably already configure a much higher block rate with the current technology, since less time is required until all nodes have seen all artifacts.
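As a rough illustration of why geography matters, one can plug some made-up numbers into the round-trip reasoning above (these are illustrative assumptions, not measured IC figures):

```python
# Rough illustration: the block interval is bounded below by the number of
# round trips consensus needs times the worst-case node-to-node RTT.
# All numbers here are illustrative assumptions, not measured IC figures.

ROUND_TRIPS = 3        # assumed round trips needed before a block can be finalized
MSGS_PER_BLOCK = 1000  # assumed ingress messages per block

for label, rtt_s in [("globally distributed", 0.150), ("single region", 0.020)]:
    block_interval = ROUND_TRIPS * rtt_s
    print(f"{label}: ~{1 / block_interval:.1f} blocks/s, "
          f"~{MSGS_PER_BLOCK / block_interval:.0f} ingress msgs/s")
```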

The IC's way of doing consensus is theoretically not that much slower than “web2”-style consensus (3f + 1 nodes instead of 2f + 1), but our system is much more decentralized.

Does this make sense?

1 Like

Very helpful, thank you!

1 Like

Where are these results published?

But wouldn’t that partially defeat the purpose of deterministic decentralization and reduce the “sovereignty” of those subnets? If they are hosted on nodes in the same jurisdiction, that would negatively impact the degree of decentralization of the dApps on those subnets, even more so than the low node count already does.
This would be rather problematic for financial exchanges. On top of that, there would need to be sync systems between regional shards, or users in faraway countries would see increased latency, resulting in a worse UX. In terms of updates, what are the potential gains such a solution could bring?

All in all, I’m not a huge fan of this solution; it seems to compromise even more on the initial vision proposed by DFINITY, for performance gains which might not even bring it significantly closer to our end goal.

1 Like

Where are these results published?

We don’t publish our weekly performance numbers yet, but we are planning to do so once we find some time to brush up the dashboards in a way that makes sense externally. A lot of the benchmarks we are running are quite complex, and while their results make sense to the team that developed them, a lot of extra explanation needs to be added so that they make sense for external consumption without that context.

1 Like

Yes, such subnets would definitely be less decentralized and you would not want to use them for financial transactions.
They might make sense for a lot of other applications though.

My point was that, in general, if you want to get close to the speed of web2, you can do so relatively straightforwardly on the IC as well. However, you will lose some (but not all) of the benefits of the IC. Also, such a subnet will still be less centralized than a web2 cloud provider, since close-by data centers could still be operated by different entities.

There are probably also optimizations we can do in the protocol to increase the throughput of ingress messages without sacrificing decentralization, but those are less straightforward and therefore likely a little bit further out in the future.

2 Likes