How many shared instructions/computations per block per subnet?

What is the number of available instructions that can be shared by all canisters on a single subnet per block?

I’m thinking about this in the context of compute allocation: how many canisters could reserve compute allocation such that, in the case where:

  • All of your canisters have a compute allocation of 100%
  • All other canisters on the subnet have the default compute allocation (0)

all of your canisters would be guaranteed to run in every block (and starve compute from other canisters on the subnet)?
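
To make the scenario concrete, here’s a toy sketch (Rust) of the check I have in mind; the capacity number is a made-up placeholder, since the real per-subnet limit is exactly what I’m asking about:

```rust
/// Toy check for the scenario above: do all of the reserved compute
/// allocations fit within whatever the subnet can actually guarantee?
/// `allocatable_capacity_percent` is a placeholder parameter here.
fn all_guaranteed(reserved_percent: &[u32], allocatable_capacity_percent: u32) -> bool {
    reserved_percent.iter().sum::<u32>() <= allocatable_capacity_percent
}

fn main() {
    // e.g. three of my canisters at 100% each; everyone else at the default 0
    let mine = [100, 100, 100];
    // made-up capacity, just for illustration
    println!("guaranteed every round: {}", all_guaranteed(&mine, 300));
}
```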


It’s up to 4 * 7B instructions per round.

starve compute from other canisters on the subnet

The allocatable compute capacity is limited to (scheduler_cores - 1) * 100 - 1, which ensures progress for long executions (DTS) and best-effort canisters with zero compute allocation.
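
For illustration, assuming 4 scheduler cores (just an example figure; the actual core count can differ per subnet), that works out to:

```rust
/// Allocatable compute capacity as described above:
/// (scheduler_cores - 1) * 100 - 1
fn allocatable_capacity(scheduler_cores: u32) -> u32 {
    (scheduler_cores - 1) * 100 - 1
}

fn main() {
    // Assumption: 4 scheduler cores per replica (illustrative only).
    let capacity = allocatable_capacity(4); // 299
    // At 100% each, at most capacity / 100 canisters can be fully guaranteed.
    println!("capacity = {}%, canisters at 100% = {}", capacity, capacity / 100);
}
```

So under that assumption, at most 2 canisters could hold a 100% compute allocation, with the remaining 99% available to split across other canisters.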


There are 28B instructions per round, and each canister can execute 5B instructions per round/block. So I’d assume that, at this point in time, the most efficient use of a subnet would be 5 canisters executing at capacity (~25B instructions per block).

Right now, most 13-node application subnets have a finalization rate of 2-2.5 blocks per second. So if a subnet is fully maxed out, I’d expect to see on the order of 50-62.5B instructions executed per second.

If you look at these subnets, however, you’ll see roughly 15-20B instructions executed per second, meaning the subnet is hitting a limit at around 40% of the expected value above.

Is my intuition here correct in calculating the maximum possible instruction throughput? And why is throughput maxing out at around that level?
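
For reference, the arithmetic behind my numbers is just this (using the figures quoted above):

```rust
fn main() {
    let per_canister_limit: f64 = 5e9; // 5B instructions per canister per round (as stated above)
    let round_limit: f64 = 28e9;       // 4 * 7B instructions per round
    let canisters_at_capacity = (round_limit / per_canister_limit).floor(); // 5
    let per_block = canisters_at_capacity * per_canister_limit;             // ~25B per block

    // Current finalization rate of most 13-node app subnets: 2-2.5 blocks/s
    for blocks_per_second in [2.0_f64, 2.5] {
        println!(
            "{} blocks/s -> {:.1}B instructions/s expected",
            blocks_per_second,
            per_block * blocks_per_second / 1e9
        );
    }
}
```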


From our other conversation: obviously physical space and compute aren’t 1:1, but I imagine the answer is closer to “it depends” than “only large canisters”.

Of course, this likely depends on the answer to the question “can 100 canisters each using only 0.28B instructions be scheduled in the same round?” I’d imagine there is a bit of overhead, but how much? And obviously the scheduler at some point has to say to itself, “I don’t think we can pack much more in here”.
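
In other words, the question is whether the round budget can be filled by many small executions instead of a few large ones, ignoring whatever per-message overhead exists (which is exactly the part I’m unsure about):

```rust
fn main() {
    let round_limit: f64 = 28e9;   // 4 * 7B instructions per round
    let per_message: f64 = 0.28e9; // 0.28B instructions per small message
    // Ignoring scheduler overhead (the open question), this many small
    // messages would fit in one round purely by instruction count.
    println!("small messages per round: {}", (round_limit / per_message).floor()); // 100
}
```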


Where do you get the 5B instruction limit from? Without DTS, it seems like canisters can execute 2B instructions per round.

Ah, thanks for the link. I had previously seen 5B mentioned in several places in the Deterministic time slicing thread.

The 5B limit is also mentioned here: Error Codes returned by Internet Computer - Internet Computer Wiki


Yes, you are right. For a replicated query call the instruction limit is indeed 5B; for some reason I was only considering update calls :upside_down_face:.

each canister can execute 5B instructions per round/block

After a discussion with @berestovskyy it seems like there’s more to it. He’ll explain in a bit more detail once he finds the time.

But in short, citing him:

Basically, the explanation is here:

The reason it’s so complicated is that the limit is “soft”. We start a new execution only if we still have at least 5B instructions left in the round. We limit each execution to 2B (updates) or 5B (queries).

Once we’ve executed 2B+ instructions, we have less than 5B instructions left in the round (7B − 2B+ < 5B). That’s not enough to start a new execution, so we finish the round.

To reach 7B we need: any executions up to 2B + 5B query (in one go)
To reach 4B we need: any executions up to 2B + 2B update (in one go)
Otherwise, executing small messages, we finish the round once we’ve executed more than 2B instructions…
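
In other words, the per-core round logic is roughly the following sketch (my paraphrase of the rule above, not actual replica code):

```rust
// Rough model of the "soft" round limit described above (not replica code).
const ROUND_LIMIT: u64 = 7_000_000_000;     // 7B instructions per core per round
const START_THRESHOLD: u64 = 5_000_000_000; // start a new execution only if >= 5B is left
const UPDATE_LIMIT: u64 = 2_000_000_000;    // 2B per update execution
const QUERY_LIMIT: u64 = 5_000_000_000;     // 5B per (replicated) query execution

#[derive(Clone, Copy)]
enum Message {
    Update(u64), // instructions the message would actually use
    Query(u64),
}

fn run_round(messages: &[Message]) -> u64 {
    let mut executed = 0;
    for msg in messages {
        // The "soft" limit: only start a new execution if enough budget remains.
        if ROUND_LIMIT - executed < START_THRESHOLD {
            break;
        }
        executed += match msg {
            Message::Update(n) => (*n).min(UPDATE_LIMIT),
            Message::Query(n) => (*n).min(QUERY_LIMIT),
        };
    }
    executed
}

fn main() {
    // Updates totalling 2B followed by a 5B query reach the full 7B...
    let a = [Message::Update(2_000_000_000), Message::Query(5_000_000_000)];
    // ...while a stream of small updates stops shortly after crossing 2B.
    let b = [Message::Update(1_000_000_000); 7];
    println!("{} {}", run_round(&a), run_round(&b)); // 7000000000 3000000000
}
```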

This feels off. Why would queries be allowed more instructions than updates? 4X DTS provides update calls with 20B instructions, or 4 times the 5B. You’d need 10 rounds if update calls were limited to 2B per round.
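
Just to spell out the arithmetic: with the 20B DTS budget quoted in the DTS thread, the number of rounds a sliced update would span is:

```rust
fn main() {
    let dts_budget: u64 = 20_000_000_000; // 20B instructions for a 4X DTS update call (as quoted above)
    println!("at 5B per round: {} rounds", dts_budget / 5_000_000_000); // 4
    println!("at 2B per round: {} rounds", dts_budget / 2_000_000_000); // 10
}
```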

Additionally, queries aren’t replicated, so this is slightly confusing. Finally, aren’t there different cores in the replica that handle updates vs. queries?

The rest, about not including a new update message in the round unless there are at least 5B instructions left, makes sense.