Does a loop containing an inter canister call execute each iteration as part of a different block?

If I have a loop that makes an inter-canister call on every iteration, does each iteration execute as part of a subsequent block?

The concern behind this question is exhausting the instruction limit when running the entire loop. But if the iterations execute as part of separate blocks, then the entire loop will run to completion, since no single iteration would go over the instruction limit.

My primary use case is upgrading my dynamic canisters to a newer WASM without having to worry about running into instruction/cycle limits.

I remember reading that await calls are atomic, which implies they go through consensus, hence my assumptions above 🙂

Each await will likely see a different block. As for how the accounting of cycles goes, I’m not sure. I was under the impression that each ingress call got a certain amount of cycles to spend, and that when they were gone, they were gone: if awaits had occurred before the exhaustion, then that state would be committed, but execution would stop.

There was some discussion that you actually do get a fresh set of cycles after every await, but I haven’t confirmed it.

There is a third scenario where you do some awaits and end up with a bunch of committed state, but then something at the end exhausts the fresh set of cycles and you ultimately get an error with some committed state.

@claudio may have some insight.


Inter-canister calls themselves do not commit or suspend the current execution, but await does(*). If you await inside a loop, then yes, every iteration will run as a separate method execution. Technically, that doesn’t necessarily mean it runs in a different block, since the system may decide to still pull it into the current one if there is time. In fact, block boundaries are completely transparent to canisters; the only thing they can observe is the order of commits and method executions.

(*) You can initiate inter-canister calls without awaiting immediately, or at all. But the messages are only actually sent out once execution reaches a commit point, i.e., the next await or the end of the entry method. If there is a rollback due to failure, the messages since the last commit point are not sent.
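A rough Python sketch of the two call patterns described above: awaiting each call inside the loop versus initiating several calls first and awaiting afterwards. This only mimics the control-flow shape — `upgrade` is a hypothetical stand-in for an inter-canister call, and note that Python coroutines are lazy, whereas on the IC a call is enqueued immediately and only actually sent out at the next commit point.

```python
import asyncio

async def upgrade(canister_id: str) -> str:
    # Hypothetical stand-in for an inter-canister call.
    await asyncio.sleep(0)
    return f"upgraded {canister_id}"

async def sequential(ids):
    # One await per iteration: on the IC, each resumption after the
    # await is a separate method execution.
    return [await upgrade(i) for i in ids]

async def batched(ids):
    # Initiate all calls first, await them afterwards; on the IC the
    # calls would only be sent out at the next commit point.
    futures = [upgrade(i) for i in ids]
    return [await f for f in futures]

print(asyncio.run(sequential(["a", "b"])))
print(asyncio.run(batched(["a", "b"])))
```

Either shape completes here; the difference on the IC is only how many separate message executions (and hence commit points) the loop produces.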


So, my use case here is:

- I have an index canister of sorts that has spun up a large number of canisters dynamically, by sending requests to the management canister to create and install canisters based on an end-user login. It also sets itself as the controller of these dynamically created canisters.
- This index canister maintains the list of canister IDs in a stable data structure.
- It also has the WASM for the dynamically generated canisters embedded inside it as a byte string, which gets swapped out for newer versions when the index canister itself is updated.

What I’m trying to do is, on a function call, loop through all canister IDs and upgrade them one by one. If an upgrade fails, the function notes that canister ID in a separate list so it can be retried/handled manually later. My concern was that, since this list could grow arbitrarily large, at some point the function call would exceed the allowed instruction limit in a single block.
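A hedged Python simulation of that loop's control flow (not actual canister code: `install_code` here is a hypothetical stand-in for the management canister's upgrade call, rigged to fail for one ID so the retry list is exercised):

```python
import asyncio

async def install_code(canister_id: str, wasm: bytes) -> None:
    # Hypothetical stand-in for the management canister's install_code
    # call in upgrade mode; fails for one canister in this simulation.
    await asyncio.sleep(0)
    if canister_id == "bad":
        raise RuntimeError("upgrade failed")

async def upgrade_all(canister_ids, wasm: bytes):
    failed = []  # IDs to retry / handle manually later
    for cid in canister_ids:
        try:
            # On the IC, awaiting here makes each iteration a separate
            # message execution with its own instruction budget.
            await install_code(cid, wasm)
        except Exception:
            failed.append(cid)
    return failed

print(asyncio.run(upgrade_all(["a", "bad", "c"], b"\x00asm")))
```

The try/except per iteration means one failed upgrade doesn't abort the rest of the loop, which matches the retry-list design described above.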

So, essentially my question is: would awaiting those upgrade calls inside the loop necessarily put them in separate blocks, so that I wouldn’t run into the instruction limit mentioned above? Or is my concern invalid, and am I free to not await those calls at all?

Yes, every commit point essentially resets the per-message instruction limit.

FWIW, instruction limits apply to individual message executions. Which blocks those executions are scheduled into is up to the system and completely irrelevant to the application. In general, you should forget about blocks entirely: they are an implementation detail of the IC and not relevant for anything as far as canisters are concerned.
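To make the reset concrete, here is a toy model (the numbers are made up; only the shape of the rule matters): each slice of execution between commit points gets its own fresh budget, so many small slices succeed where a single large slice would fail.

```python
# Toy per-slice instruction budget. The real IC limit is not this
# number; this only illustrates that the budget resets at every
# commit point rather than accumulating across awaits.
BUDGET = 1000

def run_slices(slice_costs):
    # Each element is the instruction cost of one execution slice
    # between commit points; each slice gets a fresh budget.
    return all(cost <= BUDGET for cost in slice_costs)

# Ten slices of 900 instructions each succeed (9000 in total, but each
# slice stays under the 1000-instruction budget)...
assert run_slices([900] * 10)
# ...while one 1500-instruction slice fails despite a smaller total.
assert not run_slices([1500])
```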


Got it, thank you.

I was referring to commit points synonymously with blocks, but as you pointed out, blocks are an implementation detail that canister devs needn’t be concerned about.