Canister ic3o3-qiaaa-aaaae-qaaia-cai exceeded the cycles limit for single message execution

I was uploading mp3 files to an asset canister for the latest episode of my podcast Demergence, and I hit an error I had never seen before when uploading assets:

Deploying all canisters.
All canisters have already been created.
Building canisters...
Installing canisters...
Upgrading code for canister assets-0, with canister_id ic3o3-qiaaa-aaaae-qaaia-cai
Authorizing our identity (lastmjs-mercury-0) to the asset canister...
Uploading assets to asset canister...
  /demergence.rss 1/1 (2869 bytes)
  /episodes/demergence-0.mp3 (34071925 bytes) sha 345dc9e344be6abc5d47db303a4a3ebb7291dd61b8aefdcf7898e5b6cf8e32a9 is already installed
  /episodes/demergence-1.mp3 (33805254 bytes) sha 573a8d271f3a02113c657d19e23f873de6dcc9744abeb66b029bb233b24f0120 is already installed
  /episodes/demergence-2.mp3 1/36 (1900000 bytes)
  /episodes/demergence-2.mp3 2/36 (1900000 bytes)
  /episodes/demergence-2.mp3 3/36 (1900000 bytes)
  /episodes/demergence-2.mp3 4/36 (1900000 bytes)
  /episodes/demergence-2.mp3 5/36 (1900000 bytes)
  /episodes/demergence-2.mp3 6/36 (1900000 bytes)
  /episodes/demergence-2.mp3 7/36 (1900000 bytes)
  /episodes/demergence-2.mp3 8/36 (1900000 bytes)
  /episodes/demergence-2.mp3 9/36 (1900000 bytes)
  /episodes/demergence-2.mp3 10/36 (1900000 bytes)
  /episodes/demergence-2.mp3 11/36 (1900000 bytes)
  /episodes/demergence-2.mp3 12/36 (1900000 bytes)
  /episodes/demergence-2.mp3 13/36 (1900000 bytes)
  /episodes/demergence-2.mp3 14/36 (1900000 bytes)
  /episodes/demergence-2.mp3 15/36 (1900000 bytes)
  /episodes/demergence-2.mp3 16/36 (1900000 bytes)
  /episodes/demergence-2.mp3 17/36 (1900000 bytes)
  /episodes/demergence-2.mp3 18/36 (1900000 bytes)
  /episodes/demergence-2.mp3 19/36 (1900000 bytes)
  /episodes/demergence-2.mp3 20/36 (1900000 bytes)
  /episodes/demergence-2.mp3 21/36 (1900000 bytes)
  /episodes/demergence-2.mp3 22/36 (1900000 bytes)
  /episodes/demergence-2.mp3 23/36 (1900000 bytes)
  /episodes/demergence-2.mp3 24/36 (1900000 bytes)
  /episodes/demergence-2.mp3 25/36 (1900000 bytes)
  /episodes/demergence-2.mp3 26/36 (1900000 bytes)
  /episodes/demergence-2.mp3 27/36 (1900000 bytes)
  /episodes/demergence-2.mp3 28/36 (1900000 bytes)
  /episodes/demergence-2.mp3 29/36 (1900000 bytes)
  /episodes/demergence-2.mp3 30/36 (1900000 bytes)
  /episodes/demergence-2.mp3 31/36 (1900000 bytes)
  /episodes/demergence-2.mp3 32/36 (1900000 bytes)
  /episodes/demergence-2.mp3 33/36 (1900000 bytes)
  /episodes/demergence-2.mp3 34/36 (1900000 bytes)
  /episodes/demergence-2.mp3 35/36 (1900000 bytes)
  /episodes/demergence-2.mp3 36/36 (244436 bytes)
  /index.html 1/1 (1530 bytes)
  /index.html (gzip) 1/1 (565 bytes)
  /demergence.png (28290 bytes) sha 9f33c93b78d0c93f6aa81c4e25f3b5c254e52d143c8e179de34f05946b65b68b is already installed
The Replica returned an error: code 5, message: "Canister ic3o3-qiaaa-aaaae-qaaia-cai exceeded the cycles limit for single message execution."

The previous mp3 files I have uploaded were 34.1MB and 33.8MB. This file was 66.7MB. I am not sure why I would run into this error when the SDK is already chunking the upload. I would love to see this fixed. For now I can reduce the size of the audio file.

1 Like

I’ve alerted folk internally…

3 Likes

I assume you are using a recent version of dfx.

The assets canister recently went through a complete rewrite to support the certification feature: it adds an IC-Certificate header that allows the requester to check the authenticity of the contents. All requests to <subdomain>.ic0.app go through a service worker that does the validation. Having this built into the default assets canister allows devs to benefit from the feature without having to understand how certification works.

When all the chunks of an asset are uploaded, the new canister recomputes the SHA-256 checksum of the whole asset to validate it against the one specified by the caller before inserting the hash into the certification tree.

It’s hard to know for sure (canister debugging and profiling features are still very rudimentary), but my guess is that recomputing that hash for a 66.7MiB file is too expensive to do in one go.

I see a few ways to solve this problem:

  1. Unconditionally trust the SHA-256 hash provided by the caller and trap if it’s not set. This might lead to hard-to-debug issues, but is probably safe as long as all the uploading is done by DFX.
  2. Make the code a bit smarter and spread the hash computation over multiple rounds of execution (roughly as in the sketch after this list).
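
To make option 2 concrete, here is a minimal Motoko sketch of spreading an expensive computation over several update calls. The names are illustrative and `absorb` is only a placeholder for an incremental SHA-256 update step; this is not the actual asset canister code:

```motoko
// Minimal sketch: hash uploaded chunks a few at a time, spreading the work
// over several update calls instead of a single message execution.
// `absorb` is only a placeholder for an incremental SHA-256 update step.
import Buffer "mo:base/Buffer";
import Nat8 "mo:base/Nat8";
import Nat32 "mo:base/Nat32";

actor {
  let chunks = Buffer.Buffer<Blob>(0);
  var hashState : Nat32 = 0; // stand-in for real SHA-256 state
  var nextChunk : Nat = 0;

  public func uploadChunk(chunk : Blob) : async () {
    chunks.add(chunk);
  };

  // Placeholder: a real implementation would feed the chunk into a SHA-256 context.
  func absorb(state : Nat32, chunk : Blob) : Nat32 {
    var acc = state;
    for (byte in chunk.vals()) {
      acc := acc +% Nat32.fromNat(Nat8.toNat(byte));
    };
    acc
  };

  // Process at most `batch` chunks per call; the uploader (or a timer/self-call)
  // keeps calling this until it returns true, so no single message does all the work.
  public func continueHashing(batch : Nat) : async Bool {
    var processed = 0;
    while (processed < batch and nextChunk < chunks.size()) {
      hashState := absorb(hashState, chunks.get(nextChunk));
      nextChunk += 1;
      processed += 1;
    };
    nextChunk >= chunks.size()
  };
}
```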

I’ll create an issue to handle this case.

Note that certification requires the whole asset to be downloaded by the service worker before it’s validated, so if you go through the validated <subdomain>.ic0.app path and not <subdomain>.raw.ic0.app, using large asset files might result in a poor user experience.

9 Likes

Thanks for the detailed response. I’m currently using <subdomain>.raw.ic0.app for everything, as unfortunately the whole service worker validation thing seems to have been a bad user experience for me thus far.

1 Like

I suspect this is why the latest episode cut off just as you started talking about security. Not sure if I got a cached copy before you tried to upload a smaller version.

1 Like

I just managed to re-download it, and the episode went from something like 31m36s to 51m17s.

2 Likes

Interesting, that is strange to me because I wouldn’t think the asset canister had uploaded everything if it failed… perhaps it is not atomic? I thought it would either upload all of the files or fail.

Deployment of assets is an atomic operation: if the commit fails, the incomplete assets aren’t used for serving.

1 Like

I just ran into this error. I was deploying hundreds of separate files, maybe 50 MB total deploy size. It would successfully upload all the static assets, but then fail during the final cryptographic hash computation (or whatever happens after all the static assets are uploaded). Each deploy attempt also burned a lot of cycles (maybe 7.5T cycles per attempt).

Is this a “too many files” and “too big of a deploy” kind of problem? To solve it, I had to deploy multiple times, iteratively adding files as I went.

1 Like

I had the same problem as @bob11; it was due to one mp4 file that was 87.7 MB. Uploading a smaller file of less than 2 MB fixed the issue.

When I uploaded many small svg and js files, I also noticed my cycles being burned. I did not hit the limit, but I definitely saw the cycles going down. Therefore I would guess it is a “too many files” issue.

I also got the error “exceeded the cycles limit for single message execution”.
I posted multiple [Nat8] arrays of around 1 MB each. The limit was hit at around 200 MB and is easily reproducible.
Is there a configuration for this cycles limit?
I think this limit makes a lot of sense in security-critical applications where you want to block brute-force attacks and so on. However, when uploading images, the limit is reached extremely fast.

Thanks a lot for any advice!
@diegop

1 Like

Can you use Blob instead?

I tried it, but it results in the same error. I’m wondering how the cancan project did it, as they also stream the upload with [Nat8] array chunks. However, it would be great to know the formula behind the per-call cycle restrictions. Maybe there is a way to avoid it, e.g. by sending smaller chunks?

This restriction may be lifted once deterministic time slicing is implemented. I would love for a DFINITY engineer to give us more insight into what deterministic time slicing will do for the cycles limit.

2 Likes

Just ran into this error: “Exceeded the cycles limit for single message execution.”

What is your canister doing with the arrays when it receives them?

My guess is that your canister is storing them in a growing data structure, and after you store 200 such arrays the garbage collection becomes so expensive that it exceeds the cycles limit. That’s why you have to store them as Blob and not as [Nat8]: for the garbage collector, a 1 MB Blob is one object, whereas a 1 MB [Nat8] is a million objects.

The easiest fix is to receive the arrays as Blob right away.
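
For what it’s worth, a minimal Motoko sketch of the receiving side (names are illustrative, not taken from cancan or the asset canister); since Blob is represented as vec nat8 in Candid, the caller’s wire format does not need to change:

```motoko
// Minimal sketch: accept upload chunks as Blob and keep them as Blob, so the
// garbage collector sees one object per chunk instead of a million Nat8 boxes.
import Buffer "mo:base/Buffer";

actor {
  let chunks = Buffer.Buffer<Blob>(0);

  // Candid `vec nat8` decodes directly into a Blob parameter.
  public func uploadChunk(chunk : Blob) : async Nat {
    chunks.add(chunk);
    chunks.size() // number of chunks stored so far, for the client's bookkeeping
  };

  public query func totalBytes() : async Nat {
    var total = 0;
    for (c in chunks.vals()) { total += c.size() };
    total
  };
}
```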

No. The limit is system-wide.

4 Likes

At the same limit or at a different limit?

I find it surprising if true and would like to understand what your canister does with the Blobs.

1 Like

Ah, that might be it; makes perfect sense! I was still storing them as a [Nat8] array. I will try storing them as Blobs then!

Many thanks for the help!

If easy to arrange (e.g. the Blobs are never deleted and don’t need fancy memory management), you could also store the blobs directly in stable memory using the ExperimentalStableMemory library and get them out of the way of the GC altogether.

This would also make upgrades less likely to run out of cycles.
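
In case it helps, a rough sketch of that approach with ExperimentalStableMemory (illustrative names; it assumes blobs are written once and never deleted, and for brevity the index of offsets is not itself kept stable):

```motoko
// Rough sketch: append each Blob to stable memory and keep only small
// (offset, size) records on the heap, so the payloads never touch the GC.
import ExperimentalStableMemory "mo:base/ExperimentalStableMemory";
import Buffer "mo:base/Buffer";
import Nat64 "mo:base/Nat64";

actor {
  let pageSize : Nat64 = 65536;
  stable var end : Nat64 = 0;                  // next free byte in stable memory
  let index = Buffer.Buffer<(Nat64, Nat)>(0);  // (offset, size) for each stored blob

  // Grow stable memory until it can hold `needed` bytes in total.
  func ensureCapacity(needed : Nat64) {
    let requiredPages = (needed + pageSize - 1) / pageSize;
    let currentPages = ExperimentalStableMemory.size();
    if (requiredPages > currentPages) {
      ignore ExperimentalStableMemory.grow(requiredPages - currentPages);
    };
  };

  // Store a blob and return its id.
  public func store(blob : Blob) : async Nat {
    let size = blob.size();
    ensureCapacity(end + Nat64.fromNat(size));
    ExperimentalStableMemory.storeBlob(end, blob);
    index.add((end, size));
    end += Nat64.fromNat(size);
    index.size() - 1
  };

  // Read a previously stored blob back out of stable memory.
  public query func load(id : Nat) : async Blob {
    let (offset, size) = index.get(id);
    ExperimentalStableMemory.loadBlob(offset, size)
  };
}
```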

1 Like