Optimal upload chunk size

A couple of aspects you may want to keep in mind when uploading content to a canister in chunks:

  • As already pointed out above, and somewhat obviously, you want to have multiple (a few, not many) requests in flight at a time. If you make a request to upload a chunk and wait for it to complete before making the next request, you are very much limited by roundtrip latency (i.e. 2 MB every few seconds) instead of block size (4 MB/s).
  • Given that the block size limit is 4 MB and there will virtually always be something else in the block (sometimes messages with higher priority than ingress messages; sometimes simply other ingress messages that get selected before yours), going with a 2 MB payload means you will virtually always be limited to one payload per block, because you cannot fit 2 * 2 MB plus change into 4 MB. So either go with something just below 2 MB, or some other size that, when multiplied, adds up to just below 4 MB. How far under 4 MB is hard to say, as it depends on the subnet and its load; more, smaller payloads are likely to make better use of whatever space is left, but will likely cost more than fewer, larger payloads. (See the sketch after this list.)
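Putting both points together, here is a minimal TypeScript sketch of a concurrency-limited upload loop. The chunk size, the concurrency value, and the `uploadChunk` helper are placeholders, not part of any official API; your canister's actual chunk-upload call would go inside `uploadChunk`. The point it illustrates is simply keeping a few requests in flight at once instead of waiting for each one to complete.

```ts
// Assumed values based on the discussion above: chunks just under 2 MB,
// and a handful of requests in flight at a time.
const CHUNK_SIZE = 1_900_000; // ~1.9 MB, slightly below the 2 MB ingress limit
const CONCURRENCY = 10;       // a few requests in flight, not one at a time

// Placeholder: replace the body with the actual update call on your canister,
// e.g. `await actor.upload_chunk(BigInt(index), chunk);` (hypothetical method).
async function uploadChunk(index: number, chunk: Uint8Array): Promise<void> {
  console.log(`would upload chunk ${index} (${chunk.length} bytes)`);
}

async function uploadInParallel(chunks: Uint8Array[]): Promise<void> {
  let next = 0;
  // Each worker repeatedly grabs the next unsent chunk until none are left,
  // so at most CONCURRENCY requests are in flight at any time.
  const workers = Array.from({ length: CONCURRENCY }, async () => {
    while (next < chunks.length) {
      const index = next++;
      await uploadChunk(index, chunks[index]);
    }
  });
  await Promise.all(workers);
}
```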

Thanks, @yrgg and @free, for your valuable inputs! I now have a clear understanding of what I want to focus on improving. To summarize, I’m aiming to optimize my solution by uploading roughly 10 chunks in parallel at a time, each slightly below 2 MB (around 1.9 MB).
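For completeness, here is a small sketch of the splitting step under those numbers: slicing a payload into ~1.9 MB chunks that can then be handed to a parallel uploader like the one sketched above. The chunk size is just the value discussed in this thread, not a fixed protocol constant.

```ts
const CHUNK_SIZE = 1_900_000; // ~1.9 MB, slightly below the 2 MB ingress message limit

// Split a byte array into chunks of at most `chunkSize` bytes.
function splitIntoChunks(data: Uint8Array, chunkSize = CHUNK_SIZE): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    chunks.push(data.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// Example: a 25 MB payload becomes 14 chunks of at most 1.9 MB each.
const chunks = splitIntoChunks(new Uint8Array(25_000_000));
console.log(chunks.length); // 14
```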


You don’t have to; hundreds of other devs will <3

Would it be feasible to have it integrated into agent-js as a standard feature?

Also, for the official dfx asset canister there’s an npm package to upload and manage your assets: @dfinity/assets on npm.
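A rough sketch of what using that package looks like is below. The constructor options, the `store()` call, and the host/canister-id values are based on my reading of the package’s documented usage and are placeholders; double-check the @dfinity/assets README for the exact API before relying on it.

```ts
import { HttpAgent } from '@dfinity/agent';
import { AssetManager } from '@dfinity/assets';

async function main() {
  const agent = new HttpAgent({ host: 'https://icp-api.io' }); // assumed host
  const assetManager = new AssetManager({
    canisterId: 'YOUR_ASSET_CANISTER_ID', // placeholder asset canister id
    agent,
  });

  // store() handles chunking and uploading the data to the asset canister.
  await assetManager.store(new Uint8Array([/* file bytes */]), {
    fileName: 'hello.txt', // assumed option name; see the package README
  });
}

main().catch(console.error);
```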

I plan to do some maintenance on the package soon, because the dfx asset canister implementation has changed since the last time I worked on it.

If you use a different canister for assets, feel free to check out the package’s code to see, for example, how data is chunked and sent in parallel.