New and Improved Rust CDK: First-class support for Rust Canister Development

What’s the issue with chunking up the Wasm binary into multiple messages when first deploying? And will deterministic time slicing help (maybe that’s what you’re talking about)?

The question is whether this feature should work only for canister install messages or whether we want all messages in the system to be chunkable. Implementing chunking just for canister modules is not particularly hard, but it will need changes in a lot of tooling.
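To make the idea concrete, here is a minimal sketch of what chunking a module for upload could look like. The 2 MiB figure mirrors the current ingress message cap mentioned in this thread; the function names are illustrative, not part of any shipped dfx or IC API:

```python
# Sketch: split a Wasm module into ingress-sized chunks.
# MAX_CHUNK mirrors the ~2 MiB message limit discussed in this thread;
# the helper names are illustrative, not a real API.

MAX_CHUNK = 2 * 1024 * 1024  # 2 MiB (assumption)

def chunk_module(wasm_bytes: bytes, max_chunk: int = MAX_CHUNK) -> list[bytes]:
    """Split the module into pieces no larger than max_chunk."""
    return [wasm_bytes[i:i + max_chunk] for i in range(0, len(wasm_bytes), max_chunk)]

def reassemble(chunks: list[bytes]) -> bytes:
    """The receiving side concatenates the chunks back into the module."""
    return b"".join(chunks)
```

A 5 MiB module would split into three chunks (2 MiB + 2 MiB + 1 MiB) and reassemble byte-for-byte; the hard part in practice is not the splitting, but threading support for it through the install protocol and the tooling.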

By the way, do you know what contributes to your canister size? 2 MiB looks tiny for bloated modern software, but you can pack a lot of logic into 2 MiB. This article has some excellent advice: Shrinking .wasm Size - Rust and WebAssembly


I’m dealing with generated code for Azle and Sudograph, so proc macros and similar code generation type things. It’s all a bit unoptimized now, but there is still a limit that will be reached. Each project takes the user’s defined types and expands them to provide all sorts of functionality, so eventually that just adds up.

I think with my own optimizations plus gzip, we should be good for a while. Maybe for an easy path forward, could dfx automatically gzip? Otherwise it’s just another thing we have to add to the installation process outside of the dfx build system probably.


I think the gzip support should be promoted more widely; this is the first time I'm hearing about it. I'll open a PR on the docs repo.
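For reference, the compression step itself is tiny. A minimal sketch in Python (the file path and helper name are illustrative; the point is that the replica accepts the gzipped bytes as-is, and dfx.json can reference the .wasm.gz file):

```python
import gzip
from pathlib import Path

def gzip_wasm(wasm_path: str) -> str:
    """Compress a built canister module for upload as a .wasm.gz file."""
    data = Path(wasm_path).read_bytes()
    # mtime=0 pins the gzip header timestamp so rebuilds of the same
    # module produce identical compressed bytes.
    compressed = gzip.compress(data, mtime=0)
    out_path = wasm_path + ".gz"
    Path(out_path).write_bytes(compressed)
    return out_path
```

The equivalent shell one-liner is just `gzip` on the build output; the only design choice worth noting is pinning the timestamp if you care about reproducible module hashes.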


I’m trying this out now. It seems to work when deploying to the IC, but does it not work locally? I am on dfx 0.9.3

That’s correct; the latest DFX has not yet shipped with a replica supporting gzip-compressed canisters. That’s one reason I haven’t announced the support for GZIP compression anywhere yet (except for this thread).


What do you think about dfx just automatically gzipping the Wasm binary so the developer never has to do it manually?


It depends on whether you care about build reproducibility. The replica exposes the SHA256 hash of the compressed module (it stores exactly the bytes that you've sent), and compression is not deterministic in general. I would not want automatic compression by DFX for canisters I work on: we use a custom build procedure anyway, so we could just as well add a compression step there (one extra line).
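The reproducibility point can be seen concretely: the gzip format embeds a modification timestamp in its header, so compressing identical input at different times yields different compressed bytes, and hence a different module hash as reported by the replica. A small demonstration (the minimal Wasm header stands in for a real module):

```python
import gzip
import hashlib

module = b"\x00asm\x01\x00\x00\x00"  # stand-in for a real canister module

# Same input, different gzip header timestamps -> different bytes.
a = gzip.compress(module, mtime=0)
b = gzip.compress(module, mtime=1)
assert a != b

# The replica hashes exactly the bytes it receives, so the reported
# module hash differs too.
assert hashlib.sha256(a).hexdigest() != hashlib.sha256(b).hexdigest()

# Pinning the timestamp makes the compression step reproducible.
assert gzip.compress(module, mtime=0) == gzip.compress(module, mtime=0)
```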

@roman-kashitsyn I just upgraded from dfx 0.9.3 to 0.10.2-btcbeta.0, and now locally my Wasm files are too big. I’ve tried deploying the gzipped versions just like I’ve been doing in production and like you explained here, but they don’t work locally on the new version of dfx.

So dfx 0.9.3 let me install Wasm binaries over 2 MiB locally, but 0.10.2-btcbeta.0 doesn't, and it won't let me install a gzipped version with dfx deploy either. Looking at Large web assembly modules | Internet Computer Home, it says that dfx deploy isn't supported for gzipped Wasm files.

Why is that? It was really nice to just change the file extension in dfx.json; now we'll have to run a command with a manual path to the binary every time. Will dfx deploy be supported in the future?


So, if you want to test performance without waiting for a response:

1. Use the latest cdk-rs 0.5.2 and install DFX 0.10.1.
2. Open your dfx.json, set the version to 0.10.2-btcbeta.0, and run dfx start.
3. Switch your dfx.json back to 0.10.1 and build/deploy with the gzipped file.

I'm also getting an error with the gzipped file on 0.10.2-btcbeta.0.


It's all working for me; I just want to be able to use the .wasm.gz files with the dfx deploy command.


@roman-kashitsyn I'm working on Kybra, a Python CDK, and unfortunately I just ran into the Wasm binary size limit again, even with gzip. When I include the stdlib for Python using the RustPython VM, the binary after optimization and gzipping is ~10 MiB, and it gets rejected. I hope I can get around this by optimizing somehow, but here I am hitting the limit again as I try to implement the same tech we have on Web2.


I believe the next step will be implementing a chunking protocol for canister module installation. I created an IC feature request for that in May, but there has been a lot of more urgent work on the execution team's plate (DTS & scheduling, firefighting & squeezing perf, bitcoin integration, etc.).


I have been using ss uploader:

It creates chunks of the Wasm module and works very well!


You're able to initialize a canister with this? How large can the Wasm binary be? My understanding is that there's still a ~10 MiB limit even if you chunk the Wasm; is this true?

Is there a public ticket/issue that we can follow? And FYI, if the limit can be increased significantly, it would unlock a major blocker for Kybra, which is allowing the inclusion of the entire Python stdlib: stdlib · Issue #12 · demergent-labs/kybra · GitHub


@roman-kashitsyn Is it true that there is a 10 MiB limit on cross-canister messages? Could you then install a canister with a Wasm binary up to 10 MiB in size if you first chunked the Wasm binary into a canister, and used that canister to call install_code on the management canister?

I thought I read this somewhere in the past, but I’m having a hard time finding that information.


Currently, the inter-canister message limit within a single subnet is 10 MiB, while the limit on messages between subnets is 2 MiB.

This setup is very problematic because it breaks the transparency of XNet communication. Generally, canisters should not need to care about which subnet the destination lives on when they make a call, yet the maximum message size differs. The difference also blocks subnet splitting: canisters that send each other large payloads will break if they are separated onto different subnets.
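The transparency problem can be stated as a tiny sketch: the same payload is accepted or rejected depending on subnet placement, which is exactly the property the limits above create (the constants come from this thread; the helper is illustrative):

```python
INTRA_SUBNET_LIMIT = 10 * 1024 * 1024  # 10 MiB, caller and callee on the same subnet
CROSS_SUBNET_LIMIT = 2 * 1024 * 1024   # 2 MiB, caller and callee on different subnets

def fits(payload_len: int, same_subnet: bool) -> bool:
    """Whether an inter-canister message payload fits the applicable limit."""
    limit = INTRA_SUBNET_LIMIT if same_subnet else CROSS_SUBNET_LIMIT
    return payload_len <= limit
```

A 3 MiB payload fits on the same subnet but not across subnets, so splitting a subnet can turn a working call into a failing one without any code change.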

I’m having a hard time finding that information.

Yes, the size difference is not documented anywhere, mainly because it's a horrible hack that we don't want people to rely on. I'm not sure why we raised the limit for inter-canister messages in the first place; my hunch is that someone really wanted to install a large canister :face_exhaling:


Sorry for the late reply. The code I mentioned creates chunks so you can upload a large Wasm file.