New and Improved Rust CDK: First-class support for Rust Canister Development

I think we could add a new utility, cargo-did or cargo-candid, responsible for both building the canister and generating the did file.
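
For context, the two steps such a utility might wrap look roughly like this today (the package and file names below are placeholders, the export_candid test is just one common workaround for writing the .did, and the cargo candid subcommand is purely hypothetical):

# today, roughly: build the Wasm, then write the .did separately
$ cargo build --target wasm32-unknown-unknown --release -p my_canister
$ cargo test export_candid   # e.g. a test that writes my_canister.did from candid's export_service! output

# what a cargo-candid subcommand could collapse this into (hypothetical):
$ cargo candid -p my_canister -o my_canister.did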

1 Like

The size limit is likely a constraint of the canister execution environment, so I’m afraid we may not be able to bypass it in the Rust CDK.
Let me confirm with my colleagues on the execution team.

2 Likes

Thanks for the suggestion. We will carefully consider the options and provide consistent and ergonomic tools.

1 Like

Any update on this? Getting that 2MiB limit raised is extremely important to Azle/Sudograph.

1 Like

Let me ping @lwshang

Hi @lastmjs !

The only improvement that I have in mind is GZIP-compressed canister modules.
I implemented them a couple of weeks ago (I found quite an elegant implementation that is backward-compatible with all the existing tooling). I can confirm they work on the mainnet:

$ curl -o ledger.wasm.gz -L https://download.dfinity.systems/ic/0618091c39002acee22d507bc7d3e79c0f173ba8/canisters/ledger-canister_notify-method.wasm.gz

$ shasum -a 256 ledger.wasm.gz 
27dd88e6070e0081e90e6a98c8d264dc205fe14f7c115b8df0acf3903cb826e7  ledger.wasm.gz

# Install the ledger on the mainnet...

$ dfx canister --network ic info 4d3iv-hqaaa-aaaag-qaf6q-cai
Controllers: 7czmi-pyaaa-aaaag-qaciq-cai jlcmz-cojlk-zdm46-mshzl-dtlre-ricph-khpzu-tqxrk-qo3ow-7jdsw-tae
Module hash: 0x27dd88e6070e0081e90e6a98c8d264dc205fe14f7c115b8df0acf3903cb826e7

Let me know if gzip compression helps in your case. Note that there is still a 10MiB limit on the uncompressed canister size.
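
In case it is useful, producing the compressed module is just a plain gzip of the build output; a minimal sketch, assuming your Wasm sits in the usual cargo target directory (names are placeholders):

$ gzip -c -9 target/wasm32-unknown-unknown/release/my_canister.wasm > my_canister.wasm.gz
$ shasum -a 256 my_canister.wasm.gz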

7 Likes

I’m sure this would help; 10MiB is much better than 2MiB. Is this already live and in dfx? It would be nice if the developer experience didn’t have to change.

And can you shed light or point me to info on why there is such a low limit?

1 Like

Yes, I installed the compressed ledger above using DFX 0.8.3, which is rather old.

{
  "canisters": {
    "ledger": {
      "type": "custom",
      "candid": "ledger.private.did",
      "wasm": "ledger.wasm.gz"
    }
  }
}

You mean the 2MiB limit?

I believe the primary reason for this limit is that we want to keep consensus blocks relatively small for better latency and efficiency. Reasonable message sizes are also important for XNet communication performance.

The ultimate solution would be to chunk large messages into pieces and allow partial message transfers. That feature introduces a lot of complexity and requires quite a few changes across the system; it is not scoped yet.

2 Likes

What’s the issue with chunking up the Wasm binary into multiple messages when first deploying? And will deterministic time slicing help (maybe that’s what you’re talking about)?

The question is whether this feature should work only for canister install messages or whether we want all messages in the system to be chunkable. Implementing chunking just for canister modules is not particularly hard, but it will need changes in a lot of tooling.

By the way, do you know what contributes to your canister size? 2MiB looks tiny for bloated modern software, but you can pack a lot of logic into 2MiB. This article has some excellent advice: Shrinking .wasm Size - Rust and WebAssembly
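
For reference, the usual starting point from that article is the release profile plus a wasm-opt pass; something like the following (the exact settings are a matter of taste, and file names are placeholders):

# Cargo.toml
[profile.release]
lto = true
opt-level = 'z'
codegen-units = 1

$ wasm-opt -Oz -o my_canister_opt.wasm my_canister.wasm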

5 Likes

I’m dealing with generated code for Azle and Sudograph, so proc macros and similar code-generation machinery. It’s all a bit unoptimized right now, but there is still a limit that will eventually be reached. Each project takes the user’s defined types and expands them to provide all sorts of functionality, so it just adds up.

I think with my own optimizations plus gzip, we should be good for a while. As an easy path forward, could dfx automatically gzip? Otherwise it’s probably just another thing we’d have to add to the installation process outside of the dfx build system.

3 Likes

I think the gzip support should be promoted more widely; this is the first time I’ve heard about it. I’ll open a PR on the docs repo.

3 Likes

I’m trying this out now. It seems to work when deploying to the IC, but does it not work locally? I am on dfx 0.9.3

That’s correct; the latest DFX has not yet shipped with a replica supporting gzip-compressed canisters. That’s one reason I haven’t announced the support for GZIP compression anywhere yet (except for this thread).

1 Like

What do you think about dfx just automatically gzipping the Wasm binary so the developer never has to do it manually?

1 Like

It depends on whether you care about build reproducibility. The replica exposes the SHA-256 hash of the compressed module (it stores exactly the bytes that you sent), and compression is not deterministic in general. I would not want DFX to compress automatically for the canisters I work on: we use a custom build procedure anyway, so we could just as well add a compression step there (one extra line).
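
For what it’s worth, that one extra line can also be kept reproducible for a fixed gzip version by stripping the original filename and timestamp from the gzip header (that is what -n does); a sketch, with placeholder file names:

$ gzip -cn9 my_canister.wasm > my_canister.wasm.gz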

@roman-kashitsyn I just upgraded from dfx 0.9.3 to 0.10.2-btcbeta.0, and now locally my Wasm files are too big. I’ve tried deploying the gzipped versions just like I’ve been doing in production and like you explained here, but they don’t work locally on the new version of dfx.

So 0.9.3 let me install Wasm binaries over 2MiB locally, but 0.10.2-btcbeta.0 doesn’t, and it won’t let me install a gzipped version with dfx deploy. Looking here: Large web assembly modules | Internet Computer Home, it says that dfx deploy isn’t supported for gzipped Wasm files.

Why is that? It was really nice just changing the file extension in dfx.json; now we’ll have to run a command with a manual path to the binary every time. Will dfx deploy be supported in the future?
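
For reference, the manual route I understand that page to describe looks something like this (canister name and path are placeholders, and the flags are from memory, so double-check against the linked docs):

$ dfx canister install my_canister --mode reinstall --wasm path/to/my_canister.wasm.gz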

2 Likes

So, if you want to test performance without waiting for a response, here’s what worked for me:

Use the latest cdk-rs 0.5.2 and install DFX 0.10.1.
Next, open your dfx.json, set the version to 0.10.2-btcbeta.0, and run dfx start.
Then switch your dfx.json back to 0.10.1 and build/deploy with the gzipped file.
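
By “set the version” I mean the version pin at the top of dfx.json, roughly like this (only the version field shown):

{
  "dfx": "0.10.2-btcbeta.0"
}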

I’m also getting an error with the gzipped file on 0.10.2-btcbeta.0.

Rick

It’s all working for me; I just want to be able to use the .wasm.gz files with the dfx deploy command, is all.

1 Like