Wasm module exceeding maximum allowed functions

Fair enough. I suppose the next best thing would be to at least suggest this in our documentation in case users encounter such issues.

@borovan I would be curious to see what happens if you use the IC CDK optimizer on your wasm binary. Actually, it should be quite similar because it’s using the same underlying binary I think.
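In case it helps, a rough sketch of trying that out (the canister name and paths are just placeholders, and it assumes wasm-objdump from WABT is installed; adjust to your project):

# One-off install of the optimizer, then run it over a canister wasm built by dfx.
# The canister name/path below is a placeholder; adjust to your project.
cargo install ic-cdk-optimizer
ic-cdk-optimizer .dfx/local/canisters/my_canister/my_canister.wasm \
  -o my_canister_opt.wasm
# Compare function counts before/after (needs wasm-objdump from WABT).
wasm-objdump -h my_canister_opt.wasm | grep Function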


That’s just using binaryen right? I don’t think I can take any more wasm tonight, hah. Need booze.

If you compile an empty Motoko file you’ll probably see a lot in there before you even start adding your own code.
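If you want to see it for yourself, something like this should do it (a rough sketch; it assumes wasm-objdump from WABT is on your PATH and uses the moc bundled in your dfx cache, whose flags may differ between versions):

# Compile an empty actor with the moc shipped in the dfx cache,
# then count the functions in the resulting wasm.
echo 'actor {}' > empty.mo
$(dfx cache show)/moc -c empty.mo -o empty.wasm
wasm-objdump -h empty.wasm | grep Function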

Some info on why is here:


Have you looked at SudoDB/Sudograph?


I suspect that with a little bit of work you might be able to define loadMany, loadRange, create, update, etc. once and then provide the entity as an argument to them.

That would reduce the number of functions significantly, and it wouldn’t continue to grow as you add more entities.

Something like this:

import Time "mo:base/Time";

// Phantom type: carries an entity module's (Entity, Metadata, Record)
// types at the type level without holding any runtime data.
type Proxy<A> = {
  #Proxy;
};

module Rarity {
  public type Entity = {
    // ...
  };
  
  public type Metadata = {
    createdAt : Time.Time;
  };
  
  public type Record = {
    name : Text;
  };
  
  public let proxy : Proxy<(Entity, Metadata, Record)> = #Proxy;
};

module Other {
  public type Entity = {
    otherEntityField : Text;
  };
  
  public type Metadata = {
    otherMetadataField : Text;
  };
  
  public type Record = {
    otherRecordField : Text;
  };
  
  public let proxy : Proxy<(Entity, Metadata, Record)> = #Proxy;
};

// Fake Query implementation
class Query<Entity, Metadata, Record>(/*caller : _, store : _, path :_*/) {
  public func loadMany(ids : [Nat]) : [Entity] {
    return [];
  }
};

// Defined once for all entities; the proxy argument only drives type inference.
func loadMany<E, M, R>(proxy : Proxy<(E, M, R)>, ids : [Nat]) : [E] {
  Query<E, M, R>().loadMany(ids);
};

let rarityEntities : [Rarity.Entity] = loadMany(Rarity.proxy, [0, 1, 2, 3]);
let otherEntities : [Other.Entity] = loadMany(Other.proxy, [0, 1, 2, 3]);

Or forget about the wrapper and update Query to take the proxy instead.

I had a look at that, but for our use case it’s way too immature.

We have a lot of moving parts: custom validators based on multiple inputs with complex logic behind them, sanitisers, multiple transaction types.

As an alternative to wasm-opt, you can also try this tool:

I think it uses the Rust walrus library to actually do the optimization.

Not sure how well tested this is, though.

One caveat with all these optimizers is that they won’t optimize any actor classes used by your code (since the wasm for those classes is embedded in the client, and won’t be recognized as wasm by the tools).

In the long term, our best bet would be to tightly integrate one of these tools into the Motoko compiler, or implement our own tree-shaking pass.


Thanks, yeah this is awesome. Already started on a solution like this and we’re down to around 3,300 funcs with the optimiser.

Just a bit of a shame that the number of functions seems to be a much lower barrier than anything else… we’re ok for now anyway.


@dsarlis based on the error I’m getting locally for an application subnet it looks like this limit was increased to 10,000.

I didn’t know I was anywhere near this limit until I tried adding pre/post upgrade hooks and could no longer deploy.

I’m using Rust and my output is heavily optimized already so the only way I’ve been able to get under the limit is by using a different allocator.

I could try and do some refactoring but I’m not sure if that would even help. I do think it would make the code harder to work with though.

I was hoping to get by with a single canister to start but I doubt I can make any more changes without needing to break things up into multiple canisters.

Do you know if anything has changed around the thinking on this limit?


Hey, so what ended up happening with us is that we moved (slowly) to a schemaless DB and also started splitting up the data between the canisters. Took a while, but here’s what our canisters are doing now:
[image: per-canister stats]

Handy little script below if anybody wants it 🙂

Anyway, thanks for all the guidance, we’re in a good place now.

#!/bin/sh

TMPFILE="/tmp/opt.wasm"

# Print size, function count and element count for a wasm file.
stats() {
  SIZE=$(wc -c < "$1")
  FNS=$(wasm-objdump -h "$1" | grep Function | awk '{print $NF}')
  ELEMS=$(wasm-objdump -h "$1" | grep Elem | awk '{print $NF}')
  printf "%-22s: %10s bytes %6d fns %6d elems\n" "$(basename "$1")" $SIZE $FNS $ELEMS
}

# Run wasm-opt over every canister wasm built by dfx, then print its stats.
for FILE in $(find ./.dfx/local/canisters/ | grep "wasm$" | sort)
do
  wasm-opt -O3 "$FILE" -o "$TMPFILE"
  mv -f "$TMPFILE" "$FILE"
  stats "$FILE"
done

We’ll take a look at whether we can remove the limit and instead charge cycles for the increased execution time caused by having many functions defined in a module.


@borovan thanks. I have a similar script so that I can fail fast instead of waiting until I try to deploy to find out I’ve hit the limit. I’m wary that the limit might change though.
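For anyone who wants the same kind of guard, a minimal sketch (the limit value and wasm path here are assumptions; 10,000 matches the application-subnet limit mentioned above and may well change):

#!/bin/sh
# Fail fast if the canister wasm defines more functions than the subnet allows.
LIMIT=10000    # assumed current limit; adjust if/when it changes
WASM="./.dfx/local/canisters/my_canister/my_canister.wasm"    # placeholder path

FNS=$(wasm-objdump -h "$WASM" | grep Function | awk '{print $NF}')
FNS=${FNS:-0}
if [ "$FNS" -gt "$LIMIT" ]; then
  echo "FAIL: $WASM defines $FNS functions (limit $LIMIT)"
  exit 1
fi
echo "OK: $WASM defines $FNS functions (limit $LIMIT)"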

@abk thanks. It would be a good stop-gap solution for those of us who aren’t aware there’s a limit and then suddenly run into it.

Before trying a different allocator (which seems riskier than I’d like), I was able to get down to 10,008 functions.

So long as the increased fees were proportional to the number of functions I would have been happy with that trade-off.


After looking into it some more, it seems like we can safely raise the limit without charging more cycles. So we’ve bumped it up to 50,000 functions and that change should be in the release for next week.


@abk do you know the timing around this?


It looks like this didn’t make it.

Subnets are being updated to 999f7cc6bbe17abdb7b7a1eab73840a94597e363 (bottom commit), but this change came after that (top commit).

Yeah, I should have been clearer: I meant it’ll be in the release that gets elected/blessed (not rolled out) this week.


Hey @abk, at Demergent Labs we’ve just started working on our Python CDK called Kybra. The RustPython VM we’re using seems to be pretty heavy; we’re at like 13,000 functions without optimization.

We really need this limit to be increased, and it sounds like it will be very soon on the live IC. But what version of dfx should I expect to see the limit increase in? We’re basically stuck unless the optimizers work, because these functions are coming from the most basic usage of the RustPython VM.
