Motoko, Array, Memory

The main purpose of my tests is to predict memory allocation in Motoko.

I am running a local replica using the following command: dfx start --no-artificial-delay --clean

Array of Nat8’s

I am creating an array of 1_000_000 Nat8 numbers, so I expect it to take 1_000_000 * 1 byte = 1_000_000 bytes, i.e., around 1 MB in memory.

Source code

The main part of the source code: var store: [var Nat8] = Array.init<Nat8>(initialSize, 0);
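For reference, a minimal sketch of what such a test canister could look like (untested; the Prim.rts_* calls are the runtime counters behind the rts_* fields shown below, and the prim import path may differ between moc versions):

import Prim "mo:⛔";
import Array "mo:base/Array";

actor Test {
  let initialSize = 1_000_000;
  var store : [var Nat8] = Array.init<Nat8>(initialSize, 0);

  public query func getSize() : async Nat { store.size() };

  // Expose the runtime-system counters reported in the results below.
  public query func getCanisterMemoryInfo() : async {
    rts_version : Text;
    rts_memory_size : Nat;
    rts_heap_size : Nat;
    rts_total_allocation : Nat;
    rts_reclaimed : Nat;
    rts_max_live_size : Nat;
  } {
    {
      rts_version = Prim.rts_version();
      rts_memory_size = Prim.rts_memory_size();
      rts_heap_size = Prim.rts_heap_size();
      rts_total_allocation = Prim.rts_total_allocation();
      rts_reclaimed = Prim.rts_reclaimed();
      rts_max_live_size = Prim.rts_max_live_size();
    }
  };
}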

Execution steps:

  1. dfx deploy
  2. dfx canister call test getCanisterMemoryInfo
    Result:
(
  record {
    rts_max_live_size = 4_000_016;
    rts_memory_size = 8_192_000;
    rts_total_allocation = 4_000_140;
    rts_heap_size = 4_000_052;
    rts_reclaimed = 88;
    rts_version = "0.1";
  },
)
  3. dfx canister call test getSize
    Result:
(1_000_000)
  4. dfx canister status test
    Result:
Canister status call result for test.
Status: Running
Controller: rwlgt-iiaaa-aaaaa-aaaaa-cai
Memory allocation: 0
Compute allocation: 0
Freezing threshold: 2_592_000
Memory Size: Nat(BigUint { data: [8364189] })
Balance: 4_000_000_000_000 Cycles
Module hash: 0x4ae83d59334f4a4ca501742de29e580e1390a468225fa6db4b53310c80ef21b0

Observations:

  1. rts_memory_size is 8_192_000 bytes
  2. Memory Size from command line call is 8_364_189 bytes

So the main question is: why does Array<Nat8> take the same amount of memory as Array<Nat64>?


Corrected question:

A 1_000_000-element Array<Nat8> (source) and Array<Nat64> (source) take the same amount of memory, ~8 MB…
Why does Array<Nat8> take so much memory?

The array representation has to be uniform to support generics. Hence [Nat8] has the same memory layout as other arrays. If you want a compact bytes array (and don’t need random access or mutation), you can use type Blob.

(We’ve been discussing specialised array representations, but this hasn’t been a priority yet, and they would induce some overhead into array accesses.)
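For example, a compact million-byte value can be built like this (a sketch using the base library's Blob.fromArray; note that the intermediate [Nat8] array is still allocated during the conversion):

import Blob "mo:base/Blob";
import Array "mo:base/Array";

// A Blob stores roughly 1 byte per element, unlike [Nat8]'s word per element.
let bytes : Blob = Blob.fromArray(Array.tabulate<Nat8>(1_000_000, func(_) { 0 }));

// Random access or mutation requires converting back to an array:
let asArray : [Nat8] = Blob.toArray(bytes);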


Thanks for the quick response.

Follow-up question:

Another test I have is here

The idea is to:

  1. Initialize the empty array: var store: [Nat] = [];
  2. Append a value to it in a loop:
Iter.iterate<Nat>(Iter.range(1, count), func(x, _index) {
    store := Array.append<Nat>(store, Array.make(x));
});

So I want to append 10_000 elements to the array.

Here are the results:

  1. dfx canister call test appendData 10000
(
  10_000,
  record {
    rts_max_live_size = 41_212;
    rts_memory_size = 201_129_984;
    rts_total_allocation = 200_941_792;
    rts_heap_size = 41_248;
    rts_reclaimed = 200_900_544;
    rts_version = "0.1";
  },
)
  2. dfx canister status test
Canister status call result for test.
Status: Running
Memory Size: Nat(BigUint { data: [201303567] })

Observations:
Based on your reply, 10_000 elements should allocate 10_000 * 8 bytes = 80_000 bytes in memory, but what we see is that the canister now takes ~200_000_000 bytes.

  1. Can you please shed some light on the rts_* values?
  2. Will the canister pay for all those 200_000_000 bytes per second?

Thanks

First of all, let me point out that constructing an array by repeated concatenation is not something you should do in production code. Since both operands need to be copied for each append, the cost of doing this is quadratic in the size of the final array.

Assuming you want an immutable array, a more appropriate way is to create the complete array as mutable, initialise it via mutation, and then freeze it.
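For example, a linear-time version of the append loop above could look like this (a sketch using the base library's Array.freeze and Array.tabulate):

import Array "mo:base/Array";

// Allocate once, fill by mutation, then freeze: linear instead of quadratic.
func build(count : Nat) : [Nat] {
  let tmp = Array.init<Nat>(count, 0);
  for (i in tmp.keys()) { tmp[i] := i + 1 };
  Array.freeze(tmp)
};

// Or, equivalently, in one step:
func build2(count : Nat) : [Nat] {
  Array.tabulate<Nat>(count, func(i) { i + 1 })
};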

As for the rts values: memory_size is the current size of the Wasm memory array (mostly the same as the canister memory size). This can only grow, never shrink in Wasm, even if most of it is unused. However, unused/unmodified canister memory ought to have no cost on the IC.

The other values come from here. The two interesting ones are probably heap_size, which is the actual size of the current Motoko heap, i.e., the heap memory in use, and max_live_size, which is the largest heap size that has remained so far after a GC. The other numbers are cumulative totals of bytes allocated and collected so far, which are probably less interesting.

So your program really only uses 40K of live memory in the end. (An immutable array takes up 4 bytes per element.)

Edit: But the 200M intermediate memory size is a consequence of your loop and its quadratic cost. Before Motoko gets to run GC, you are allocating 10_000 arrays with an average size of 5_000 elements, i.e., 20K bytes each. So the heap grew to 20K * 10_000 = 200M before GC happened at the end of the message. And of course you are paying for this temporary memory usage, just as for the cycles all the copying costs.


Thank you @rossberg

Immutable array - clear.

Let's say I have a mutable array of Nats.
I create it:

var store: [var Nat] = Array.init<Nat>(10_000, 0);

Then fill it with some values.
Later I would like to increase its size by N and fill it with more values.

What would be the most efficient code to do that?

Arrays have a fixed size that cannot be increased. If you cannot preallocate an array of suitable size then you’ll need to create a new one and copy over.

Growable arrays are a useful abstraction, but you can build them as a data structure on top of primitive arrays, e.g., like vectors in C++. Something like (untested):

class Table<T>(n : Nat, init : T) {
  var it : [var T] = Array.init<T>(n, init);

  public func get(i : Nat) : T { it[i] };
  public func set(i : Nat, x : T) { it[i] := x };
  public func size() : Nat { it.size() };
  public func grow(m : Nat) {
    let n = it.size();
    // Array.tabulateVar builds a mutable array from a generator function
    // (Array.init only takes a constant initial value, not a function).
    let a = Array.tabulateVar<T>(n + m, func(i) {
        if (i < n) { it[i] } else { init }
    });
    it := a;
  };
  public func setAnywhere(i : Nat, x : T) {
    let n = it.size();
    if (i >= n) { grow(2 * (i + 1) - n) };
    it[i] := x;
  };
}
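For instance (hypothetical values), the class above could be used like this:

let t = Table<Nat>(4, 0);
t.setAnywhere(10, 42); // index 10 is out of range, so the table grows first
assert (t.get(10) == 42);
assert (t.size() >= 11);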

The Buffer library gives you something similar already, IIRC. src.
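For example, the append loop from the earlier test could be rewritten with Buffer roughly like this (a sketch; Buffer grows its backing array in amortised fashion, so the whole loop is linear rather than quadratic):

import Buffer "mo:base/Buffer";
import Iter "mo:base/Iter";

func appendData(count : Nat) : [Nat] {
  let b = Buffer.Buffer<Nat>(count); // initial capacity, avoids regrowing
  for (x in Iter.range(1, count)) { b.add(x) };
  b.toArray()
};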


So if we are sending a big data structure from one canister to another, we probably want to convert it to a blob first? And then reconstitute it on the other end?

No, that’s not necessary. When sending a message, the data gets serialised anyway, and the format is different (and generally more compact) than in the heap. Blob and [Nat8] actually get serialised to exactly the same wire format.
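To illustrate with a hypothetical pair of methods: both of these signatures have the same Candid wire type, since blob is shorthand for vec nat8:

actor {
  public func sendA(data : [Nat8]) : async () {};
  public func sendB(data : Blob) : async () {};
  // In the Candid interface, both parameters are vec nat8 (i.e., blob).
}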


You have saved me 3 hours. May the karma gods smile upon you today.


A bit late here, but when a canister is at rest, which number are we actually paying for on the IC? Heap, Memory Size, something else?


As @rossberg said, I guess we are paying for rts_max_live_size.

I interpreted rts_max_live_size as the largest size ever recorded. For instance, say we have something like:

func load(n : Nat) {
  foo := Array.tabulate<Nat8>(n, func(_) { 123 });
};

func delete() {
  foo := [];
};

The way I thought it works was: starting from nothing, if we call load(1_000_000), then

rts_max_live_size → 4_000_000
rts_heap_size → 4_000_000

Next, we call delete(), which GCs at the end of the message

rts_max_live_size → 4_000_000
rts_heap_size → 0 + other memory bits…

Then if we call load(100)

rts_max_live_size → 4_000_000
rts_heap_size → 400 + other memory bits…

Borrowing chat text from another engineer here who knows (based on the public replica code):

We charge for executed instructions and dirtied pages after execution, but there is also a storage fee that is taken independently from execution, to prevent the IC from becoming free permanent replicated storage for someone's backups. The size of the fee is proportional to the total amount of memory occupied by the canister.

(@roman-kashitsyn wrote that in another context; borrowed with permission.)

Additionally, Motoko's GC is a copying collector (for now, by default). That collector has its own policy for managing from-space/to-space memory, for which you are paying cycles as well.

Some new things could mitigate that cost, and help control it:

  • Using another GC policy (the new moc version optionally supports a compacting GC, but I don't know myself how that would impact the cycle cost; certainly more work will continue to focus on improving GC overhead in various ways with respect to cycle cost). See the dfx.json sketch after this list.
  • A new API for stable memory that permits a Motoko dev to manage memory manually, if they so choose. That’s still a draft idea, but it’s related to the larger memory management discussion.
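For instance, one way to opt into the compacting GC is to pass the flag through dfx.json (a sketch; it assumes moc accepts a --compacting-gc flag and that dfx forwards the Motoko canister's args field to moc, and the paths are hypothetical):

{
  "canisters": {
    "test": {
      "type": "motoko",
      "main": "src/test/main.mo",
      "args": "--compacting-gc"
    }
  }
}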

Finally, my interpretation is that this is the relevant memory usage check (modulo what I say above about the copying collector).


Thank you so much @matthewhammer !


Thank you @matthewhammer
To ∞ & beyond!

Thanks, Matthew. So to highlight something from his reply: my statement from my own answer earlier, that unused/unmodified canister memory ought to have no cost on the IC, was not actually correct. The IC will still charge you a storage fee for the entire memory_size. I'm sorry for spreading wrong information before!

We have been discussing extending the IC with some way for an application to signal that a certain part of the memory is “unused” (nulled out) and can hence be eliminated from storage (effectively, providing a sort of shrink through the backdoor). But that does not exist yet.

For Motoko that means that you should be even more careful not to create excessive garbage in a message, since that might temporarily blow up the heap size but Motoko has no way to “return” it.


@rossberg any news regarding “providing a sort of shrink for memory”?
I mean, currently we have to pay for memory_size.
Is there a way to shrink this memory size?
Maybe a canister upgrade will help? Like a restart of a PC 🙂

There has been some recent discussion on Discord suggesting that uploading a chunked Blob is twice as fast as uploading Nat8 arrays. I'm currently writing some code to test this myself, but the theory is that allocating the array in Motoko may be slower even if the serialisation across the wire is faster. I'm assuming this is the same for cross-canister calls, but it would be great to get more validation.
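For anyone testing along, here is a sketch of the Blob-typed upload endpoint that theory suggests (hypothetical names; the point is just that the parameter is Blob rather than [Nat8], so deserialisation does not have to expand the bytes into a word-per-element array):

import Buffer "mo:base/Buffer";

actor Upload {
  let chunks = Buffer.Buffer<Blob>(0);

  public func uploadChunk(chunk : Blob) : async () {
    chunks.add(chunk);
  };
}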
