Collections and occupied memory

My question is: can the collection (in my case a HashMap) still occupy the same amount of memory in the canister after it is cleared? Let's say I cleared it by assigning hm := HashMap.HashMap<K,V>(0, equal, hash); but it looks like the data remains in the canister. How do I clean up completely? By calling delete in a loop, or are there other options?
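For reference, a minimal sketch of what I'm doing (Text/Nat here are placeholders for my actual key and value types):

```motoko
import HashMap "mo:base/HashMap";
import Text "mo:base/Text";

actor {
  // Placeholder types; my real map uses a different K/V.
  var hm = HashMap.HashMap<Text, Nat>(0, Text.equal, Text.hash);

  // "Clearing" by replacing the map with a fresh empty one.
  // The old map is now garbage, yet the canister's memory size
  // does not seem to go down.
  public func clear() : async () {
    hm := HashMap.HashMap<Text, Nat>(0, Text.equal, Text.hash);
  };
}
```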

This is expected: a canister can only grow its memory size, never shrink it. So when you use a big value and then drop it, the canister's memory size does not shrink again. Instead, the freed memory is reused for subsequent values, which means the canister's size won't grow again for a while.

This holds for all canisters (Motoko and Rust).


OK, thanks. As for cycles, how is the cost deducted if the canister's memory has been increased but is not actually used?

The system can’t tell whether the space is used; that is internal to the canister.

In essence, you are paying for the high-water mark of memory consumption since the last canister upgrade.


Can the value of (rts_heap_size + 196608) bytes be used to describe rts_memory_size at any point in time?
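For context, these counters come from the Motoko runtime system and can be read via the prim module; a minimal sketch (the memStats endpoint name is made up, the Prim functions are real):

```motoko
import Prim "mo:⛔";

actor {
  // Expose the Motoko runtime's memory counters.
  public query func memStats() : async { heapBytes : Nat; memoryBytes : Nat } {
    {
      heapBytes = Prim.rts_heap_size();     // bytes of live heap
      memoryBytes = Prim.rts_memory_size(); // bytes of allocated Wasm memory
    }
  };
}
```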


Let’s say I want to re-partition a canister every time it fills to a certain capacity. This wouldn’t work if I can’t accurately identify the amount of memory used in the canister: I would have to wait for the previously freed memory to be completely overwritten and exceeded before the reported size moved again, and after this repeats enough times the canister would hit its memory limit.

Are there any system APIs or algorithms one could use to somewhat accurately identify the amount of data stored in stable memory?
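For what it's worth, the Motoko base library's ExperimentalStableMemory module can at least report how many 64 KiB pages of stable memory have been allocated, though that is allocation, not true utilization; a minimal sketch:

```motoko
import StableMemory "mo:base/ExperimentalStableMemory";

actor {
  // Bytes of stable memory currently allocated.
  // Note: this measures allocation, not how much is meaningfully in use.
  public query func stableBytes() : async Nat64 {
    StableMemory.size() * 65536 // size() returns the number of 64 KiB pages
  };
}
```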

One solution I can think of (sketched below): each time a new record/data item is stored, increment a byte counter based on the data type's size, and then decrement it after splitting the canister, based on the data deleted/moved. This would give a relative estimate of the remaining memory in the canister, but it's definitely error-prone and would also require testing the data structures used (i.e., I've heard HashMap can use up a lot of memory).
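A minimal sketch of that counter idea; the entrySize estimate is a made-up placeholder that would need calibration against the real data structure's overhead:

```motoko
import HashMap "mo:base/HashMap";
import Text "mo:base/Text";

actor {
  var hm = HashMap.HashMap<Text, Blob>(0, Text.equal, Text.hash);
  var approxBytes : Nat = 0;

  // Hypothetical per-entry size estimate (key + value + slot overhead).
  // The constants are rough guesses; calibrate by testing.
  func entrySize(k : Text, v : Blob) : Nat {
    k.size() * 4 + v.size() + 64
  };

  public func put(k : Text, v : Blob) : async () {
    // If the key already exists, subtract the old entry first
    // so the counter never double-counts (Nat subtraction traps
    // on underflow, so the bookkeeping must stay consistent).
    switch (hm.get(k)) {
      case (?old) { approxBytes -= entrySize(k, old) };
      case null {};
    };
    hm.put(k, v);
    approxBytes += entrySize(k, v);
  };

  public func remove(k : Text) : async () {
    switch (hm.remove(k)) {
      case (?old) { approxBytes -= entrySize(k, old) };
      case null {};
    };
  };

  public query func usedBytes() : async Nat { approxBytes };
}
```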

An alternative would be to spin up completely new canisters each time you want to scale out, transfer the data from the old canister and split it amongst the new canisters, and then spin down the old canister. (This is now a distributed systems problem.) :sweat_smile:
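A minimal sketch of that direction using a Motoko actor class (all names hypothetical; on mainnet you would also need to attach cycles before instantiating a new canister):

```motoko
// Bucket.mo: a hypothetical per-shard storage canister.
import Array "mo:base/Array";

actor class Bucket() {
  stable var entries : [(Text, Blob)] = [];

  // Receive a chunk of records migrated from the old canister.
  public func ingest(chunk : [(Text, Blob)]) : async () {
    entries := Array.append(entries, chunk);
  };
}
```

From the old canister you would then do something like `let fresh = await Bucket.Bucket();` followed by chunked ingest calls, and finally stop and delete the old canister via the management canister.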