I had to do an upgrade today and I hit the default 3.2GB wasm memory limit. I upped it a bit and still no dice. After sweating a bit and swapping out some Tries for Maps-v9, I figured out that snapshots were live (thanks @berestovskyy and team! You gave me much more peace today than I’ve had in past upgrades) and felt free to mess around a bit more.
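(In case it helps anyone else: having snapshots available is what made the experimenting feel safe. A minimal sketch of taking one before an upgrade, assuming dfx 0.23+ and the same my_canister name used below; the canister has to be stopped first:)
dfx canister --network ic stop my_canister
dfx canister --network ic snapshot create my_canister
dfx canister --network ic start my_canister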
Finally, I just upped the max memory to over 4GB:
dfx canister --network ic update-settings my_canister --wasm-memory-limit 4021225472
This worked, but I saw some strange things.
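(The three dumps below are from canister status; a sketch of the call, assuming the same canister name as above:)
dfx canister --network ic status my_canister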
Settings before upgrade:
Memory allocation: 0
Compute allocation: 0
Freezing threshold: 2_592_000
Memory Size: Nat(2364426472)
Balance: 3_593_596_313_527 Cycles
Reserved: 0 Cycles
Reserved cycles limit: 5_000_000_000_000 Cycles
Wasm memory limit: 3_821_225_472 Bytes
Module hash: 0x1bb62d741135b86b5df1ed58eec7300964757556311113aa2ac76c3619ea48b0
Number of queries: 2_440_269
Instructions spent in queries: 20_232_971_890_820
Total query request payload size (bytes): 5_742_780_745
Total query response payload size (bytes): 58_511_363_261
Log visibility: controllers
Note: I was a bit confused by the 2.3GB. We do have a ton of files in here, but by my count they should only be about 200MB.
During upgrade:
Memory allocation: 0
Compute allocation: 0
Freezing threshold: 2_592_000
Memory Size: Nat(5451802398)
Balance: 3_482_936_096_948 Cycles
Reserved: 0 Cycles
Reserved cycles limit: 5_000_000_000_000 Cycles
Wasm memory limit: 4_021_225_472 Bytes
Module hash: 0x427c7279b53457544fb421461299a79f6e7580eda842563895e36d988b67e345
Number of queries: 2_440_269
Instructions spent in queries: 20_232_971_890_820
Total query request payload size (bytes): 5_742_780_745
Total query response payload size (bytes): 58_511_363_261
Log visibility: controllers
Note: How did the memory size get to 5_451_802_398 (~5.45GB) when my limit was 4_021_225_472 (~4.02GB)?
Now that it is running, the status is the same, with the memory size still over the 4GB limit:
Status: Running
Controllers: 5vdms-kaaaa-aaaap-aa3uq-cai a3lu7-uiaaa-aaaaj-aadnq-cai ahx36-fo6xi-5exvo-4xqpz-hyafh-54vpt-tor6v-tz2xh-vwfgx-kwc5u-6qe osqd3-qbnmc-cgcvo-mwxag-a5pcv-27dad-k67vk-ed45o-4yqe7-mvbaj-yae
Memory allocation: 0
Compute allocation: 0
Freezing threshold: 2_592_000
Memory Size: Nat(5485356830)
Balance: 3_481_216_290_053 Cycles
Reserved: 0 Cycles
Reserved cycles limit: 5_000_000_000_000 Cycles
Wasm memory limit: 4_021_225_472 Bytes
Module hash: 0x427c7279b53457544fb421461299a79f6e7580eda842563895e36d988b67e345
Number of queries: 2_440_295
Instructions spent in queries: 20_237_981_170_901
Total query request payload size (bytes): 5_742_787_947
Total query response payload size (bytes): 58_511_383_385
Log visibility: controllers
I ran the new Motoko metrics (very, very cool, Motoko team!) and I’m trying to figure out how I got to such a big allocation.
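(For reference, a sketch of the call, assuming these stats come from the compiler-generated __motoko_runtime_information query in moc 0.12+:)
dfx canister --network ic call my_canister __motoko_runtime_information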
(
  record {
    heapSize = 395_138_868 : nat;
    maxLiveSize = 391_987_088 : nat;
    rtsVersion = "0.1";
    callbackTableSize = 256 : nat;
    maxStackSize = 2_097_152 : nat;
    compilerVersion = "0.12.1";
    totalAllocation = 3_781_262_476 : nat;
    callbackTableCount = 0 : nat;
    garbageCollector = "incremental";
    reclaimed = 3_386_123_608 : nat;
    logicalStableMemorySize = 1 : nat;
    stableMemorySize = 5_225 : nat;
    memorySize = 3_959_422_976 : nat;
  },
)
A heap size of 400MB is much more in line with what I’d expect. But why such a huge memorySize? Did I have an old upgrade that ran big, and now every upgrade fudges a bit to make sure it has room in stable-variable space in case the whole allocation is full of Motoko stable variables?
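(One sanity check on the numbers, if I’m reading the stats right: totalAllocation - reclaimed = 3_781_262_476 - 3_386_123_608 = 395_138_868, which is exactly the reported heapSize, so the live-heap accounting adds up; it’s the memorySize of 3_959_422_976 that stays large.)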
Finally, is there any way to shrink the allocation?
Second finally, will that callbackTableCount tell me the number of outstanding calls keeping my canister from shutting down?