Wasm64 and the expansion of heap memory past 4GB

First off, I’d like to congratulate the team on releasing wasm64 to mainnet at the end of 2024! :tada:

As the post mentions, after the wasm64 release, per-canister heap memory is still limited to 4GB. Many developers haven’t migrated to 64-bit Wasm yet, since much of the underlying data (pointers and other word-sized values) doubles in size when moving from 32-bit to 64-bit.
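To make that concrete, here’s a rough sketch in Rust (the `Entry` type is hypothetical, not from any real canister) showing how word-sized values double when targeting wasm64 instead of wasm32; Motoko heap objects behave roughly similarly, since they consist largely of word-sized fields:

```rust
use std::mem::size_of;

// Hypothetical record type; real canister data will differ, but the pattern
// holds: every pointer/usize doubles from 4 bytes (wasm32) to 8 bytes (wasm64).
struct Entry {
    id: u32,          // 4 bytes on both targets
    payload: Vec<u8>, // pointer + length + capacity = 3 word-sized values
}

fn main() {
    // wasm32-unknown-unknown: usize = 4, Vec<u8> = 12, Entry = 16
    // wasm64:                 usize = 8, Vec<u8> = 24, Entry = 32
    println!("usize:   {} bytes", size_of::<usize>());
    println!("Vec<u8>: {} bytes", size_of::<Vec<u8>>());
    println!("Entry:   {} bytes", size_of::<Entry>());
}
```

The payload bytes themselves stay the same size; it’s the per-object bookkeeping (pointers, lengths, headers) that doubles.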

I’ve heard through the grapevine that the plan is to raise heap memory to 6GB, 8GB, and then beyond, but I expect the big migration of developers from 32-bit to 64-bit memory will happen when 8GB of heap memory is available to canisters.

What are the expected timelines for reaching 6GB and 8GB of heap memory?

What can we expect to see two months from now, by mid-2025, and by end-2025 in terms of heap memory expansion? Is the timeline similar to the original expansion of stable memory, or are there additional caveats to consider?

My personal interest in asking the questions above is that I’d like to store more fine-grained data on-chain in my Motoko canister, but I want to leave enough breathing room to handle data migrations and potential surges in application usage.

For reference, my application currently stores ~200MB, and I’d ideally like to increase the amount of historical data kept by a factor of 72, which means I’d be storing ~14.4GB after these adjustments. However, given ICP’s strong developer growth last year, I’d like to keep a ~5X storage buffer most of the time, and to know that a 70GB+ heap will be available in the future, or at least know the expected rate of heap size increases.
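For concreteness, the back-of-the-envelope arithmetic behind those numbers (using the figures above, which are estimates rather than measurements):

```rust
fn main() {
    let current_gb = 0.2;                 // ~200MB stored today
    let with_history = current_gb * 72.0; // 72x more history ≈ 14.4GB
    let with_buffer = with_history * 5.0; // ~5x headroom ≈ 72GB of heap
    println!("target ≈ {with_history:.1}GB, with buffer ≈ {with_buffer:.0}GB");
}
```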

8 Likes

One major difference from stable memory is that we can allow large stable memory but still limit the amount of stable memory accessed in a given message. We don’t currently have the ability to limit per-message heap memory access. This means that if a few canisters running concurrently each access a lot of heap memory, they risk putting the node under memory pressure.
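To make the contrast concrete, here is a minimal Rust sketch (illustrative only; the helper functions are hypothetical and error handling is omitted). The `ic0.stable64_read` import is from the IC interface spec; it is declared here with `u64` arguments, which are ABI-compatible with the spec’s Wasm `i64`:

```rust
#[link(wasm_import_module = "ic0")]
extern "C" {
    // From the IC interface spec: copies `size` bytes of stable memory
    // starting at `offset` into the canister heap at address `dst`.
    fn stable64_read(dst: u64, offset: u64, size: u64);
}

/// Stable memory: every access is an explicit system call, so the runtime
/// can count (and cap) how many bytes a single message reads or writes.
fn read_stable_chunk(offset: u64, len: usize) -> Vec<u8> {
    let mut buf = vec![0u8; len];
    unsafe { stable64_read(buf.as_mut_ptr() as usize as u64, offset, len as u64) };
    buf
}

/// Heap memory: plain loads/stores on Wasm linear memory. There is no
/// per-access system call to meter, so there is currently no cheap way to
/// limit how much heap a single message touches.
fn sum_heap(data: &[u64]) -> u64 {
    data.iter().sum()
}
```

Because every stable memory access is funneled through calls like this, a per-message limit on bytes accessed can be enforced even when total stable memory is very large; heap accesses never pass through such a hook.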

So when it comes to allowing much larger heaps, it will take longer, since it’s not just a question of bumping the numbers up.

3 Likes

Looking at the current sandboxes across subnets, I’d guess that the memory distribution of canisters is such that you have just a few canisters with a significant amount of memory, and then a long tail of many canisters with small amounts of memory.

I’d imagine this memory distribution is partly to be expected, but it’s also an artifact of the 4GB heap limitation and of design work done in projects before the expansion of stable memory and, now, Wasm64. Many applications such as KongSwap are returning to single-canister architectures that handle more traffic and use larger amounts of memory. And if the heap memory limit is raised in the coming months :crossed_fingers: , I’d expect to see more applications use fewer canisters, but with larger amounts of memory and compute per canister.

While it sounds like extending heap memory to 100+GB requires significant work, what would the immediate impact of bumping canister heap memory up to 8GB be? Would we see issues on any subnets, and what tuning/tweaking might need to be done afterwards? Are there any blockers to a simple doubling of canister heap memory?

On the Motoko side, if we’re really going to restructure the base library and build some robust collection structures, this would be a good time to look at strategies for processing very large heaps without having to access the equivalent amount of memory.

2 Likes

I don’t think we can answer that with certainty at the moment. The biggest risk is probably that if a lot of messages are executing and each touching all 8GB of their canister’s memory, that could cause long rounds and significant degradation in subnet performance (e.g. lots of ingress messages timing out, as with the BoB incidents a few months ago). The first step is to test scenarios like this to get an idea of the impact.

1 Like