Increased Canister Smart Contract Memory


Currently, Canister Smart Contract stable memory storage is capped, due to Wasm limitations, at 4 GB per canister. To improve scaling, a new system API will be offered to canisters that allows them to use as much memory as is available on their subnet (currently 300 GB).


  • Community approved the design via NNS
  • NNS proposals with code updates coming soon

What you can do to help

  • Ask questions
  • Propose ideas


Proposed design to review and vote on.

Key people involved

@akhilesh.singhania @dsarlis @stefan-kaestle @ulan


Relevant Background

Currently, a canister on the IC has two types of storage available to it:

  • A wasm heap, which is constrained to 4 GiB because wasmtime does not yet support the wasm64 specification and hence only offers 32-bit addressing.
  • A stable memory, which is also currently constrained to 4 GiB because it too only has 32-bit addressing.

So a canister can, under normal conditions, store 8 GiB of data. However, when a canister is upgraded, its wasm heap is wiped, so for all practical purposes it only really has access to the 4 GiB of storage in stable memory. In the past we demonstrated a proof of concept of BigMap, which is a solution that enables an application to scale its storage by sharding its data across multiple canisters.
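The 4 GiB figures above follow directly from pointer width: an n-bit address can name at most 2^n bytes. A trivial sketch of the arithmetic (plain Rust, just for illustration):

```rust
// Address-space arithmetic behind the limits discussed above.
// With 32-bit addressing a canister can name at most 2^32 bytes (4 GiB);
// with 64-bit addressing, 2^64 bytes (16 EiB).
pub fn max_addressable_bytes(pointer_bits: u32) -> u128 {
    1u128 << pointer_bits
}

pub const GIB: u128 = 1 << 30;
pub const EIB: u128 = 1 << 60;

fn main() {
    assert_eq!(max_addressable_bytes(32), 4 * GIB);  // wasm32 heap / stable memory cap
    assert_eq!(max_addressable_bytes(64), 16 * EIB); // wasm64 / 64-bit stable API
    println!(
        "32-bit: {} GiB, 64-bit: {} EiB",
        max_addressable_bytes(32) / GIB,
        max_addressable_bytes(64) / EIB
    );
}
```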

Based on discussions with external developers and with developers within the DFINITY foundation, we made the following observations:

  • For a lot of applications that developers are currently trying to build, the 4 GiB of a single canister is not quite enough. However, the capacity of a single subnet (300 GiB) is sufficient for the time being.
  • BigMap is a good solution to scale storage to the capacity of multiple subnets but is not ready for use in production yet.
  • It also appears that BigMap might be too heavy-handed an approach for scaling to the storage capacity of a single subnet and other mechanisms could be designed which are simpler to build and simpler to use.

Based on the above observations, the goal of this feature is to enable canisters to scale to the capacity of a single subnet by expanding how much stable memory they can store. At a high level, this will be done by offering a stable memory API with 64-bit addressing, thereby allowing canisters to address up to 16 EiB of stable memory (probably more storage than will ever be available on a single subnet). The feature also involves verifying that the data structures currently used for managing a canister's stable memory scale appropriately when they store much more than 4 GiB.
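To make the shape of such an API concrete, here is a minimal sketch of 64-bit stable-memory semantics: size and grow operate in 64 KiB pages, and read/write take `u64` offsets. This is a plain in-memory mock for illustration only, not the actual IC system API; the type and method names are invented for this sketch.

```rust
/// Illustrative mock of 64-bit stable memory: grows in 64 KiB pages,
/// with offsets and lengths expressed as u64. NOT the real system API,
/// just a sketch of its semantics.
const PAGE_SIZE: u64 = 64 * 1024;

pub struct MockStableMemory {
    data: Vec<u8>,
}

impl MockStableMemory {
    pub fn new() -> Self {
        Self { data: Vec::new() }
    }

    /// Current size, in 64 KiB pages.
    pub fn size(&self) -> u64 {
        self.data.len() as u64 / PAGE_SIZE
    }

    /// Grow by `new_pages` pages, returning the previous size in pages.
    pub fn grow(&mut self, new_pages: u64) -> u64 {
        let old = self.size();
        self.data
            .resize(self.data.len() + (new_pages * PAGE_SIZE) as usize, 0);
        old
    }

    /// Copy `src` into stable memory at `offset`.
    pub fn write(&mut self, offset: u64, src: &[u8]) {
        let o = offset as usize;
        self.data[o..o + src.len()].copy_from_slice(src);
    }

    /// Copy bytes from stable memory at `offset` into `dst`.
    pub fn read(&self, offset: u64, dst: &mut [u8]) {
        let o = offset as usize;
        dst.copy_from_slice(&self.data[o..o + dst.len()]);
    }
}

fn main() {
    let mut mem = MockStableMemory::new();
    assert_eq!(mem.grow(2), 0); // grow from 0 to 2 pages
    assert_eq!(mem.size(), 2);
    mem.write(100, b"hello");
    let mut buf = [0u8; 5];
    mem.read(100, &mut buf);
    assert_eq!(&buf, b"hello");
}
```

In the real system the backing store is persisted across messages and upgrades; the only change the feature introduces is widening offsets and sizes from 32 to 64 bits.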


I was hoping that canisters would simply have access to more memory through orthogonal persistence. Is there going to be some API that will be required? I think to be of the greatest benefit to developers, the heap of the canister needs to be greatly increased. You should be able to just create data structures and fill them with data, scaling up to the capacity of the subnet.

And in the future…I think it would be great to figure out a way to scale a single canister beyond the bounds of a subnet, but perhaps we can discuss that later.


I believe the “new API” is not a developer-facing API but an API between canisters and the IC, so it is transparent to developers and remains “orthogonally persistent.”


Yes, those are different efforts. This one is about 64-bit memory addressing, which can be handled entirely in the execution layer.

Scaling beyond a subnet would require different skills/components, and remains the goal of the BigMap proposal.

(Clearly, I need to fill out more of these descriptions :wink: )


That would be amazingly beautiful.


I asked about this in another question, but I’m wondering how this will work practically.

For example, what if I want to use a data structure (like HashMap) that isn’t a stable type, so I instead store the data in a stable variable (like Trie)? In the pre-upgrade hook I convert the HashMap to a Trie, and in the post-upgrade hook I convert the Trie back into a HashMap.

If stable memory goes up to 300 GB but non-stable memory stays capped at 4 GB, this wouldn’t really work.

As an aside, how is stable memory currently implemented? How can every canister have 300 GB available on a subnet? Where does this extra RAM come from? Do the subnet nodes have enough to begin with?


I think the 4 GB limit applies to stable memory; this per-canister limit is caused by wasm32’s pointer size. The 300 GB is also stable memory; this subnet limit is caused by the machines: a server has 3 TB of memory, but it also needs to store the replicated copies.


Currently, the wasm heap is capped at 4 GiB, so we cannot expand it further. Once wasmtime stabilises support for wasm64, we could allow heaps to grow beyond 4 GiB as well. However, we would then also have to worry about how a canister with a very large heap upgrades itself (because we currently wipe the heap on upgrades). So supporting wasm64 will open other issues that also need to be addressed.


Extending stable memory to 64-bit addressing is just the first step in the process. After this, more work will be needed. At a high level, one idea for using the expanded stable memory with just 4 GiB of wasm heap is the following:

  • When a canister starts executing a message, it will first have to identify where in the stable memory the relevant block of data resides.
  • It will then copy over the relevant block of data from stable memory to heap.
  • It can now operate on the data on the heap.
  • Once done, it can now copy the updated data back to the stable memory.

Identifying where in stable memory the relevant data lies will involve building some sort of index in the wasm heap or a HashMap structure that can operate over the stable memory API. This part is not being actively worked on currently by the team. We expect that this may not necessarily need system support and could be built purely in the application space.
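The steps above can be sketched in application-level code. Everything here is hypothetical: the heap-resident index mapping keys to stable-memory offsets is the canister's own bookkeeping, and a flat byte array stands in for stable memory. The point is the pattern: locate block via index, copy to heap, mutate, copy back.

```rust
use std::collections::HashMap;

/// Hypothetical application-level block store illustrating the
/// "find block -> copy to heap -> mutate -> copy back" pattern.
/// A Vec<u8> stands in for stable memory; the HashMap is the heap index.
pub struct BlockStore {
    stable: Vec<u8>,                          // stand-in for stable memory
    index: HashMap<String, (usize, usize)>,   // heap index: key -> (offset, len)
}

impl BlockStore {
    pub fn new(capacity: usize) -> Self {
        Self { stable: vec![0; capacity], index: HashMap::new() }
    }

    /// Write a block into "stable memory" and record it in the heap index.
    pub fn put(&mut self, key: &str, offset: usize, data: &[u8]) {
        self.stable[offset..offset + data.len()].copy_from_slice(data);
        self.index.insert(key.to_string(), (offset, data.len()));
    }

    /// Steps 1+2: locate the relevant block via the index, copy it to the heap.
    pub fn load(&self, key: &str) -> Option<Vec<u8>> {
        let &(off, len) = self.index.get(key)?;
        Some(self.stable[off..off + len].to_vec())
    }

    /// Step 4: copy the (possibly mutated) heap copy back to "stable memory".
    pub fn store(&mut self, key: &str, data: &[u8]) {
        let &(off, len) = self.index.get(key).expect("unknown key");
        assert_eq!(len, data.len(), "sketch assumes fixed-size blocks");
        self.stable[off..off + len].copy_from_slice(data);
    }
}

fn main() {
    let mut bs = BlockStore::new(1024);
    bs.put("user:1", 0, b"alice");
    let mut block = bs.load("user:1").unwrap(); // copy stable -> heap
    block.make_ascii_uppercase();               // step 3: operate on the heap copy
    bs.store("user:1", &block);                 // copy heap -> stable
    assert_eq!(bs.load("user:1").unwrap(), b"ALICE");
}
```

A production index would itself need to fit in the 4 GiB heap (or operate directly over stable memory), which is exactly the design question left open above.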


Note that even now, the Internet Identity canister does precisely that: it keeps all relevant data in stable memory and uses the heap only as scratch space. That makes upgrades simpler.

I guess expanding stable memory beyond 4GB means that the Internet Identity service does not need to scale out for space reasons any time soon. (Maybe for throughput.). @kritzcreek will be happy to hear that :slight_smile:


Indeed, that is where we got the inspiration.

I just updated the Summary at the top of this thread with more info, including relevant folks.


Thanks. Are there any documents about the stable memory and heap of a canister? What do devs need to do to use the different memory styles?

Maybe vars declared with the stable keyword will be stored in stable memory.


The way the two memories are used differs depending on whether you are writing in Rust or Motoko.

CC: @claudio for motoko documentation.

In Rust, you can use the heap as a normal wasm heap and the stable memory via the API defined in cdk-rs (dfinity/cdk-rs · GitHub). Note that this will be extended when the 64-bit stable memory extension lands.


Motoko indeed uses stable memory as the backing store for stable variables.

Before an upgrade, the stable variables are serialized from wasm memory to stable memory. After an upgrade, the stable variables are deserialized from stable memory into wasm memory. Motoko doesn’t read or write to stable memory during execution outside of upgrades.

This solution, though simple, is not ideal as the (de)serialization steps can both run out of cycles for large data structures, preventing upgrade when they do.

We are currently exploring support and an API for (almost) raw access to (32-bit) stable memory that we hope to extend to 64-bit stable memory in the future. The idea is to allow both high-level Motoko stable variables and (abstract) stable memory to co-exist in a single canister, so that canisters that prefer to maintain their data in stable memory throughout execution (to avoid the problem of exhausting cycles on upgrade) can do so if they wish, at the conceptual cost of using a much lower-level API.
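The pre-/post-upgrade dance described above can be sketched in plain Rust, with a byte buffer standing in for stable memory. The length-prefixed encoding here is invented purely for illustration; Motoko's real serialization format differs.

```rust
/// Sketch of the stable-variable upgrade dance: pre-upgrade serializes heap
/// state into a byte buffer standing in for stable memory; post-upgrade
/// deserializes it back. The length-prefixed format is illustrative only.
pub fn pre_upgrade(heap_state: &[String]) -> Vec<u8> {
    let mut stable = Vec::new();
    // Count of entries, then (length, bytes) per entry, all little-endian u64.
    stable.extend_from_slice(&(heap_state.len() as u64).to_le_bytes());
    for s in heap_state {
        stable.extend_from_slice(&(s.len() as u64).to_le_bytes());
        stable.extend_from_slice(s.as_bytes());
    }
    stable
}

pub fn post_upgrade(stable: &[u8]) -> Vec<String> {
    let read_u64 =
        |buf: &[u8], pos: usize| u64::from_le_bytes(buf[pos..pos + 8].try_into().unwrap());
    let mut pos = 0;
    let count = read_u64(stable, pos) as usize;
    pos += 8;
    let mut out = Vec::with_capacity(count);
    for _ in 0..count {
        let len = read_u64(stable, pos) as usize;
        pos += 8;
        out.push(String::from_utf8(stable[pos..pos + len].to_vec()).unwrap());
        pos += len;
    }
    out
}

fn main() {
    let state = vec!["alice".to_string(), "bob".to_string()];
    let stable = pre_upgrade(&state);     // runs before the upgrade wipes the heap
    let restored = post_upgrade(&stable); // runs after the upgrade
    assert_eq!(restored, state);
}
```

The failure mode discussed above falls out of this shape directly: both functions walk the entire data set, so their cost grows with state size and can exceed the cycle limit on a single upgrade.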


Hi all, a brief update on this feature.

We have had some discussions about the best way to introduce new APIs on the IC. We want to expose new APIs to let developers experiment with them and to solicit feedback. It is possible that, based on the feedback we get, the API will need some adjustments. And if developers have deployed canisters using the API, then making adjustments will be difficult. So we are planning to come up with a process that allows us to introduce APIs in an experimental phase. I’ll make a separate post about the process after some of the ongoing discussions have concluded, to solicit feedback on it.

I imagine that the community will not want to block the release of this feature / API until we have figured out the best way to do experimental APIs. So the current plan is that when we release this API, we will document it as being in an experimental phase and ask developers not to use it in any critical canisters yet. If, based on feedback, some adjustments to the API are needed, then the existing canisters using it will be impacted. We will of course discuss and announce impending changes so that everybody has an opportunity to upgrade their canisters. We will also have to figure out what to do if someone removes the controller from their canister and then it cannot be upgraded (hopefully no one does that :crossed_fingers:)

I apologise in advance for any difficulties this might cause you. And I hope that you can appreciate that introducing a process to iterate on APIs will allow us all to build a better IC without a lot of cruft. Thank you for understanding.


DFINITY engineer here. In the initial roll-out we will limit the stable memory size to 8 GB. We will increase it in a future release after gathering feedback.


So should increasing the Wasm heap be another roadmap proposal entirely? Increasing stable memory is a good step, I hope that increasing the heap won’t be thought of as less useful or put off for too long. Copying over chunks of stable memory into the heap is still going to be a complication for library authors and developers to deal with, and having a 300 GB heap would be ideal.


@lastmjs : absolutely! Please do not think that we will stop after shipping this feature. This is just a small step in helping developers access tons of storage in an ergonomic way. We absolutely realise that asking developers to copy chunks of memory back and forth between stable memory and the heap is complicated, fraught with footguns, and also inefficient! We are actively discussing designs in this space and I will keep you posted on developments here.


Sounds just swell, thanks!
