Increased Canister Smart Contract Memory

Hey Folks!

This project is our guinea pig for what public roadmap projects will look like, and the next 7 days will have a lot of activity surrounding it.

  1. Today, August 25 - Community Conversation on Increased Canister Storage

  2. Thursday, August 26 - Draft proposal posted on the forum for review. We will post to the developer forum thread a markdown doc (hosted on the GitHub repo for NNS proposals) that explains the project’s design and intent, so people can see what the project is about and how we intend to achieve the desired goal.

  3. Wednesday, Sep 1 - This is the nuanced part: I will submit to the NNS a proposal (without a binary that upgrades the IC) asking the wider IC community to vote on whether the Foundation and wider community should continue working on, and ultimately deploy, the changes for increasing smart contract canister storage. We are essentially using the NNS’s voting mechanisms to let the community express themselves.

  4. Friday, Sep 3 - The NNS proposal has a 48-hour voting window, so the decision will be finalized by then.

There will be two possible outcomes from this:

a. Proposal passes - if it passes, then the project team will submit an NNS proposal to actually upgrade the IC. In the case of Increased Canister Storage, this will happen relatively quickly since a lot of the groundwork has already been done, so the actual changes will be light. This is an exception. Many times in the future, if a proposal to “work on this version of the plan” passes, it may take weeks or months to have an implementation ready.

b. Proposal fails - I think this is very unlikely, but it is worth considering that this means the community does NOT want this change. In that case, we will not continue to work on this plan, but potentially go back to the drawing board and discuss it further with the community.

Please note: in order to let the community truly express themselves, the DFINITY Foundation will vote as late as possible. We know lots of people’s neurons follow the DFINITY Foundation, so we want to give the community time to breathe.


Our Community Conversation on Increased Canister Storage with @akhilesh.singhania goes live in a few min:

Join the conversation to be able to participate in our live Q&A :tada:


Will DFINITY or the ICA be “banned” from voting on this proposal? How can we ensure the vote actually represents the will of the community if very large players vote?

Not sure if you saw this in my note above.


I don’t want to sound more opinionated or “final” than I actually intend, so I think I should write the facts and the intents:

The facts

  • there is no mechanism to ban anyone from voting
  • I think this is a very popular proposal

The Intent

  • the foundation intends to get as much community feedback as possible
  • the foundation intends to vote last (with the knowledge of what the community voted), but it does intend to vote (as a member of the community)
  • this is very much an interactive and iterative process, so if there are any unintended consequences of this model, we will iterate. Not set in stone. Maybe for more controversial proposals, we can try something different.

As we discussed earlier, we are posting our plan for Increased Canister Storage. You can read our intent, design decisions, trade-offs, caveats, and rollout plan.

As Diego mentioned, the current plan is that on Wednesday September 1, Diego will submit an NNS proposal for the community to vote whether the final version of the plan should be implemented.

Please give us feedback, so we can improve and give the IC the best possible plan to increase smart contract canister stable memory.

NNS Proposal: Add 64-bit stable memory API

Main authors: @ulan , @akhilesh.singhania


Currently canisters are effectively limited to 4GiB of storage. This is because stable memory uses the 32-bit addressing scheme and when a canister is upgraded, its Wasm memory is wiped.

A number of applications can benefit from access to additional storage without having to be partitioned into multiple canisters. The goal of this proposal is to introduce a stable memory API that allows canisters to address more than 4GiB of memory, so that a canister’s storage is [eventually] bound only by the actual capacity of the subnet. Since the Memory64 proposal is not standardized yet and its implementation in Wasmtime is not production ready, this proposal enables the increase by introducing a new stable memory API.


When stable memory was first designed and implemented, the Memory64 proposal was not standardized yet; it has only recently been implemented in Wasmtime, and that implementation is not production ready. Further, the Multiple Memories proposal is still in Phase 3. With this in mind, stable memory was designed with a 32-bit addressing scheme so that it could eventually be replaced by native Wasm features.

Supporting 64-bit Wasm memory presents a number of performance challenges for the Internet Computer. Currently, Wasmtime can efficiently eliminate bounds checks for 32-bit memory accesses by using guard pages. A similar optimization is not possible for 64-bit memory accesses, making them more expensive. Besides that, more optimization work around memory.grow() is needed before the implementation is production ready.

Since bringing Memory64 support to the IC may take a while and some canisters need large memory now, we propose an extension of the stable memory API to 64-bit instead.


We propose to add four new functions to the System API that mirror the existing 32-bit functions:

  • ic0.stable64_write: (offset: i64, src: i64, size: i64) -> ()
    • Copies the Wasm memory region specified by src and size to the stable memory starting at the given offset. Note that this API uses 64-bit addressing for the Wasm memory even though at the moment the Wasm memory only supports 32-bit addressing. This is done to keep the possibility of supporting 64-bit Wasm memory open in the future.
  • ic0.stable64_read: (dst: i64, offset: i64, size: i64) -> ()
    • Copies the stable memory region specified by offset and size to the Wasm memory starting at the given address dst.
  • ic0.stable64_size: () -> (page_count: i64)
    • Returns the number of 64KiB pages in the stable memory as a 64-bit integer. Note that it would be possible for this function to continue to return a 32-bit integer as the 32-bit version of the API does. With the page size of 64KiB, a 32-bit integer could address up to 256 TiB which could be sufficient for a very long time. However, it was felt that there should be a clear distinction between the 32-bit and 64-bit versions of the API and having this API return a 64-bit integer should not have any negative impact.
  • ic0.stable64_grow: (additional_pages: i64) -> (old_page_count: i64)
    • Tries to grow the memory by additional_pages pages containing zeroes. If successful, returns the previous size of the memory (in pages). Otherwise, returns -1.
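To make the proposed semantics concrete, here is a hedged Rust sketch that models the four functions against an in-memory buffer. The `StableMemory` type and its method shapes are illustrative only (the real System API functions are Wasm imports taking raw pointers, and the replica implementation differs); the page size and grow/trap behavior follow the descriptions above.

```rust
/// Illustrative model of the proposed 64-bit stable memory API.
/// `StableMemory` is a hypothetical name for this sketch; it is not
/// the actual replica implementation.
const PAGE_SIZE: u64 = 64 * 1024; // stable memory pages are 64KiB

struct StableMemory {
    data: Vec<u8>,
    max_pages: u64, // rollout cap, e.g. 8GiB initially per the plan
}

impl StableMemory {
    fn new(max_pages: u64) -> Self {
        Self { data: Vec::new(), max_pages }
    }

    /// ic0.stable64_size: number of 64KiB pages, as a 64-bit integer.
    fn stable64_size(&self) -> i64 {
        (self.data.len() as u64 / PAGE_SIZE) as i64
    }

    /// ic0.stable64_grow: grow by `additional_pages` zeroed pages.
    /// Returns the previous page count, or -1 on failure.
    fn stable64_grow(&mut self, additional_pages: u64) -> i64 {
        let old_pages = self.data.len() as u64 / PAGE_SIZE;
        let new_pages = match old_pages.checked_add(additional_pages) {
            Some(n) if n <= self.max_pages => n,
            _ => return -1,
        };
        self.data.resize((new_pages * PAGE_SIZE) as usize, 0);
        old_pages as i64
    }

    /// ic0.stable64_write: copy a Wasm-memory region (modeled as a
    /// slice) into stable memory at `offset`; out of bounds traps.
    fn stable64_write(&mut self, offset: u64, src: &[u8]) -> Result<(), &'static str> {
        let end = offset.checked_add(src.len() as u64).ok_or("trap: overflow")?;
        if end > self.data.len() as u64 {
            return Err("trap: out of bounds");
        }
        self.data[offset as usize..end as usize].copy_from_slice(src);
        Ok(())
    }

    /// ic0.stable64_read: copy a stable-memory region into Wasm memory
    /// (modeled as a mutable slice); out of bounds traps.
    fn stable64_read(&self, dst: &mut [u8], offset: u64) -> Result<(), &'static str> {
        let end = offset.checked_add(dst.len() as u64).ok_or("trap: overflow")?;
        if end > self.data.len() as u64 {
            return Err("trap: out of bounds");
        }
        dst.copy_from_slice(&self.data[offset as usize..end as usize]);
        Ok(())
    }
}
```

Note how `stable64_grow` mirrors Wasm’s `memory.grow`: the caller learns the old size on success and gets -1 on failure, so a round trip of grow/write/read never silently truncates.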

As the specification repo of the Internet Computer is not open source yet, please see the proposed diff here.

Backwards compatibility

In order to ensure a smooth transition and upgrade path, we allow canisters to use the 32-bit and 64-bit versions interchangeably up to 4GiB. In other words, both versions operate on the same stable memory. As soon as the size of the stable memory grows beyond 4GiB, the 32-bit versions cease to work. Calling them will result in a trap.
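As a hedged illustration of this rule (names are hypothetical, and only the memory size matters here), each 32-bit call can be modeled as a guard on the current size of the shared stable memory before doing any work:

```rust
// Sketch of the backwards-compatibility rule: the 32-bit and 64-bit
// APIs share one stable memory, but every 32-bit call traps once the
// memory has grown beyond 4GiB. `StableMem` is an illustrative name.
const FOUR_GIB: u64 = 4 * 1024 * 1024 * 1024;

/// Simplified stand-in for a canister's stable memory: the guard only
/// needs to know its current size in bytes.
struct StableMem {
    size_bytes: u64,
}

impl StableMem {
    /// Guard applied to the 32-bit calls (stable_read, stable_write,
    /// stable_size, stable_grow): trap if the shared memory already
    /// exceeds what 32-bit offsets can address.
    fn check_32bit_access(&self) -> Result<(), &'static str> {
        if self.size_bytes > FOUR_GIB {
            Err("trap: stable memory exceeds 4GiB; use the 64-bit API")
        } else {
            Ok(())
        }
    }
}
```

The guard compares against the total memory size rather than the requested offset, so a mixed-API canister fails loudly on its first 32-bit call past the threshold instead of silently reading a truncated view.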


The main risk is canisters mixing the 32-bit and 64-bit functions after the stable memory grows beyond 4GiB. We somewhat mitigate the risk by ensuring that 32-bit functions will trap in such a case so that the canister execution stops instead of continuing with a wrong result.

Alternatives Considered

  1. One alternative is to introduce a completely new 64-bit stable memory that is disjoint from the existing 32-bit memory. While it is a cleaner design, it would complicate canister upgrades because canisters would need to copy the existing state from the 32-bit stable memory to 64-bit stable memory.
  2. Another alternative we considered was to say that once a canister uses the 64-bit version of the API, any use of the 32-bit API will result in a trap. While this would allow canisters to more easily detect problems when switching between APIs, we felt that it may overly complicate the implementation.
  3. Another alternative is to wait until the Memory64 and Multiple Memories proposals are production ready. Then the stable memory can be represented as one of the multiple 64-bit memories removing the need for the new System API.

Rollout plan

When introducing a new functionality to any production environment, there are two types of risks that should be managed:

  • Regardless of all the testing done before the rollout, there is still a risk of the new functionality introducing some bugs in production.
  • Regardless of all the initial feedback gathered about the new API, using it in production may reveal some shortcomings requiring adjustments to the API impacting the canisters that already depend on the API.

Hence, the rollout plan for this proposal would be the following:

  • Even though the new API allows canisters to address the entire capacity of the subnet, the stable memory of a given canister will initially be capped at 8GiB.
  • The API will be marked as experimental, and use of the API in essential canisters will be strongly discouraged. This way, the community can feel encouraged to make and accept future proposals that break existing canisters using the API rather than supporting deprecated APIs, thereby keeping the API as clean and easy to understand as possible.
  • After we gain confidence in the API and in its implementation, over subsequent NNS proposals, we will gradually increase the size of the stable memory and mark it as no longer being experimental.

There was a question about how we will test this feature before we deploy it to production. I will write a small text here about how we do testing in general. I will also ask the testing team to write a more detailed post to provide more details.

  • As we use Rust, we rely heavily on its unit testing framework for writing unit tests.
  • Then we have an e2e integration testing framework where we can bring up a simplified version of the Internet Computer on a single machine and validate a number of properties.
  • Next we have the reference implementation of the Internet Computer which is used to validate a number of additional properties.
  • Next we have a number of testnets where we deploy fully functioning Internet Computers to further test in an environment as close to production as possible.

We have a CI / CD pipeline that runs various tests using the above mechanisms. We run some tests on each PR as it is merged and we run some tests on an hourly basis, some on a nightly basis, etc.


Thanks! All sensible and well done. I endorse this proposal! (I should probably get a neuron that people can follow if they like my endorsements.)

I think an even better argument for returning i64 here is that the memory.size Wasm instruction will return an i64 in the memory64 extension, and the stable memory API should stay close to the wasm instructions it stands in for.

I like the word “yet” in this sentence.

Your prose sounds as if accessing the first 4GB will trap if the memory is larger than 4GB, which I think is good. But the diff adds the check

if offset+size > 2^32 then Trap

This probably should say

if |es.wasm_state.stable_mem| > 2^32 then Trap

in stable_read, stable_write and stable_size.

I expect soon there will be a demand for a way for canisters to find out how much memory they can use (besides trying). But I don’t mind that not being part of the API, as there is no such functionality for real WebAssembly memories, and sticking close to the native Wasm experience is probably useful.


Thank you for providing this information. I’d really like to learn more about the testnet environment. Hopefully there will be a lot of detail on this topic in the test team’s post.


The Foundation actually has a dedicated testing team, so we have asked them to help document and write more about this for folks. Thanks for the feedback!


pack it up, we are good to ship then :wink:

You totally should!


Oh, and one more issue with the proposed spec: the code for stable_grow must return -1 if the new size would be above 2^32 bytes. The current code would allow a canister using the 32-bit API to grow its memory above 4GB (and then start trapping in every call).
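The suggested fix can be sketched as a guard in the 32-bit stable_grow path (illustrative names; the 64KiB page size is from the spec, so 2^32 bytes is 65536 pages):

```rust
// Sketch of the fix suggested above: the 32-bit stable_grow must
// return -1 rather than grow the shared memory beyond 2^32 bytes,
// which would otherwise make every subsequent 32-bit call trap.
const PAGE_SIZE: u64 = 64 * 1024;
const MAX_32BIT_PAGES: u64 = (1u64 << 32) / PAGE_SIZE; // 65536 pages

/// 32-bit stable_grow over a memory tracked as a page count.
/// Returns the old page count, or -1 if growing would push the
/// memory past the 4GB the 32-bit API can address.
fn stable_grow_32(current_pages: &mut u64, new_pages: u32) -> i32 {
    let requested = *current_pages + new_pages as u64;
    if requested > MAX_32BIT_PAGES {
        return -1;
    }
    let old = *current_pages;
    *current_pages = requested;
    old as i32
}
```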


In all seriousness, thank you for taking the time to read in earnest, Joachim


@ulan @akhilesh.singhania


Thanks @akhilesh.singhania for your detailed technical explanation of this change. Thanks @diegop for your follow-through and for involving different teams. I do think this proposal, with the increased canister storage, adds significant value to the development of the IC.

I agree, in part, with @lastmjs on the voting issue with the Dfinity Foundation. That is, I believe the Dfinity Foundation should abstain from voting (even if the proposal then fails), even though there is no mechanism to ban the Dfinity Foundation from voting.

Democracy, even liquid democracy, assumes (nay, requires) that all voting participants are equally informed about all issues surrounding the topic they are voting on. In software development, it is NOT sufficient to just show a diff to voters who do not have access to the source code, versus others (in the Dfinity Foundation and others bound by NDAs) who do. In asking for such a vote, we are inadvertently creating different classes of voters: one class that is not completely informed (unless the source code is open sourced) and one class that is informed through access to the source code. By not providing access to unit and integration test cases, we are once again creating a divide. We are saying to the folks who do not have access: please trust the ones who do have access in their testing.

I disagree, in part, with @lastmjs in context of how bigger players have bigger say in the voting. This is the underlying structure that I believe that I agreed to when I decided to participate in the IC eco-system. I understood my neuron may not have the same voting power as someone else’s neuron. I am ok with it.

The notion of abstaining from a vote should certainly not be alien; especially since the source code is expected to be open-sourced at some time.


Regarding alternative 1: isn’t this fairly easy to route around via an upgrade? Upgrade a 32 bit canister to include export routines to move data to a trusted backup canister, upgrade to a 64 bit canister with an empty memory that has rehydrate functions to pull data back. Then upgrade and remove the hydration functions?

If it is cleaner, it may be worth making existing 32-bit canisters jump through this while we only have a relatively small number.

I’m all for the upgrade, but think we should run out all the options and beat them up. We don’t want to generate a bunch of needless complexity.

These are all very reasonable points, and very compelling ones!

I am a big believer in “laying out the Legos on the table and seeing what we can build together.”

With that in mind, there is an additional Lego I want to lay out on the well-reasoned table you’ve set:

I think there is a silent group of people who TRUST and want the foundation to make technical choices (they make this choice by following the neurons). For the foundation to NOT vote would be to rob this group of some weight or say.

Are there people who follow the foundation out of sheer default? Very likely. But I also think there are many earnest folks who express their will through following. I have been thinking a lot about this group and how to make sure they are also heard.

I don’t have all the answers, but I wanted to lay out all the Legos on the table…


Thanks, @diegop for your Lego Block. My concern, per se, is not about liquid democracy. In fact all four of my neurons follow the foundation and therefore I trust the foundation to make the correct decision on my behalf. As a matter of fact, I don’t have the time to go through the code that I am asking to be shown. Again I signed up for liquid democracy when I decided to invest in the IC eco-system. I am ok with it.

However, in our fledgling liquid democracy experiment, it is important to call out issues that need to be called out, for others as well. A potential future resolution could be to NOT bring a proposal forward prior to all the source code being viewable, if approval is required on the code itself.

I anticipate that this issue, while fairly innocuous now, may become quite contentious in the Threshold ECDSA Signature case. This is because the Dfinity Foundation might WANT to keep the source code and the papers private, in the belief that this gives the IC a leg up vis-à-vis competitors. But then how does one get the community to vote on it as an informed citizenry, especially when the community will need to trust this code for IC-BTC integration?


This would be awesome!

Well, for the way Motoko uses stable memory now, e.g. as the backend for the fully automatically managed stable variables, using the new API is pretty simple, but also not very useful, as you still need the stable data in main memory between upgrades, so you are still bound by 4GB.

Changing the implementation of stable variables to, say, load them on demand would allow Motoko to hold more data and have many other benefits (e.g. no out-of-cycles worries when upgrading), but would be a bigger undertaking.

A middle ground is giving developers mostly raw access to stable memory. @claudio is working on that (design doc). 32 vs. 64 won’t make a big difference here, but it’s still WIP.