ic-stable-memory Rust library

Hey everyone, I’m here with a new Rust library again.

It is called ic-stable-memory and it basically lets you use stable memory as main memory.

With this library you can create stable variables - variables that live directly in stable memory and do not require the usual serialization/deserialization routine during canister upgrades. You can also use stable collections (there are two of them at the moment: a stable vec and a stable hashmap), which likewise live purely in stable memory and can hold as many elements as the subnet will allow your canister to allocate.
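To make the idea concrete, here is a toy model in plain Rust. This is NOT the ic-stable-memory API - all names (`StableMemory`, `push`, `get`) are made up for illustration; check the crate's docs for the real interface. The point it sketches: the collection's data lives entirely inside a flat byte region standing in for stable memory, so an upgrade only needs to keep that region around, with no serialization pass.

```rust
/// Simulated stable memory: a flat byte buffer that grows on demand.
/// (Hypothetical stand-in; the real library talks to the IC's stable memory API.)
struct StableMemory(Vec<u8>);

impl StableMemory {
    fn new() -> Self {
        StableMemory(vec![0u8; 8]) // word 0 holds the vec length
    }
    fn write_u64(&mut self, offset: usize, v: u64) {
        if self.0.len() < offset + 8 {
            self.0.resize(offset + 8, 0); // stand-in for growing stable memory
        }
        self.0[offset..offset + 8].copy_from_slice(&v.to_le_bytes());
    }
    fn read_u64(&self, offset: usize) -> u64 {
        let mut buf = [0u8; 8];
        buf.copy_from_slice(&self.0[offset..offset + 8]);
        u64::from_le_bytes(buf)
    }
}

/// "Stable vec" of u64s: length at word 0, elements right after it.
fn push(mem: &mut StableMemory, v: u64) {
    let len = mem.read_u64(0);
    mem.write_u64(8 + (len as usize) * 8, v);
    mem.write_u64(0, len + 1);
}

fn get(mem: &StableMemory, i: usize) -> u64 {
    mem.read_u64(8 + i * 8)
}

fn main() {
    let mut mem = StableMemory::new();
    push(&mut mem, 10);
    push(&mut mem, 20);
    // All state is inside `mem.0` - nothing lives on the heap that would
    // need serializing before an upgrade.
    assert_eq!(get(&mem, 0), 10);
    assert_eq!(get(&mem, 1), 20);
    assert_eq!(mem.read_u64(0), 2); // length
    println!("ok");
}
```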

Remember - this is still version 0.0.1, so it is a bit limited, a bit unoptimized and a bit buggy. But I encourage you to give it a shot anyway. This is also a pretty complex piece of software (at least for me), so any help is greatly appreciated: PRs, design proposals, bug reports or any other feedback.

Also, I published some articles (on this new cool purely web3 blogging platform called papy.rs) to make it easier to understand what this library is and how to use it.

Hope you’ll like it.
Have a nice day!


Fantastic work. I will consider using this in our token standard, looks to be exactly what we need.


Well done! I was looking for something like this and ended up adapting the BTreeMap implemented by the DFINITY team to use stable memory: GitHub - victorcastro89/Ic-stable-storage: Rust Stable Storage Implementation
Original repo: ic/rs/stable-structures/src at master · dfinity/ic · GitHub


Thanks! Feel free to reach out.

Thank you!
Your work also looks really good. I tried to implement a BTreeMap, but found it kinda tricky at the moment and gave up.

Do you think it is possible to somehow integrate the code you have into my library?

Thanks a lot for taking the time to write those posts in addition to the implementation. I find this very helpful.

Would it be practical to implement certified data structures on top of this as well?

A B+ tree would be more useful than a B-tree IMO.



This is what I’m thinking of right now. Such a solution could bring a seamless data certification developer experience, but I need to do some more research before trying to implement it.

There is a thing in Ethereum called Verkle trees. I would like to try implementing them, but I don’t know anything about this type of cryptography. If someone could assist me with the theory behind it, that would be awesome.

I will take a look. How are you allocating/deallocating the memory? Does it grow as needed, or can you set boundaries? If a variable is deleted, does that create fragmentation, or do you reallocate accordingly?
One major problem I see is how to allocate and deallocate memory.


And many thanks for sharing this piece of code and the blog posts as well!



Yeah, just as with regular data structures, there’s a large design space of authenticated data structures (that’s the general term for what we’d call certified data structures) with different trade-offs. It would be great if someone had a resource comparing the different authenticated data structures. I’m not sure Verkle trees are the best structure to start with.
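For readers new to the term, here is a minimal sketch of the core idea behind authenticated data structures: a Merkle tree commits to a whole list of values with a single root hash, so a canister can certify just the root and later prove membership of any leaf. This is a toy illustration in plain Rust using std's `DefaultHasher` - a real design would use a cryptographic hash like SHA-256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy hash - stands in for a cryptographic hash function.
fn h<T: Hash>(v: &T) -> u64 {
    let mut s = DefaultHasher::new();
    v.hash(&mut s);
    s.finish()
}

/// Fold leaf hashes pairwise until a single root hash remains.
fn merkle_root(leaves: &[&str]) -> u64 {
    let mut level: Vec<u64> = leaves.iter().map(|l| h(l)).collect();
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|p| if p.len() == 2 { h(&(p[0], p[1])) } else { p[0] })
            .collect();
    }
    level[0]
}

fn main() {
    let root = merkle_root(&["a", "b", "c", "d"]);
    // Any change to any leaf changes the root - which is exactly what a
    // certified/authenticated structure relies on.
    assert_ne!(root, merkle_root(&["a", "b", "c", "x"]));
    println!("ok");
}
```

The design-space question above is real: Merkle lists, Merkle search trees, and Verkle trees all make different trade-offs between proof size, update cost, and implementation complexity.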


There is a segregated free-list based memory allocator. When you ask it to allocate a new memory block of some size, it first searches its free list; if there is no free block of the requested size, it tries to find any bigger free block. If there is such a bigger block, it splits it in two. If there is no such bigger block, it tries to call stable_grow(1). If the call succeeds, it repeats the previous steps. If the call fails, it returns OutOfMemory, to which you can respond programmatically (e.g. spawn another canister to scale horizontally).
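The allocation steps described above can be sketched in plain Rust over a simulated free list. All names here (`Allocator`, `OutOfMemory`, `MAX_PAGES`) are illustrative, not the library's actual types, and `grow` is a stand-in for `stable_grow(1)`:

```rust
#[derive(Debug, PartialEq)]
struct OutOfMemory;

const PAGE: usize = 65_536; // one stable memory page (64 KiB)
const MAX_PAGES: usize = 4; // pretend subnet limit, just for the demo

struct Allocator {
    free: Vec<(usize, usize)>, // (offset, size) free blocks
    pages: usize,
}

impl Allocator {
    fn new() -> Self {
        Allocator { free: vec![], pages: 0 }
    }

    /// stable_grow(1) stand-in: add one page, or fail at the limit.
    fn grow(&mut self) -> bool {
        if self.pages == MAX_PAGES {
            return false;
        }
        self.free.push((self.pages * PAGE, PAGE));
        self.pages += 1;
        true
    }

    fn allocate(&mut self, size: usize) -> Result<usize, OutOfMemory> {
        loop {
            // 1. Look for a free block of exactly the requested size.
            if let Some(i) = self.free.iter().position(|&(_, s)| s == size) {
                return Ok(self.free.swap_remove(i).0);
            }
            // 2. Otherwise split any bigger free block in two.
            if let Some(i) = self.free.iter().position(|&(_, s)| s > size) {
                let (off, s) = self.free.swap_remove(i);
                self.free.push((off + size, s - size));
                return Ok(off);
            }
            // 3. Otherwise try to grow by one page and retry the steps above.
            if !self.grow() {
                return Err(OutOfMemory);
            }
        }
    }
}

fn main() {
    let mut a = Allocator::new();
    let b1 = a.allocate(100).unwrap();
    let b2 = a.allocate(100).unwrap();
    assert_ne!(b1, b2); // distinct blocks
    // A request bigger than everything the "subnet" can give fails gracefully,
    // which is the point where you could e.g. spawn another canister.
    assert_eq!(a.allocate(MAX_PAGES * PAGE + 1), Err(OutOfMemory));
    println!("ok");
}
```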

You can’t set artificial growth boundaries right now, but it seems like a good idea to add such a feature. Thanks!

When you delete a variable, its memory block is freed. The new free block is added back to the free list (and possibly joined with its free neighbors). So yes, there is a bit of fragmentation, but I need to add some tests to understand how much impact it has.
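The freeing-with-coalescing step can be sketched like this in plain Rust (again illustrative only, not the library's internals): a freed block goes back into a sorted free list and is merged with any touching neighbors, which is what keeps fragmentation in check.

```rust
/// Insert a freed block (offset, size) into a free list, merging it with
/// adjacent free neighbors. Hypothetical helper, not the library's API.
fn free_block(free: &mut Vec<(usize, usize)>, off: usize, size: usize) {
    free.push((off, size));
    free.sort_unstable();
    let mut merged: Vec<(usize, usize)> = Vec::new();
    for &(o, s) in free.iter() {
        match merged.last_mut() {
            // Previous block ends exactly where this one starts: join them.
            Some(last) if last.0 + last.1 == o => last.1 += s,
            _ => merged.push((o, s)),
        }
    }
    *free = merged;
}

fn main() {
    // Blocks [0,100) and [200,300) are free; freeing [100,200) in the middle
    // joins all three into one contiguous block [0,300).
    let mut free = vec![(0, 100), (200, 100)];
    free_block(&mut free, 100, 100);
    assert_eq!(free, vec![(0, 300)]);
    println!("ok");
}
```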


Can someone explain to me and ideally point to documentation explaining the current limits of stable memory?

I was under the impression that stable memory was still limited to 8 GB per canister.

I assume this might just be an artificial limit by Motoko? Does the Rust CDK not also impose this limit? And where can I go to see how much stable memory a subnet has available to all canisters?


You’re correct. I don’t think it’s in the docs, but it is in the code.

Same case for the subnet capacity.

Do you know the purpose of this 8 GB limit?
Now it looks like my title is overpromising.

UPD: updated the title and articles

Following up: when I saw this thread yesterday, I realized the 8 GB capacity is too hard to find in the docs, so I am making a PR to update them.


We started with 4 GB just to accommodate the use case of upgrading canisters, and I think we are just cautious about increasing it too much because it’s always hard to put a restriction back, and so far no one is close to using the 8 GB.


I think this previous update from the DFINITY team working on execution (Wasm, memory) would help give context:


Can we start lifting the limit? Keeping it artificially low is kind of preventing me from using it in Sudograph. I would like to be able to tell people how much storage they can use in Sudograph; that’s a major limitation for people. I would also love to see a schedule so we know when to expect the limits to be lifted. If we can really get into the hundreds of GB, then this might be an all-in solution for Sudograph and any other library willing to embrace stable memory data structures.

Keeping it low might be a self-fulfilling prophecy, as people know they can’t use it for much thus perhaps they don’t.


ic-stable-memory promoted to v0.1.0!


  • new safer & easier stable variables API
  • SHashSet collection
  • SBinaryHeap collection
  • bug fixes