Completed: - Bounty 24 - StableBTree Motoko - up to $10k

Maybe try importing the library and using it on the Motoko playground if you think it’s an issue with the local replica and want to reproduce the behavior.


Small update: concerning the limitations observed with the memory manager, the same behavior occurs on mainnet. I didn’t spend more time investigating because:

After discussing with @kentosugama, it seems that DFINITY is working (or will be working) on a unified memory manager implementation so that different custom stable data structures can be used simultaneously (possibly through a change to the ExperimentalStableMemory API). I don’t have much more information about it for now.

Also, the merge request to put the BTree in the standard library is still in draft.


This is really cool! Having a dedicated memory manager in the base module would make it easier to create stable memory collections.

I’m shifting my focus to this now myself, after coordinating with @kentosugama earlier today. I am only now catching up on this thread; I apologize for being late to the discussion.

My first impressions are that this BTree implementation in Motoko will be ideal for testing the more modular version of the (currently experimental) stable memory API, which I’ll explain briefly here. I apologize for revisiting what’s already discussed above.

In the current API, there is conceptually a single monolithic array with low-level read and write operations, and a procedure to grow the array into a larger one. Without more work, this only supports a single (monolithic) data structure.
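Concretely, today’s module can be used roughly like this (a minimal sketch using the real `ExperimentalStableMemory` API; the offsets and data are arbitrary):

```motoko
import StableMemory "mo:base/ExperimentalStableMemory";

// A single, monolithic address space shared by the whole canister.
// `grow` takes a page count (64 KiB pages) and returns the previous size.
ignore StableMemory.grow(1);

// Raw byte-offset reads and writes: nothing prevents two data structures
// from clobbering each other if they pick overlapping offsets.
let payload : Blob = "hello";
StableMemory.storeBlob(0, payload);
let read : Blob = StableMemory.loadBlob(0, 5);
assert read == payload;
```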

As a first implementation to generalize this, I propose offering a class in Motoko that comes with the same API as the current (experimental) module.
Each constructor call would create a distinct (initially-empty) “memory instance”, corresponding to a distinct growable array.

Under the hood, the class would use a dedicated allocator for stable memory to manage the monolithic space and give the abstraction of these distinct arrays, rather than a single shared one (still the reality at that lowest level). As a first implementation of the allocator, we could reimplement this Rust implementation of the idea entirely in Motoko, using the currently-experimental API, with the intention of eventually hiding it entirely behind this new (non-experimental) class.
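To illustrate the shape of the proposal, the class interface might look something like the following. This is purely a sketch of the idea; the name `MemoryInstance` and the exact signatures are my placeholders, not an agreed-upon API:

```motoko
// Hypothetical sketch: each instance behaves like its own growable array,
// while a shared allocator (hidden inside the class) maps its offsets onto
// the single underlying stable memory.
class MemoryInstance() {
  // Pages (64 KiB) currently allocated to this instance.
  public func size() : Nat64 { 0 /* placeholder body */ };
  // Grow this instance by `pages`; returns the previous size, as today.
  public func grow(pages : Nat64) : Nat64 { 0 /* placeholder body */ };
  // Reads and writes use offsets relative to this instance only.
  public func loadBlob(offset : Nat64, len : Nat) : Blob { "" };
  public func storeBlob(offset : Nat64, value : Blob) {};
}
```

Each constructor call (`MemoryInstance()`) would hand out a fresh, initially-empty array, so two stable structures could each hold their own instance without coordinating offsets.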

There will still be the same caveats as today: The experimental API is very low-level, and it’s still possible to clobber yourself within an array. The benefit of the new class would be that two (or more) stable structures would:

  • coexist without clobbering each other
  • not consume any space as “reserve” (no padding space is needed)

Looking more closely at the existing implementation in Motoko by @sardariuss, it seems that this route is exactly what’s already done here.

Based on the comments above and in the repo itself, it seems this approach incurs a significant slowdown.

Perhaps the next step here is to look more carefully at this implementation, with an eye towards improving the performance, if possible. Using an RBTree from base for the buckets is conceptually clean but could be less than optimal in terms of performance, for instance.


Update: After more discussion today with my colleagues, it seems like there’s a strong interest in pursuing the inclusion of this stable memory allocator feature within the Motoko runtime system itself, to leverage the leaner, lower-level performance available there. I’ll be shifting focus to that now, with more updates to follow.


Thanks for the updates.

Now that I think about it, I should have compared the performance of the Motoko implementation with the Rust one. I’m also curious to see what kind of improvements you will make (e.g., for the RBTree).


This PR gives some initial thoughts. Specifically, the design proposal in the markdown file reflects some ideas from last week that are still evolving this week as we continue to discuss tradeoffs of different approaches.

While we want to start with something similar to the Rust version, we are also looking towards generalizations of it, though that will come later.

The main generalization of interest is supporting GC-based reclamation of “stable regions” for reuse by others. That consideration is affecting some of our current thinking, since we want to support it eventually, somehow.


The design/ document in the PR is now “stable” (no pun intended).

I’ve moved on to trying to implement the primitives described there.

In the meantime, we can discuss the design and the way it could be used for building things like the StableBtree described in this thread. If anyone sees issues or has questions, please let us know.


@sardariuss First of all, I would like to thank you for a VERY cool job!

The only reason I couldn’t use this in my project is that, with [Nat8], my data takes up too much space in stable memory. I found this branch, and at first glance it is exactly what I would like. How reasonable is it to import this branch into a project? Or are you planning to add a toggle option for storing to Blob in the main branch? As I understand from the discussion, after the implementation of Region in Motoko your library will also change, and then it is unclear what prospects await the branch that uses Blob.


Hey @rabbithole, thanks for the kind words.

Concerning this lib, I do not advise using it for the following reasons:

  • It has good test coverage, but still, I don’t think many people are using it (which is a bit of a red flag to me). I also know @skilesare once had an issue initializing a btree with nodes of size 1024 kB; I don’t know whether he gave up on the lib or not.
  • The usage is a bit awkward: right now you need to keep track of the memoryIds of the memory chunks allocated for your btrees. Also, if you use it as is, you’re bound to the MemoryManager that comes with it, so if a future version uses memory Regions instead, you won’t be able to upgrade btrees that use the MemoryManager to ones that use Regions.
  • There’s probably a better alternative out there: the stable map from @ZhenyaUsenko (GitHub - ZhenyaUsenko/motoko-hash-map: Stable hash maps for Motoko), though I’m not sure it fits your need. In theory a btree should work better for storing big blobs, but I haven’t run any benchmarks myself comparing the stable btree and motoko-hash-map, so I couldn’t say.
  • Finally, I don’t have the time right now to adapt it to the new memory Regions myself, but it would indeed be nice to have that implemented someday.

Concerning the changes on the branch you’re pointing to: they were meant to speed up function execution (by avoiding unnecessary conversions between Blob and [Nat8] when reading/writing stable memory). I don’t think they reduce the size of the data stored in the btree by much, but I might be wrong; I don’t know how the in-memory size of a Blob compares to that of a [Nat8].

FWIW, I am adapting this code to use the Region system in the latest version of moc (0.10.0) and forthcoming in DFX, hopefully very soon.

See the ongoing PR for details.


I’ve merged into trunk the changes to use Region instead of ExperimentalStableMemory. I made a few changes to the lib’s interfaces, inspired by the Map from @ZhenyaUsenko. The intended usage:

  import BTree "mo:StableBTree/BTree"

  // Arbitrary use of (Nat32, Text) for (key, value) types
  let n32conv = BTree.n32conv;
  let tconv = BTree.tconv(64); // Max 64 bytes per value
  stable let _btree = BTree.new<Nat32, Text>(n32conv, tconv);

  let old = BTree.put(_btree, n32conv, 0, tconv, "hello");
  let new = BTree.get(_btree, n32conv, 0, tconv);
  let size = BTree.size(_btree);

  assert Option.isNull(old);
  assert Option.isSome(new);
  assert size == 1;

Where the types are as follows:

type BTree<K, V> = {
    region: Region;
    key_nonce: K;
    value_nonce: V;
};

type BytesConverter<T> = {
    from_bytes: (Blob) -> T;
    to_bytes: (T) -> Blob;
    max_size: Nat32;
    nonce: T;
};
The “nonces” (which should probably be renamed) are only there to capture the original type parameters, so that when a BTree is loaded from stable memory later, compilation prevents loading it with type parameters different from the original ones. It’s a bit of a trick, but I don’t know how else to support type parameters.
The max_size is used at initialization of the btree and is stored directly in stable memory. If you ever give a greater max size later, it will have no effect, and inserting a key/value larger than the original max size will still trap.
If you have better ideas, please share.
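For illustration, here is what a hand-written converter matching the `BytesConverter` type above could look like for Text values. This is a sketch based only on the record type shown; whether the lib expects users to build converters this way is my assumption, and the 128-byte limit is arbitrary:

```motoko
import Text "mo:base/Text";
import Option "mo:base/Option";

// Hypothetical custom converter for Text values of up to 128 bytes.
let tconv128 : BytesConverter<Text> = {
  // Decode stored bytes back to Text; fall back to "" on invalid UTF-8.
  from_bytes = func (b : Blob) : Text { Option.get(Text.decodeUtf8(b), "") };
  // Encode Text to its UTF-8 bytes for storage.
  to_bytes = func (t : Text) : Blob { Text.encodeUtf8(t) };
  max_size = 128; // bytes, fixed at btree initialization
  nonce = ""; // placeholder value carrying the Text type parameter
};
```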

I’d like to create benchmarks for the lib before releasing it on mops. @ZhenyaUsenko, I was wondering how you came up with the performance metrics for your stable map?


This is what I use to track performance:

public query func testPerf(): async [Text] {
    let map = Map.new<Nat32, Nat32>();

    let startSpace = Prim.rts_heap_size();

    let setCost = IC.countInstructions(func() {
      var i = 0:Nat32;

      while (i != 100000) { Map.set(map, n32hash, i, i); i +%= 1 };
    });

    let setSpace = Prim.rts_heap_size() - startSpace:Nat;

    let getCost = IC.countInstructions(func() {
      var i = 0:Nat32;

      while (i != 100000) { ignore Map.get(map, n32hash, i); i +%= 1 };
    });

    let getSpace = Prim.rts_heap_size() - startSpace:Nat - setSpace:Nat;

    let updateCost = IC.countInstructions(func() {
      var i = 0:Nat32;

      while (i != 100000) { Map.set(map, n32hash, i, i); i +%= 1 };
    });

    let updateSpace = Prim.rts_heap_size() - startSpace:Nat - setSpace:Nat - getSpace:Nat;

    let deleteCost = IC.countInstructions(func() {
      var i = 0:Nat32;

      while (i != 100000) { Map.delete(map, n32hash, i); i +%= 1 };
    });

    let deleteSpace = Prim.rts_heap_size() - startSpace:Nat - setSpace:Nat - getSpace:Nat - updateSpace:Nat;

    var i = 0:Nat32;

    while (i != 100000) { Map.set(map, n32hash, i, i); i +%= 1 };

    let deleteDescCost = IC.countInstructions(func() {
      var i = 100000:Nat32;

      while (i != 0) { i -%= 1; Map.delete(map, n32hash, i) };
    });

    return [
      "set cost - " # debug_show(setCost),
      "get cost - " # debug_show(getCost),
      "update cost - " # debug_show(updateCost),
      "delete cost - " # debug_show(deleteCost),
      "delete desc cost - " # debug_show(deleteDescCost),
      "set space - " # debug_show(setSpace),
      "get space - " # debug_show(getSpace),
      "update space - " # debug_show(updateSpace),
      "delete space - " # debug_show(deleteSpace),
    ];
  };
Just to understand, what exactly is the region used for? Is the btree operated on while it is stored in a region? Or does it live on the heap and then is serialized to a region?

And how does it compare to the “changes” mentioned here?

The Region gives an isolated version of the old “experimental” stable memory API. Each Region can grow independently of the others, and each has its own “address space”.

The rest of the BTree logic closely mimics the Rust version of the same thing. Both mutate the btree in stable memory and do their own customized “memory allocation” to reuse the space of old tree nodes when allocating new ones.
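For readers who haven’t tried it yet, the isolation the Region provides looks like this in practice (a minimal sketch using the real `mo:base/Region` module; the data written is arbitrary):

```motoko
import Region "mo:base/Region";

// Each Region is an isolated, independently growable stable memory space.
let r1 = Region.new();
let r2 = Region.new();

// Grow each region by one 64 KiB page; `grow` returns the previous size in pages.
ignore Region.grow(r1, 1);
ignore Region.grow(r2, 1);

// Offsets are relative to each region, so these writes cannot clobber each
// other even though both use offset 0.
let a : Blob = "tree A";
let b : Blob = "tree B";
Region.storeBlob(r1, 0, a);
Region.storeBlob(r2, 0, b);
assert Region.loadBlob(r1, 0, 6) == a;
```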

Or does it live on the heap and then is serialized to a region?

It does not do this.

Speaking of which, on Wed at Global R&D, @chenyan presented some performance data comparing these structures across two dimensions:

  • Rust vs Motoko
  • Heap vs Stable Memory

Maybe he can paste that table of data here.