Hey everyone, I’m here with a new Rust library again.
It is called ic-stable-memory and it basically allows you to use stable memory as main memory.
With this library you can create stable variables - variables that live directly in stable memory and do not require the usual serialization/deserialization routine during canister upgrades. You can also use stable collections (there are two at the moment: a stable vec and a stable hashmap), which likewise live purely in stable memory and can hold as many elements as the subnet will allow your canister to allocate.
Remember - this is still version 0.0.1, so it is a bit limited, a bit unoptimized and a bit buggy. But anyway, I encourage you to give it a shot. Also, this is a pretty complex piece of software (at least for me), so any help is also greatly appreciated: PRs, design proposals, bug reports or any other feedback.
Also, I published some articles (on this cool new purely-web3 blogging platform called papy.rs) to make it easier to understand what this library is and how to use it:
This is what I’m thinking of right now. Such a solution could bring a seamless data certification developer experience, but I need to do some more research before trying to implement it.
There is a thing in Ethereum called Verkle trees. I would like to try to implement it, but I don’t know anything about this type of crypto. If someone could assist me with the theory behind it, it would be awesome.
I will take a look. How are you allocating/deallocating the memory? Does it grow as needed, or can you set boundaries? If a variable is deleted, does that create fragmentation, or do you reallocate accordingly?
One major problem I see is how to allocate and deallocate memory.
Yeah, just as with regular data structures, there's a large design space of authenticated data structures (that's the general term for what we'd call certified data structures) with different trade-offs. It would be great if someone had a resource that compares the different authenticated data structures. I'm not sure Verkle trees are the best structure to start with.
There is a segregated free-list based memory allocator. When you ask it to allocate a new memory block of some size, it first searches its free-list. If there is no free block of the requested size, it tries to find any free block of a bigger size. If there is such a bigger block, it splits it in two. If there is no such bigger block, it tries to call stable_grow(1). If the call succeeds, it repeats the previous steps. If the call fails, it returns OutOfMemory, which you can respond to programmatically (e.g. spawn another canister to scale horizontally).
You can't set artificial growth boundaries right now, but it seems like a good idea to add such a feature. Thanks!
When you delete a variable, its memory block is freed. The new free block is added back to the free-list (and possibly joined with its free neighbors). So yes, there is a bit of fragmentation, but I need to add some tests to understand how much impact it has.
We started with 4GB just to accommodate the use case of upgrading canisters, and I think we are just cautious about increasing it too much because it's always hard to put a restriction back, and thus far no one is close to using the 8GB.
Can we start lifting the limit? Keeping it artificially low is, in my case, kind of preventing me from using it in Sudograph. I would like to be able to tell people how much storage they can use in Sudograph; that's a major limitation for people. I would also love to see a schedule so we know when to expect limits to be lifted. If we can really get into the hundreds of GB, then this might be an all-in solution for Sudograph and any other library willing to embrace stable memory data structures.
Keeping it low might be a self-fulfilling prophecy: people know they can't use it for much, so perhaps they don't.