Update: we’ve released 0.6.0-beta.2. This release makes BTreeMap V2 the default implementation and has the API you should expect in the upcoming production release.
We have been running fuzz tests for over a week and haven’t discovered any issues, so we expect to make a production release soon. In the meantime, please share any feedback you have.
And a special thanks to @witter and @b3hr4d for their code contributions.
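For anyone trying the beta, here’s a rough sketch of how a map can be declared against the 0.6.x API (the u64 key and Vec<u8> value types below are just placeholders for illustration):

```rust
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, StableBTreeMap};
use std::cell::RefCell;

type Memory = VirtualMemory<DefaultMemoryImpl>;

thread_local! {
    // One MemoryManager multiplexes stable memory between multiple structures.
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    // A map that lives entirely in stable memory.
    static MAP: RefCell<StableBTreeMap<u64, Vec<u8>, Memory>> = RefCell::new(
        StableBTreeMap::init(MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0)))),
    );
}

fn store_blob(key: u64, blob: Vec<u8>) {
    // `insert` returns the previous value for the key, if any.
    let _previous = MAP.with(|map| map.borrow_mut().insert(key, blob));
}
```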
@peterparker 0.6.0-beta.2 isn’t backward-compatible with 0.6.0-beta.1 (I didn’t mention that because I didn’t announce 0.6.0-beta.1 on this thread), so a migration is unfortunately not possible.
Gotcha, thanks Islam. So I tried 0.6.0-beta.2 in a brand-new canister, and everything went fine, including my test above of uploading a 10 MB file into an unbounded BTreeMap.
It is unfortunately not possible to do so, as memory management in stable structures assumes that this number doesn’t grow. An alternative would be to store these extra fields in an additional structure. If you’re using BTreeMap, then consider also making that type Unbounded to give yourself flexibility.
@zohaib29 Actually, an even easier solution would be to change the type of the value in your current map to be Unbounded, and then you can add an additional field without any problems. I totally forgot that we support moving from Bounded types to Unbounded types - thanks for reminding me @dragoljub_duric.
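For concreteness, here’s roughly what an Unbounded Storable implementation looks like in 0.6.x, using Candid for (de)serialization; UserData is a hypothetical value type standing in for yours:

```rust
use candid::{CandidType, Decode, Deserialize, Encode};
use ic_stable_structures::storable::{Bound, Storable};
use std::borrow::Cow;

// Hypothetical value type; because the bound is removed, fields can be added later
// (new fields declared as Option decode as None for entries written before the change).
#[derive(CandidType, Deserialize)]
struct UserData {
    name: String,
    bio: Option<String>,
}

impl Storable for UserData {
    fn to_bytes(&self) -> Cow<[u8]> {
        Cow::Owned(Encode!(self).unwrap())
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        Decode!(bytes.as_ref(), UserData).unwrap()
    }

    // No maximum size: values are stored in however many bytes they serialize to.
    const BOUND: Bound = Bound::Unbounded;
}
```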
Thanks!
So we don’t need to worry about memory management when using Unbounded types?
What happens when we remove the max-size property from the struct?
We’ve updated BTreeMap in such a way that you can remove the bound of a struct entirely to be Unbounded, so it should work out of the box (and you should be able to write a simple unit test to verify that).
Yes, it is possible to upgrade from older versions of BTreeMap to 0.6.x and change the implementation of Storable to be Unbounded for the keys and/or values.
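To make the upgrade path concrete, the change is essentially just the BOUND constant inside the value type’s Storable impl; the bound shown in the comment below is only an illustrative example:

```rust
use ic_stable_structures::storable::Bound;

// Inside the value type's `impl Storable` block:
// Before: a fixed upper bound on the serialized size of the value.
// const BOUND: Bound = Bound::Bounded { max_size: 1024, is_fixed_size: false };

// After: no bound at all; entries already stored in the map remain readable.
const BOUND: Bound = Bound::Unbounded;
```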
I’m curious about the implications of this for the linked stable_read and stable_write functions. I’d expect data to be stored in a format similar to SQL pages, so that multiple pages can be queried efficiently at the same time.
But if there’s no bounded size for stable structures, then data can’t be grouped into pages and therefore can’t be queried in parallel.
Am I missing something?
PS: Loosely related to this question from a year ago.
To support unbounded sizes in the stable BTreeMap, we still use fixed-size pages, but pages can “overflow”: if an entry is inserted whose size exceeds the capacity available in the page, a new page is allocated and its address is stored in the previous page.
In other words, BTreeMap nodes are now a linked list of pages rather than a single page. Databases like SQLite conceptually do something similar, though SQLite’s implementation differs substantially.
The page size depends on the data being stored: if it is bounded, we use a heuristic based on the bounds; otherwise, a default of 1024 bytes is used. See here.
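As a rough mental model (a conceptual sketch only, not the library’s actual internal types), a node’s bytes are spread over a chain of fixed-size pages linked by overflow addresses:

```rust
// Conceptual sketch only; the real node/page layout in stable-structures differs.
const DEFAULT_PAGE_SIZE: usize = 1024; // default used when the stored data has no bound

#[allow(dead_code)]
struct Page {
    data: [u8; DEFAULT_PAGE_SIZE],
    // Address (offset in stable memory) of the next overflow page, if any.
    next_overflow: Option<u64>,
}

// An entry larger than one page's usable capacity spills across ceil(len / capacity) pages.
fn pages_needed(entry_len: usize, capacity_per_page: usize) -> usize {
    entry_len.div_ceil(capacity_per_page)
}

fn main() {
    // e.g. a 10 MB value with ~1 KiB of usable capacity per page spans ~10,240 pages.
    println!("{}", pages_needed(10 * 1024 * 1024, DEFAULT_PAGE_SIZE));
}
```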
The closest thing we have would be the benchmarks in our CI. There you can see stats on the number of instructions used for entries of various sizes.
Comparison-style documentation can be quite a time sink and is often wrong and/or out of date. If there are specific questions you think our documentation doesn’t answer, please share them and we can address them.
I can’t find a straightforward/updated answer to this. Do we know how builders should think about storage capacity when deciding to use stable structures in production?
For example, what if an app (e.g. taggr/hotornot) reaches the canister limit on the StableBTreeMap that stores user posts (4 GiB?)? Can it be migrated/copied to a multi-canister architecture while preserving the original BTree? What’s the upper limit on this kind of storage?
And thank you. If it wasn’t for this library I couldn’t build anything here.
I’m not sure I understand the question. A stable structure can reach hundreds of GiBs in size - it isn’t constrained by the 4 GiB limit that currently applies to data stored on the heap. Moving to a multi-canister architecture is in principle possible with a data migration, but the specifics of that migration would largely depend on the dapp’s internals.