Playing around with these libraries, it became apparent that each one has different pros and cons. For example:
The two external libraries use different encoding schemes, which require different levels of investment to get a data structure working
One library (ic-stable-structures) does not allow for nested variables (composite keys are suggested instead)
The standard Rust library can brick a canister by running out of cycles in the pre-upgrade hook. This may have improved with DTS, but I couldn’t verify the exact extent of the problem right now
Each library performs differently, and both are noticeably slower than the standard library
The ic-stable-structures library is used in a few of DFINITY’s canisters and in OpenChat, while I couldn’t find any project on GitHub building on ic-stable-memory. That’s not an issue by itself, but the two libraries may see different levels of support over time
Data migration questions
But one thing is not clear at all at this point: how do migrations work with the two Rust libraries, ic-stable-memory and ic-stable-structures?
The ic-stable-memory docs suggest having an enum with struct V1 and struct V2 variants. The ic-stable-structures docs do not address a similar question, but there are some hints from @roman-kashitsyn in IC internals: Internet Identity storage that reserving extra space in a stable structure might allow future extensions.
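As I understand the enum suggestion, it would look roughly like the sketch below. This is plain Rust with invented type and field names; the actual ic-stable-memory derive macros and storage calls are omitted:

```rust
// Hypothetical versioned record following the "enum with V1/V2 structs"
// suggestion; names are invented for illustration only.
#[derive(Debug)]
enum StableUser {
    V1 { id: u64 },
    V2 { id: u64, name: String },
}

impl StableUser {
    // Migrate any stored version forward to the latest layout.
    fn upgrade(self) -> StableUser {
        match self {
            // V1 records gain the new field with a default value.
            StableUser::V1 { id } => StableUser::V2 {
                id,
                name: String::new(),
            },
            latest @ StableUser::V2 { .. } => latest,
        }
    }
}
```

The open question is whether the library expects you to run `upgrade` lazily on read or eagerly over the whole collection.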
So, a few questions:
What is the standard way to update an ic-stable-structures variable?
Is there a working example of ic-stable-memory’s upgrade mechanism with enums?
Is it an anti-pattern to copy the data to a new memory slot and then back into a new structure? My understanding so far is that this is how SQL systems currently work
What is the current state of DTS with larger amounts of heap memory?
Stable structures rely on the Storable trait to load and store data from and into stable memory. The only requirement when updating the schema is for the implementation of Storable to be backward compatible.
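A minimal sketch of what a backward-compatible encoding might look like. The trait below is a local stand-in mirroring ic-stable-structures’ `Storable` (the real trait’s exact signature varies between crate versions), and the version-byte scheme is just one illustrative choice:

```rust
use std::borrow::Cow;
use std::convert::TryInto;

// Simplified stand-in for ic-stable-structures' Storable trait;
// the real trait's signature may differ between crate versions.
pub trait Storable {
    fn to_bytes(&self) -> Cow<[u8]>;
    fn from_bytes(bytes: Cow<[u8]>) -> Self;
}

// V1 stored only an id; V2 added an optional name.
#[derive(Debug, PartialEq)]
pub struct User {
    pub id: u64,
    pub name: Option<String>, // field added in V2
}

impl Storable for User {
    fn to_bytes(&self) -> Cow<[u8]> {
        // Always write the newest layout, tagged with a version byte.
        let mut buf = vec![2u8];
        buf.extend_from_slice(&self.id.to_le_bytes());
        if let Some(name) = &self.name {
            buf.extend_from_slice(name.as_bytes());
        }
        Cow::Owned(buf)
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        let id = u64::from_le_bytes(bytes[1..9].try_into().unwrap());
        match bytes[0] {
            // Old V1 records had no name; decode them with a default.
            1 => User { id, name: None },
            2 => {
                let name = if bytes.len() > 9 {
                    Some(String::from_utf8(bytes[9..].to_vec()).unwrap())
                } else {
                    None
                };
                User { id, name }
            }
            v => panic!("unknown schema version {v}"),
        }
    }
}
```

The key property is that `from_bytes` accepts both layouts while `to_bytes` only emits the newest one, so records migrate forward as they are rewritten.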
Maybe it’s best to walk through this with an example. Can you share a specific example of a data schema and the change you’d like to make?
Generally yes, because there are limits on the number of instructions you can execute within a message. It’s currently 20 billion instructions with DTS. Evolving your schema in a backwards-compatible way is the safer and more efficient option in the vast majority of cases.
The example below describes a SQL-like relation we have in our codebase. I hope it’s not too long or complex; I’m sharing it in its entirety since it illustrates many of the issues we will face with migrations and hope to prepare for accordingly.
Can you also share how you’re implementing Storable? That implementation is really the key to evolving the schema. All you’d need to do is ensure that your Storable implementation works on both the old schema and the new schema.
By the way, I don’t know how big you expect your dataset to be, but if it’s early in the project, I’d consider using the event log pattern as is currently being done with ckBTC. It should give you a lot of flexibility to evolve your schema in the early days, then once it’s established and you need to scale, you can switch to using stable BTreeMaps.
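For context, the event log pattern could be sketched as below. The event and state types are made up, and ckBTC’s actual implementation differs; the point is only that state is derived by replaying an append-only log, so a schema change mostly means keeping old event variants decodable:

```rust
use std::collections::BTreeMap;

// Hypothetical event type; a real canister would persist these events
// in stable memory and replay them on upgrade to rebuild heap state.
#[derive(Debug, Clone)]
enum Event {
    UserCreated { id: u64 },
    UserRenamed { id: u64, name: String },
}

#[derive(Debug, Default, PartialEq)]
struct State {
    users: BTreeMap<u64, String>,
}

impl State {
    fn apply(&mut self, event: &Event) {
        match event {
            Event::UserCreated { id } => {
                self.users.insert(*id, String::new());
            }
            Event::UserRenamed { id, name } => {
                self.users.insert(*id, name.clone());
            }
        }
    }

    // Rebuild the full state by replaying the log from the beginning.
    fn replay(events: &[Event]) -> State {
        let mut state = State::default();
        for e in events {
            state.apply(e);
        }
        state
    }
}
```

Since the log is the source of truth, you can change the in-memory `State` representation freely, as long as replay still understands every historical event.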