Motoko stable record upgrades

There is something that’s not working as expected when upgrading stable variables.
The first example has no ‘var’ and upgrades as expected (like when receiving a request with missing opt fields): when the Candid decoder can’t find a value, it puts null in the first opt it can find while going down the tree.

[Motoko Playground - DFINITY]
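That first example has roughly this shape (a minimal sketch; the field names are assumed, not taken from the playground):

```motoko
// v1: an immutable stable record
stable let mem = { a = 1 };

// v2: `b` is new; the classical (pre-EOP) Candid-based decoder cannot find
// a value for it, so it fills the option with null (despite the compiler's
// incompatibility warning)
stable let mem : { a : Nat; b : ?Nat } = { a = 1; b = null };
```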

But if we put ‘var’ in front of the properties, it doesn’t work like that anymore and instead the whole stable object is deleted.
[Motoko Playground - DFINITY]
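And the ‘var’ variant of the same sketch:

```motoko
// v1: the same record, but with mutable fields
stable let mem = { var a = 1 };

// v2: the old mechanism cannot patch `var` fields, so the whole record is
// reset to its initializer and the previous data is lost
stable let mem : { var a : Nat; var b : ?Nat } = { var a = 1; var b = null };
```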

Can we get it to work the same way with or without ‘var’, @claudio?

Here is the pattern I was trying to get to work:
[Motoko Playground - DFINITY]

It is similar to Class+ [Writing Motoko stable libraries], except it’s not for libraries but for making a canister more modular. Each module is given the same object, but it picks from it only what it knows and needs to work with. A module’s memory can be upgraded by the module itself once it adds opt fields, and more modules can be added by adding their opt var memory types.
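A rough sketch of that pattern, with hypothetical module and field names (not the playground code):

```motoko
// ModA.mo — a module typed against only the fields it knows about
module {
  public type Mem = { var a : ?Nat };
  public func touch(m : Mem) { m.a := ?1 };
}
```

```motoko
// Main.mo — the actor owns one shared stable record
import ModA "ModA";

actor {
  stable let mem : { var a : ?Nat; var b : ?Text } = { var a = null; var b = null };

  // ModA only sees the `a` field (width subtyping); adding more modules
  // means adding more opt var fields to `mem`
  public func run() : async () { ModA.touch(mem) };
};
```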

I’ve only had time to look at your first two examples, but both of those are cases where the upgrade is not deemed compatible by the Motoko type system (which is why you get a warning about the upgrades in dfx/playground).

The only reason the upgrades work at all is that they rely on the old implementation of stable variables, which saves them in an extended Candid format and can insert (some) missing fields during upgrade, because it copies all the stable variable data to stable memory and can transform it when copying back into main memory. While useful, that strategy doesn’t scale to large or deeply nested data because of limitations of the (extended) Candid format.

Going forward, with enhanced orthogonal persistence (EOP), the data will not be copied and there is no opportunity to do field insertion etc. However, avoiding this copying step is one reason EOP scales to large data sets.

It is for this reason that Motoko statically rejects upgrades that fundamentally change the data format, even though this is supported by the old Candid mechanism and sometimes “works” as desired. We wanted to allow ourselves room for a different upgrade mechanism that only allows more limited upgrades but with much better scalability.

TL;DR your upgrades are ignoring compiler warnings/errors and exploiting an implementation detail that won’t be available in the future. You shouldn’t rely on it, but if you do, don’t expect it to keep working as we improve Motoko (e.g. with EOP).

2 Likes

Thanks for explaining (again). Adding to the record would solve most of the upgrade problems, so I was hoping this ‘hack’ would work, but I suppose special patching is needed when doing this.

With EOP, is there a new way of doing these upgrades, or do we have to use a patching pattern like GitHub - ZhenyaUsenko/motoko-migrations (Sample project structure to implement migrations in motoko)?

Going to figure something out, got a few ideas.

1 Like

How about this way of upgrading:
[Motoko Playground - DFINITY]

We start with an actor like this and call ‘inc’ a few times:

Then we change to this:

The files MemOne, V2, V3 are just there to make the example easier to follow and to show how the file changes.
MemOneV2 looks like this:

It upgrades the record whenever we use it.
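Since the actual snippets are behind the playground link, here is a hedged reconstruction of the idea (all names are assumed; the playground version apparently nests the variant under a mutable field, which matters below): memory is a versioned variant, and callers upgrade it to the newest version on use.

```motoko
// MemOneV2.mo — hypothetical reconstruction of the versioned-memory module
module {
  public type MemV1 = { var count : Nat };
  public type MemV2 = { var count : Nat; var step : Nat };
  // the stable type is a variant over all known versions
  public type Mem = { #v1 : MemV1; #v2 : MemV2 };

  // migrate older versions on demand and return the current record
  public func use(mem : Mem) : (Mem, MemV2) {
    switch (mem) {
      case (#v2(m)) { (mem, m) };
      case (#v1(m)) {
        let upgraded : MemV2 = { var count = m.count; var step = 1 };
        (#v2(upgraded), upgraded)
      };
    }
  };
}
```

```motoko
// Main.mo — every call migrates the record before using it
import MemOneV2 "MemOneV2";

actor {
  stable var mem : MemOneV2.Mem = #v2({ var count = 0; var step = 1 });

  public func inc() : async Nat {
    let (migrated, m) = MemOneV2.use(mem);
    mem := migrated;
    m.count += m.step;
    m.count
  };
};
```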

But we still have an incompatible stable signature because of the new variants. Is that still a problem? I remember you said it was okay before in Zhenya’s pattern, but perhaps EOP is changing that too?

If these new variants are also not getting copied, then we can only upgrade root-level actor variables by changing their names. We also have to upgrade the whole memory all at once inside the actor body, and if it’s large, it may run out of instructions. Nested objects (OOP-style upgrades) will probably be harder too.
Using ICRC3-like GenericValues such as [(#Text(key), #Nat(val))] is a solution, but its memory footprint will be quite large, access performance will be slower, and it loses type checking.
It will be good to know how you plan on making it work.
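For reference, the kind of untyped representation meant above, as a cut-down, ICRC3-style value type (not the full standard):

```motoko
// the stable type never changes, so upgrades never conflict,
// at the cost of type safety, memory footprint and access speed
type Value = {
  #Nat : Nat;
  #Text : Text;
  #Array : [Value];
  #Map : [(Text, Value)];
};

stable var mem : [(Text, Value)] = [("count", #Nat(0))];
```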

[Motoko Playground - DFINITY]


If this is the only way of doing it, then a more complicated structure with nested modules from different vendors and different versions is going to be nearly impossible to upgrade.

Came up with this solution: https://github.com/Neutrinomic/mosup
It will probably be best if a script automatically generates the entry actor file.
Its goals will be:

  • Every module has the same abilities as it would have occupying the actor itself
  • Every module specifies how its memory gets upgraded, with a strictly defined file structure and type names
  • Memory upgrade paths are executed automatically per module
  • Neither the dev nor any of the modules writes directly inside the actor entry file; it’s auto-generated
  • Modules can use each other

I haven’t been able to do a deep dive, but is this implying that the migration pattern from ZhenyaUsenko is no longer going to work with EOP?

By the looks of it, yes (still waiting for Claudio’s confirmation).


It’s showing ‘incompatible stable signature’ when deploying, because of the new variants.

JFC…that may be it for me. EVERYTHING I have integrates that pattern. I just spent the last week adding utilities to simplify it and remove almost all the boilerplate for the user.

2 Likes

I think the MoUP idea will work. Still refining it. Devs write their actors like this:


Putting in as many modules as they want and interconnecting them.
Example: an ICRC3 module takes care of icrc3 while the other modules just write to it. When the ICRC3 module needs a memory upgrade, devs won’t need to do anything.

Modules look like this:

When the MoUP macro runs before the compiler, it produces another file, replacing and adding a bunch of things. Typechecking works all the time, even with the pre-processed actor.

MoUP was taken on npm, so it was renamed to MoSUP: (Mo)toko (S)table memory (UP)grade system.

I made a prototype tool that gets this pre-processing done. Demo here:

There is a mops package and an npm package.
Unless the Motoko team (@claudio) has another solution that doesn’t require metaprogramming for automation, we will use that.
To use it, a module developer has to structure their files exactly as required.
Then
deno run npm:mosup src/entry.mu.mo
will run the pre-processor without granting it unrestricted access to files or networking.

In one of the previous Motoko developer working groups, I believe @luc-blaeser was coming up with an easy migration pattern, integrated at the language level, that would run on upgrade and roll back if a trap occurred.

So I think this is being considered/worked on?

I asked him if there was a downgrade abstraction as well (allowing you to downgrade the migration if there’s a bug in new code that uses it), so I’m not sure if they’re considering that or not.

1 Like

I don’t think explicit ‘downgrades’ are needed, since we can downgrade with an upgrade patch or use canister backups.

It will be good to know what’s planned sooner rather than later; a lot of architectural decisions depend on that.

1 Like

I can confirm that the upgrade to the new enhanced orthogonal persistence will automatically migrate all your stable variables. Moreover, the runtime system is strict with regard to the memory compatibility check, rolling back without losing state if the types are not compatible. As mentioned, a downgrade to classical stable variable persistence is not foreseen (technically it would even be possible to realize, but it requires engineering time and we are not sure it is really needed).
In general, I recommend carefully considering the dfx warnings and errors for Motoko upgrades. For example, dropping stable variables is now possible, but a dfx warning is issued and explicit consent is required.
Please let me know if I can provide more information.

1 Like

Thanks for hopping in.
Ok, classical stable variables → EOP stable variables ✅
How about
EOP stable variables → EOP stable variables
that don’t pass the compatibility check, for example when a new field is added to a record or a new variant is added? Will they just roll back if not compatible?

Thanks!
Stable variable upgrades with EOP also work, supporting any kind of migration:

  • Implicit migrations, such as promoting to a super-type or adding/dropping actor fields, are automatically supported; this is what the runtime system checks.
  • Explicit migrations are needed for any more complex data type changes, including adding record fields. A simple example can be found in https://internetcomputer.org/docs/current/motoko/main/canister-maintenance/compatibility#explicit-migration. The idea is to keep the old declarations, introduce a new variable with the desired type, and then copy the data on upgrade (in the actor initialization sequence; no post-upgrade system function is needed). After a successful custom migration, the old variables can be dropped. A sketch follows this list.
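A minimal sketch of that recipe, assuming a v1 that stored `stable var users : [Text]` (the names are mine, not from the docs):

```motoko
import Array "mo:base/Array";

actor {
  type User = { name : Text; age : ?Nat };

  // old declaration kept, so its data survives the upgrade
  stable var users : [Text] = [];

  // new variable with the desired type; its initializer runs in the actor
  // initialization sequence during the upgrade and copies the old data over
  stable var userRecords : [User] =
    Array.map<Text, User>(users, func(n : Text) : User = { name = n; age = null });

  // after the migration has succeeded, `users` can be dropped in a later version
};
```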

There is also a recent paper that gives more detailed insight into EOP and the upgrade techniques: Smarter Contract Upgrades with Orthogonal Persistence | Proceedings of the 16th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages

I am happy to provide more information.

5 Likes

I have opened a beer and poured it out for you sir.

1 Like

It’s all good. I had planned to write some lazy upgrade libraries anyway, but I hadn’t bet on having to go back and refactor every Motoko library I’ve written in the last two years. It is just going to slow me down… and the current stuff will all still work on 32-bit, which will be fine for 90% of use cases.

…save the beer so we can drink it together. I will need it.

1 Like

Oh dear, sorry I’m so late to the party.

If the @ZhenyaUsenko pattern worked before EOP, without ignoring an incompatibility warning, then it should still continue to work just fine, even with EOP. We have not changed the abstraction, only changed its implementation, so everything that was (legally) permitted before should still work, just better and more reliably.

The pattern that @infu described and is statically rejected with an incompatibility warning is slightly different from the @ZhenyaUsenko pattern since it is adding a variant under a nested mutable field of an object. That is not allowed, and never was allowed. Nested mutable fields behave invariantly with regard to upgrade compatibility. @ZhenyaUsenko’s pattern, on the other hand, is adding a variant to the type of a stable field (which is just fine).

This is legal:

```motoko
// version n
stable var v : {#a} = #a;

// version n+1
stable var v : {#a; #b} = #b;
```

This is not:

```motoko
// version n
stable var r = { var v : {#a} = #a };

// version n+1
stable var r = { var v : {#a; #b} = #b };
```

@skilesare if your upgrades never involved ignoring compatibility warnings, your code should continue to work just fine.

Here’s an example for the playground:
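Along those lines, a minimal sketch of the legal case as two successive versions of the same actor (my own reconstruction, not the linked playground code):

```motoko
// version n
actor {
  stable var v : { #a } = #a;
  public query func read() : async Text { debug_show v };
};
```

```motoko
// version n+1: extending the variant at the top level is a legal upgrade,
// and `v` keeps its old value across the upgrade
actor {
  stable var v : { #a; #b } = #b;
  public query func read() : async Text { debug_show v };
};
```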

For EOP stable variables → EOP stable variables that don’t pass the compatibility check, the EOP runtime actually performs an additional dynamic check and will indeed trap and roll back if the stable variable types are incompatible. This is much safer than allowing the user to ignore the static check, which can lead to data loss.