Question on updating data structures of stable variables in Motoko

I've looked through the posts on this forum where people had similar issues, but none of them seem to give a complete explanation of the process for updating the data structure of a stable variable. None of them start at the beginning, and the documentation is confusing to me. Would someone be willing to help me with this?

I have a stable variable that I’m trying to update. That stable variable is defined like so:

    private stable var journal : Trie.Trie<Nat, JournalEntry> = Trie.empty();

The JournalEntry type is defined as follows:

    type JournalEntry = {
        entryTitle: Text;
        text: Text;
        location: Text;
        date: Text;
        lockTime: Int;
        unlockTime: Int;
        sent: Bool;
        emailOne: Text;
        emailTwo: Text;
        emailThree: Text;
        file1MetaData: {
            fileName: Text;
            lastModified: Int;
            fileType: Text;
        };
        file2MetaData: {
            fileName: Text;
            lastModified: Int;
            fileType: Text;
        };
    };

I want to add a read field of type Bool to the JournalEntry type.

I don't have much of an idea of how to go about doing this. I know that the preupgrade and postupgrade hooks have something to do with it, and that's about as far as my knowledge spans on this topic. Would someone be so kind as to explain the process that gets me from my current data structure to the desired data structure where the read field is included? An example of how the preupgrade and postupgrade methods should be defined would be great as well!

Thanks in advance!

6 Likes

So according to this, you’re allowed to add stable variables, but you can only update an existing stable variable type to one that is a supertype of the original type.

In your case, you're trying to do the latter: update an existing JournalEntry-typed stable variable. Unfortunately, an object with an additional field is a subtype of the original object and NOT a supertype, so if you try adding read and running dfx deploy you should get a stable compatibility error.

I think you may need to create a new stable variable of type NewJournalEntry (with the read field) and migrate the old stable variable contents to the new stable variable.

Here’s a sketch of what I’d try:

  1. Create a new stable variable of type NewJournalEntry
  2. Add logic in postupgrade that manually constructs the NewJournalEntry stable variable from the existing JournalEntry stable variable
  3. Run dfx deploy (no stable compatibility errors should occur)
  4. Remove the old JournalEntry stable variable
  5. Remove the postupgrade logic you added in step 2
  6. Run dfx deploy (you’ll get an error for deleting a stable variable… just say “yes”)

I think preupgrade shouldn’t be involved.
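
To make step 2 concrete, here's a rough, untested Motoko sketch of what I mean. The names NewJournalEntry and journalV2 are made up, only a couple of the JournalEntry fields are shown, and I'm assuming Trie.mapFilter from the base library's Trie module to rebuild the entries:

    // Sketch only: the new type with the extra field (most JournalEntry fields elided).
    type NewJournalEntry = {
        entryTitle: Text;
        text: Text;
        read: Bool; // the new field
    };

    // Step 1: a new stable variable alongside the old one.
    private stable var journalV2 : Trie.Trie<Nat, NewJournalEntry> = Trie.empty();

    // Step 2: rebuild every entry from the old trie after the upgrade.
    system func postupgrade() {
        journalV2 := Trie.mapFilter<Nat, JournalEntry, NewJournalEntry>(
            journal,
            func (_key : Nat, old : JournalEntry) : ?NewJournalEntry {
                ?{
                    entryTitle = old.entryTitle;
                    text = old.text;
                    read = false; // default for entries created before the upgrade
                }
            }
        );
    };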

@chenyan, can you confirm whether this is the right way to go about this? For the record, I’ve never done this and if I were trying it, I’d definitely test it locally before doing it in mainnet.

Also, you need to make sure you have enough canister memory to do this, as it temporarily requires roughly double the storage for the journal data… so not ideal.

I’m hoping the stable compatibility restrictions can be relaxed at some point. Adding fields to records seems like a very common use case.

4 Likes

Given { stuff }, { stuff; extra: Bool; } is indeed a subtype, but { stuff; extra: ?Bool; } is a supertype, if I remember correctly. As long as the fields are optional, you can add more of them and they will be populated with null on upgrade.
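
Roughly, in toy form (not your actual types):

    type Old = { a: Text };

    // Adding a required field: should be rejected by the stable compatibility check,
    // since existing values have no b to supply.
    type WithRequiredField = { a: Text; b: Bool };

    // Adding an optional field: should be accepted, with existing values
    // getting b = null after the upgrade.
    type WithOptionalField = { a: Text; b: ?Bool };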

2 Likes

If that’s the case, that makes life so much simpler. I’ll give that a try.

How would I go about defining the upgrade hooks to update the stable variable? And what's the procedure for performing the update?

You would not need an upgrade hook. Defining the existing stable variable as the new type is enough.

Do you have a reference to an example?

I have no such example for Motoko, but in Rust, in the cycles wallet, we recently added a new field to a data type we were storing in stable storage, and the only special thing we do in the post-upgrade hook is copy data into it (because it’s just an index of existing data).

1 Like

Taking a look now. Thank you so much

Rust is foreign to me at the moment. Where exactly are the preupgrade and postupgrade hooks defined in the example you sent me? And which data type is it that had the new field added to it?

edit: I just re-clicked the link you provided. It took me directly to the data type. Thank you!

Yes, it is unfortunate that you cannot add a new field (it doesn't matter if it's optional or not) to a stable record, because the stable typing follows the Motoko subtyping rule, not the Candid subtyping rule.

To add a new field, you will need to define a new stable variable and copy the old data over to the new stable var. This can be done either through the postupgrade logic or in the initialization if the data structure is simple enough. For example,

    stable var old_record = { a = 42; };
    stable var new_record = { a = old_record.a; b = "default_value_for_new_field" };

This is not ideal as you point out: 1) it doubles the memory usage; 2) it’s simply not convenient, as adding a new optional field is very common.

We chose this approach because it's the least-effort way at the moment to statically prevent data loss during upgrades. In the long term, we probably need to design a new serialization format for stable variables to fit these use cases better.

3 Likes

Another relevant link that tries to explain this is

I was actually able to add a new optional field to the stable variable. It was pretty easy. I didn't have to define any upgrade hooks either.

The stable variable that I have is

    private stable var journal : Trie.Trie<Nat, JournalEntry> = Trie.empty();

where JournalEntry was originally structured like so:

    type JournalEntry = {
        entryTitle: Text;
        text: Text;
        location: Text;
        date: Text;
        lockTime: Int;
        unlockTime: Int;
        sent: Bool;
        emailOne: Text;
        emailTwo: Text;
        emailThree: Text;
        file1MetaData: {
            fileName: Text;
            lastModified: Int;
            fileType: Text;
        };
        file2MetaData: {
            fileName: Text;
            lastModified: Int;
            fileType: Text;
        };
    };

I added a new optional read field, so the data structure was updated to look like this:

    type JournalEntry = {
        entryTitle: Text;
        text: Text;
        location: Text;
        date: Text;
        lockTime: Int;
        unlockTime: Int;
        sent: Bool;
        emailOne: Text;
        emailTwo: Text;
        emailThree: Text;
        read: ?Bool;
        file1MetaData: {
            fileName: Text;
            lastModified: Int;
            fileType: Text;
        };
        file2MetaData: {
            fileName: Text;
            lastModified: Int;
            fileType: Text;
        };
    };

One thing you have to be careful of is that if you are hardcoding values for an optional field, you have to be sure to put the ? in front of the value; otherwise, you'll get an error when you try to deploy. In my case, I was hardcoding the read boolean field as false and getting errors when trying to deploy. The proper way to hardcode the value was read = ?false.
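
In other words (a toy example, not my actual type):

    type Entry = { title: Text; read: ?Bool };

    let ok : Entry = { title = "hello"; read = ?false };
    // let bad : Entry = { title = "hello"; read = false }; // type error: Bool is not ?Bool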

And that did the trick.

I was able to update the data structure and deploy it without getting an error, but when I look at what's returned from the backend, the new field isn't present. Is there a specific dfx command I need to run after updating data structures for the changes to take effect after deploying to the IC? The command I ran was dfx deploy --network ic <canister-name>

I figured out why the new field wasn't returning from the backend. I deployed the backend but didn't redeploy the frontend, since I hadn't changed anything there. Apparently I still needed to redeploy the frontend in order for the new field to show up from the backend.

Interesting! That’s really surprising to me, as that doesn’t seem documented here.

@chenyan @claudio Do you mind confirming if this is intended? Thanks!

1 Like

The behaviour is expected, but only in the current implementation, which uses an extended form of Candid to preserve stable variables across upgrades.

It is undocumented behaviour because we don’t want people to exploit/rely on it. Candid is not actually a good format for the purpose of storing stable variables and we would like the liberty to replace it with a better stable variable format in the future.

One of the reasons Candid is a poor choice is that it does not, and cannot easily, preserve sharing of immutable data. This means that, for example, in-memory DAGs can be linearized to trees on serialization, duplicating data and blowing up memory use.

2 Likes

It is documented in the Candid spec: candid/Candid.md at master · dfinity/candid · GitHub

Its purpose is not only stable variables but also API evolution: adding new optional parameters to functions, for example.

1 Like

@AdamS, @claudio, adding the new field as optional works for getting the canister to deploy with no errors, but for some reason that field always comes back as null, no matter what argument you pass in as the value.

Would the upgrade hooks allow me to iterate over a stable Trie and with each iteration perform the following:

1.) copy the data from the old Trie over to the new Trie
2.) delete the data from the old Trie right away?

I'm thinking of going with the strategy @jzxchiang suggested, but I don't want it to require double the storage in order to migrate the data. I'd rather migrate each key/value pair and then discard it immediately to limit the storage usage.

1 Like

Unfortunately not.

The Motoko GC will only kick in at the end of a message, so deleting entries incrementally won’t actually free up any space until after the upgrade has completed.

Creating the new Trie and then just discarding the old Trie might be easier, if you’ve got the space.

Another option might be to store the new data in a separate Trie, with the same keys, so you don't need to copy the existing data, but that introduces its own overhead.
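
As a rough, untested sketch of that separate-Trie idea (names made up; assumes Hash and Nat are imported from the base library alongside Trie):

    // A second stable trie holding only the new per-entry flag, keyed by the same Nat ids.
    private stable var journalReadFlags : Trie.Trie<Nat, Bool> = Trie.empty();

    func natKey(n : Nat) : Trie.Key<Nat> = { key = n; hash = Hash.hash(n) };

    // Entries with no flag stored count as "not read", so nothing needs migrating.
    func isRead(id : Nat) : Bool {
        switch (Trie.get(journalReadFlags, natKey(id), Nat.equal)) {
            case (?flag) { flag };
            case null { false };
        }
    };

    func markRead(id : Nat) {
        journalReadFlags := Trie.put(journalReadFlags, natKey(id), Nat.equal, true).0;
    };

The cost is an extra lookup and a bit of bookkeeping whenever you read or write an entry.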

3 Likes