With Motoko, is it possible to serialize any type of object into a Blob so that it can be sent to another canister in chunks and then deserialized on the other end? Thank you.
Why do you need to chunk an object?
Are you worried you’ll hit the message limit? I believe those only apply to ingress messages (from users) and not inter-canister messages (from canisters).
Yes, that was one of the concerns. I am happy to hear that the transfer limit does not apply to cross-canister calls. Thank you for clarifying.
The other reason is to wrap objects into a generic format for on-chain backup of one or more dapp canisters. A common data type with a Blob field and some header information would enable the decoupling of a backup canister. Otherwise a backup canister would need to mirror a dapp canister’s stable signature and both canisters would need to be updated together. This defeats the purpose of protecting against accidental data loss during deployments.
Probably better if someone from DFINITY confirms that the message limit doesn’t apply to inter-canister calls.
I think there’s an upcoming feature to let developers download the state of their canister to somewhere offline and off-chain, for exactly this use case of protecting against data loss during canister upgrades.
Thanks. I heard about the canister download feature. However, I have a requirement for automatic, incremental, on-chain backups. This method protects user data by ensuring it never leaves the security of the IC.
With the system heartbeat feature coming soon, the only missing piece, as far as I know, is binary serialization for a generic storage format.
That’s an interesting use case that I feel like many devs would be interested in!
Maybe @claudio might know the answer to this.
There’s a super-type, Any. You could make your backup canister accept objects of type Any and then send it any shared type from the canister that is being backed up. That way your backup canister doesn’t have to be upgraded in sync.
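For illustration, a minimal sketch of that idea (the actor and method names here are hypothetical, and the buffer is not persisted across upgrades):

```motoko
import Buffer "mo:base/Buffer";

actor Backup {
  // Not stable across upgrades; a real implementation would persist this.
  let items = Buffer.Buffer<Any>(0);

  // Accepts any shared value, since every shared type is a subtype of Any.
  public func store(item : Any) : async () {
    items.add(item);
  };
};
```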
I think there is a size limit though, but I’ll let others confirm what it is.
But I’m not sure if Any solves your problem: how do you get the data out again for recovery in a meaningful way?
I assumed there was a way to down-cast a more generic type. Is that not possible?
I just tried to down-cast and it doesn’t seem to be supported. Does anyone know how to convert an instance of Any to an instance of Person in this example?
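Something like the following illustrates the problem (Person is just a stand-in record type):

```motoko
type Person = { name : Text; age : Nat };

let p : Person = { name = "Alice"; age = 30 };
let a : Any = p; // up-cast is fine: Person <: Any

// But there is no way back down:
// let q = (a : Person); // type error: Any is not a subtype of Person
// and Motoko has no runtime type test to branch on.
```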
I’m afraid you can’t. It’s a black hole.
That would be unfortunate, but there must be some workaround.
I see in the documentation: “no dynamic casts”.
That leaves me wondering why the Any type exists and how it is used.
My best solution for backing up data to a generic format is as follows:
- Create a custom mapper function for each type that copies the field names and values to a tuple array of [(Text, ?Text)]. Child objects/arrays will be flattened using a composite key to indicate the hierarchy. (See the sketch after this list.)
- Create parser functions for each base type (Nat, etc.) so that the Text values in the tuples can be converted back to their original types.
- Send the tuple array to a backup canister where it is added to a HashMap with a composite key of identifiers for the origin canister, data type and instance id.
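To make that concrete, here is a minimal sketch of one mapper and one parser for a hypothetical Person type (all names are illustrative):

```motoko
import Nat "mo:base/Nat";

type Person = { id : Nat; name : Text; nickname : ?Text };

// Mapper: flatten a Person into the generic tuple-array format.
func mapPerson(p : Person) : [(Text, ?Text)] {
  [ ("id", ?Nat.toText(p.id)),
    ("name", ?p.name),
    ("nickname", p.nickname) ]
};

// Parser for one base type: recover a Nat from its Text form.
func parseNat(v : ?Text) : ?Nat {
  switch v {
    case (?t) { Nat.fromText(t) };
    case null { null };
  }
};
```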
Although data values require more memory when stored as Text (rather than as byte arrays), this is slightly more efficient than JSON and runs less risk of parsing errors, and the implementation is simpler. Tuple arrays can be sent between canisters (unlike HashMaps), and they are typically the format used to persist stable variables during upgrades, which makes them a nice backup format.
Without support for reflection or dynamic type casting, I believe this is the best solution, but it requires custom mapper functions for each type. On the bright side, this also offers flexibility for handling backward compatibility with older versions of your data types. Obsolete fields can be ignored. Old field names can be mapped to new field names, etc.
Any feedback or logic improvements to the above would be greatly appreciated. Thank you!
ICP is the future, and we lucky ones who are in, especially everyone who paid less than $50 each, are going to be in such a good position. 2022 is going to be the year that shorts cover, which will shoot us to $250 and form a new base to take us to $1000 per ICP in 2022.
Hey @KennyBoyNYC. I’m honored that your first post was in my thread, even if it’s a bit off topic. I won’t speculate on price, but let’s just say that $ICP is my primary long-term investment.
If you’re not a web developer yet, now is a good time to learn. Full-stack = React + Motoko.
Ah. I’m honored. I’m not a developer, but I’ve heard amazing things about Motoko.
Let’s take a step back. If you store a lot of data in your canister, then for safe upgrades (to guarantee that an upgrade never fails by exceeding the cycle limit per execution) you will likely need to store data directly in stable memory via the low-level interface.
For the primitive types there are serialization functions provided in the ExperimentalStableMemory package that write the value directly into stable memory. For your own composite types you would have to write those serialization functions yourself. But hopefully the types holding the essential data you need for disaster recovery can be kept simple.
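As a rough sketch, assuming a hypothetical Profile type and that the needed pages have already been grown, such a hand-written serializer could look like this:

```motoko
import StableMemory "mo:base/ExperimentalStableMemory";
import Text "mo:base/Text";
import Nat32 "mo:base/Nat32";
import Nat64 "mo:base/Nat64";

type Profile = { id : Nat64; name : Text };

// Writes a Profile at `offset` and returns the offset just past it.
func writeProfile(offset : Nat64, p : Profile) : Nat64 {
  StableMemory.storeNat64(offset, p.id);
  let name = Text.encodeUtf8(p.name);
  // Length-prefix the variable-sized field so it can be read back.
  StableMemory.storeNat32(offset + 8, Nat32.fromNat(name.size()));
  StableMemory.storeBlob(offset + 12, name);
  offset + 12 + Nat64.fromNat(name.size())
};
```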
If you already use stable memory in that way, then it makes sense to use the serialization already present in stable memory for your backup solution. Note that data being in stable memory is already a form of backup: stable memory survives upgrades, and if an upgrade fails because the pre/post-upgrade hooks trap, the stable memory cannot be corrupted. That leaves you to protect against the case where the upgrade succeeds but the new code is buggy and corrupts your data after the upgrade.
To back up the data in stable memory, I would just take the entire memory pages and ship them off as a Blob to another canister. That way the type transmitted is always Blob, and the backup canister can be completely agnostic to the content and the types used in it (as you wanted). You should be able to send at least 2 MB in a single inter-canister message; with stable memory organized in 64 KiB pages, you can ship 32 pages at once.
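A rough sketch of that chunked backup loop, assuming a hypothetical storeChunk endpoint on the backup canister (the canister id is a placeholder):

```motoko
import StableMemory "mo:base/ExperimentalStableMemory";
import Nat64 "mo:base/Nat64";

actor Dapp {
  // Hypothetical backup canister; substitute your own canister id.
  let backup = actor ("rrkah-fqaaa-aaaaa-aaaaq-cai") : actor {
    storeChunk : (pageStart : Nat64, data : Blob) -> async ();
  };

  let pageSize : Nat64 = 65536;   // stable memory pages are 64 KiB
  let pagesPerChunk : Nat64 = 32; // 32 pages = 2 MiB per message

  // Ship every stable-memory page to the backup canister in chunks.
  public func backupAll() : async () {
    var page : Nat64 = 0;
    while (page < StableMemory.size()) {
      let pages = Nat64.min(pagesPerChunk, StableMemory.size() - page);
      let chunk = StableMemory.loadBlob(page * pageSize, Nat64.toNat(pages * pageSize));
      await backup.storeChunk(page, chunk);
      page += pages;
    };
  };
};
```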
Since you mentioned Text: if you have Text, you can convert it to a Blob with Text.encodeUtf8 first and then write it to stable memory with storeBlob.
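For example (a minimal sketch, assuming at least one page has already been grown):

```motoko
import StableMemory "mo:base/ExperimentalStableMemory";
import Text "mo:base/Text";

let offset : Nat64 = 0;
let blob = Text.encodeUtf8("hello");
StableMemory.storeBlob(offset, blob);
// Reading back needs the byte length, so persist blob.size() as well.
let restored : ?Text = Text.decodeUtf8(StableMemory.loadBlob(offset, blob.size()));
```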