Scalable Backup/Restoration Best Practices for Motoko?

Are there any upcoming scalable backup/restoration solutions, or does one already exist?
I'm asking because, as we all know, preupgrade and postupgrade are subject to an instruction limit, and stable memory has a size limit.
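For context, this is the classic pattern those hooks are used for: serialize the heap structure into a stable variable on the way down and rebuild it on the way up. A minimal mo:base sketch (the names are illustrative):

```motoko
import HashMap "mo:base/HashMap";
import Iter "mo:base/Iter";
import Text "mo:base/Text";

actor {
  // Live heap structure (not stable on its own).
  var map = HashMap.HashMap<Text, Nat>(0, Text.equal, Text.hash);

  // Serialized copy that survives upgrades.
  stable var entries : [(Text, Nat)] = [];

  system func preupgrade() {
    // This runs in a single message: with enough entries it can
    // exceed the upgrade instruction limit and block the upgrade.
    entries := Iter.toArray(map.entries());
  };

  system func postupgrade() {
    map := HashMap.fromIter<Text, Nat>(
      entries.vals(), entries.size(), Text.equal, Text.hash
    );
    entries := [];
  };
}
```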

For new devs who don't know:
https://wiki.internetcomputer.org/wiki/Current_limitations_of_the_Internet_Computer
https://www.joachim-breitner.de/blog/788-How_to_audit_an_Internet_Computer_canister

Some cool stuff is coming, for sure.

In the meantime, look at this migration pattern. You might consider applying it at the object level via an accessor function and upgrading objects only as needed (see the sketch below): GitHub - ZhenyaUsenko/motoko-migrations: Sample project structure to implement migrations in motoko.
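Roughly, the object-level variant of that idea could look like this (a minimal sketch, not the repo's actual structure; the Order types and variant tags are hypothetical): each record is stored as a versioned variant, and the accessor migrates it lazily on first touch, so migration cost is spread across ordinary calls instead of a single upgrade.

```motoko
actor {
  // Hypothetical record versions: old objects remain #v1 in storage
  // until the first time they are accessed.
  type OrderV1 = { id : Nat; total : Nat };
  type OrderV2 = { id : Nat; total : Nat; currency : Text };
  type VersionedOrder = { #v1 : OrderV1; #v2 : OrderV2 };

  stable var store : [var VersionedOrder] = [var];

  // Accessor: callers only ever see the latest shape, and each
  // object is migrated at most once.
  func getOrder(i : Nat) : OrderV2 {
    switch (store[i]) {
      case (#v2 order) order;
      case (#v1 order) {
        // Upgrade the object in place on first access.
        let migrated : OrderV2 = {
          id = order.id;
          total = order.total;
          currency = "ICP";
        };
        store[i] := #v2 migrated;
        migrated
      };
    };
  };
}
```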


Hi @skilesare,
I noticed a developer wrote the motoko-hash-map repo. Does that mean that if I use it, I don't have to write pre/postupgrade()? Since it's a stable HashMap for Motoko, can I restart/upgrade the canister and the data in the hashmaps/sets will still be there?

Also, can I use its Set inside its Map, like Map<Principal, Set<Nat64>>?
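For reference, that nesting could look something like this (a minimal sketch assuming the package's v8-style API, where hash utilities such as phash and nhash are passed per call; check the repo's README for the exact signatures, and note a Set<Nat64> would need the matching Nat64 hash utility, so Nat is used here):

```motoko
import Map "mo:map/Map";
import Set "mo:map/Set";
import { phash; nhash } "mo:map/Map";

actor {
  // Declared stable directly: no pre/postupgrade hooks needed.
  stable let subscriptions = Map.new<Principal, Set.Set<Nat>>();

  public shared func subscribe(user : Principal, topic : Nat) : async () {
    // Fetch the user's set, creating it on first use.
    let topics = switch (Map.get(subscriptions, phash, user)) {
      case (?set) set;
      case null {
        let set = Set.new<Nat>();
        Map.set(subscriptions, phash, user, set);
        set
      };
    };
    Set.add(topics, nhash, topic);
  };
}
```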

===

Anyway, I'm thinking about adding a public shared func stop() that only the developer/admin can call, so that all update calls past that point return a #Stopped error. Then, once all in-flight calls have finished, the dev can call a public shared func backup() : Text that returns the state as JSON, in chunks or something. After the backup is done, restart the canister and call a public shared func restore() in chunks. What do you guys think?
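Something like this, for concreteness (a minimal sketch; all names here — Backup, addItem, the chunked backup/restore signatures — are hypothetical, and real state would need proper serialization rather than a plain [Nat] array):

```motoko
import Array "mo:base/Array";
import Nat "mo:base/Nat";

// Hypothetical canister: the admin principal is supplied at install time.
actor class Backup(admin : Principal) {
  stable var halted : Bool = false;
  stable var data : [Nat] = []; // stand-in for real state

  public shared ({ caller }) func stop() : async () {
    assert caller == admin;
    halted := true; // update calls below now refuse new work
  };

  // Example update call guarded by the halt flag.
  public shared func addItem(n : Nat) : async { #ok; #Stopped } {
    if (halted) return #Stopped;
    data := Array.append(data, [n]); // use a Buffer in real code
    #ok
  };

  // Stream the state out in fixed-size chunks so each response
  // stays under the per-message size limit.
  public shared query ({ caller }) func backup(offset : Nat, len : Nat) : async [Nat] {
    assert caller == admin;
    if (offset >= data.size()) return [];
    let end = Nat.min(offset + len, data.size());
    Array.tabulate<Nat>(end - offset, func(i : Nat) : Nat { data[offset + i] })
  };

  // After restarting/reinstalling, feed the chunks back in order.
  public shared ({ caller }) func restore(chunk : [Nat]) : async () {
    assert caller == admin;
    data := Array.append(data, chunk);
  };
};
```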

On your first part: yes, it can be used as a stable variable!

Second: sure!

Third: this is basically what we do for the Origyn governance and the Origyn NFT. If you check out the repo, you will see a halt variable and a backup function: GitHub - ORIGYN-SA/origyn_nft: The Public Facing Origyn NFT Reference Implementation
