I took a snapshot of one canister that had my data and tried loading it into a new, empty canister, and I got an error. I basically wanted to create two canisters with the same data and code. Is this possible, or can snapshots only be loaded back into the canister they were taken from?
It is not possible to load a snapshot taken on another canister. Snapshots are strictly canister-specific, meaning a snapshot from one canister can only be restored into that same canister. The snapshot functions work as follows:
take_canister_snapshot: Creates a snapshot of the canister’s state.
list_canister_snapshots: Lists all available snapshots for the canister.
load_canister_snapshot / delete_canister_snapshot: Loads or deletes a snapshot from the list of available snapshots.
Since you cannot load a snapshot from one canister into another, to have two canisters with the same code and data a developer would need to deploy identical code to both canisters and implement a data export/import or migration process. For instance, the original canister could include functionality to export its state (e.g. as a binary blob) that the new canister could then import during its initialization.
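To make that concrete, here is a minimal sketch of such an export/import pair for a Rust canister using ic-cdk and Candid. The `State` type, the method names, and the lack of access control are illustrative assumptions rather than a reference implementation; a real canister should restrict these methods to a controller and will likely need chunking for anything beyond a few megabytes.

```rust
use std::cell::RefCell;
use std::collections::BTreeMap;

use candid::{CandidType, Deserialize};

// Hypothetical application state; substitute your own data structures.
#[derive(CandidType, Deserialize, Default, Clone)]
struct State {
    entries: BTreeMap<String, String>,
}

thread_local! {
    static STATE: RefCell<State> = RefCell::new(State::default());
}

// Export the whole state as a Candid-encoded blob that can be stored
// off-chain or passed to a freshly deployed canister.
#[ic_cdk::update]
fn export_state() -> Vec<u8> {
    STATE.with(|s| candid::encode_one(&*s.borrow()).expect("state should encode"))
}

// Import a previously exported blob, e.g. right after deploying the new canister.
#[ic_cdk::update]
fn import_state(blob: Vec<u8>) {
    let imported: State = candid::decode_one(&blob).expect("blob should decode");
    STATE.with(|s| *s.borrow_mut() = imported);
}
```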
If snapshots are needed as a backup strategy, each canister should take regular snapshots so that its state can be restored if required.
The feature you want is being worked on, but in the meantime, Maksym’s reply applies.
Thank you for the response. I will now consider other backup implementations.
@maksym Interesting, I also thought snapshots were backups that could be restored to any canister, but apparently, for now, they can only be restored into the canister they were taken from.
For a Rust canister, what general recommendations could you give for saving the data off-chain to a local hard drive? After all, we already have the code.
Thanks,
Joseph
For a Rust canister, what general recommendations could you give for saving the data off-chain to a local hard drive?
Backup strategies depend on your application’s specific needs – there’s no one-size-fits-all solution.
For small datasets, regular full backups or snapshots can work well. Snapshots are quick and efficient, even for large states (for example, a Bitcoin mainnet canister with over 100GB of state can create and restore snapshots in seconds).
For larger datasets stored off-chain, you might need to either accept the transfer time for full backups or implement incremental backups that only send the delta since the last backup. Your choice should take into account the size of the data and the acceptable level of potential data loss.
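To illustrate the incremental idea, here is a hedged sketch that tags every write with a monotonically increasing version number so an off-chain backup client can fetch only the entries changed since its last run. The entry type, method names, and in-memory map are hypothetical placeholders for your actual data model.

```rust
use std::cell::RefCell;
use std::collections::BTreeMap;

use candid::{CandidType, Deserialize};

// Hypothetical record type; `version` records when the entry was last written.
#[derive(CandidType, Deserialize, Clone)]
struct Entry {
    version: u64,
    value: String,
}

thread_local! {
    static ENTRIES: RefCell<BTreeMap<String, Entry>> = RefCell::new(BTreeMap::new());
    static CURRENT_VERSION: RefCell<u64> = RefCell::new(0);
}

#[ic_cdk::update]
fn put(key: String, value: String) {
    // Bump the global version on every write.
    let v = CURRENT_VERSION.with(|c| {
        *c.borrow_mut() += 1;
        *c.borrow()
    });
    ENTRIES.with(|e| e.borrow_mut().insert(key, Entry { version: v, value }));
}

// Return only the entries written after `since_version`, plus the current
// version so the client can persist it as the cursor for the next backup run.
#[ic_cdk::query]
fn export_since(since_version: u64) -> (u64, Vec<(String, Entry)>) {
    let current = CURRENT_VERSION.with(|c| *c.borrow());
    let delta: Vec<(String, Entry)> = ENTRIES.with(|e| {
        e.borrow()
            .iter()
            .filter(|(_, entry)| entry.version > since_version)
            .map(|(k, entry)| (k.clone(), entry.clone()))
            .collect()
    });
    (current, delta)
}
```

Note that this sketch does not track deletions; a real implementation would also need tombstones (or periodic full backups) so that removed keys disappear from the restored copy.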
Thanks for the advice.
What suggestions can you give about implementing the data backup itself?
Snapshots are convenient, but they do not allow recovery from a catastrophic failure, one that requires the app to move to a new canister.
I am guessing that serializing the data structures and transferring them to disk is the answer, but of course this would have to be built. Any advice on that path?
That’s a good generic approach; the details depend on the specifics of the app.
For a robust backup strategy, one should implement an export/import mechanism that serializes the state and writes it to off-chain storage.
Important points to consider: full vs. incremental backups (depending on export time), chunking the data (also to manage export time), and verification and security (to validate the integrity and safety of the data).
To speed up the export, you can use queries to read the data (higher download throughput), but make sure to verify the integrity of the result to protect against a ‘malicious node’ attack.
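For illustration, here is a hedged sketch of that pattern: chunked reads served from query calls for throughput, plus a hash served from an update call, which goes through consensus, so the client can verify the concatenated download. The byte-buffer state, chunk size, and the `sha2` crate dependency are assumptions; certified variables would be an alternative way to authenticate query responses.

```rust
use std::cell::RefCell;

use sha2::{Digest, Sha256}; // assumed dependency: add `sha2` to Cargo.toml

thread_local! {
    // Hypothetical state, kept as a plain byte buffer for brevity.
    static STATE: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

// Stay well below the message size limit per response (a few MiB).
const CHUNK_SIZE: usize = 1_000_000;

// Fast path: chunks are read via query calls, so the backup client gets
// query-level throughput and can fetch chunks in parallel.
#[ic_cdk::query]
fn export_chunk(index: u64) -> Vec<u8> {
    STATE.with(|s| {
        let data = s.borrow();
        let start = (index as usize).saturating_mul(CHUNK_SIZE);
        let end = usize::min(start.saturating_add(CHUNK_SIZE), data.len());
        if start >= data.len() {
            Vec::new()
        } else {
            data[start..end].to_vec()
        }
    })
}

// Integrity check: the hash comes from an update call, which is replicated,
// so a single malicious node cannot forge it. The client hashes the
// concatenated chunks and compares against this value.
#[ic_cdk::update]
fn export_hash() -> Vec<u8> {
    STATE.with(|s| Sha256::digest(&*s.borrow()).to_vec())
}
```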
One example of how different a backup strategy can be is the Bitcoin canister.
In case of disaster recovery (e.g. it got stuck on a wrong branch), it has to start syncing the blockchain from the genesis block, which for mainnet is >882k blocks and >100GB and would likely take weeks. The solution here is to precalculate the state off-chain and install 2 different Wasm modules: (1) an uploader canister that uploads the state into stable memory, and (2) the actual Bitcoin canister Wasm without any uploading mechanism.
With this trick, recovery takes 2 NNS proposals (to install the Wasm modules) and a couple of days to upload the precalculated state, which is much faster than a couple of weeks of syncing from genesis.
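For context, a very rough sketch of that uploader pattern: an update method that writes incoming chunks of the precalculated state at a given offset in stable memory. The method name, the absence of access control, and the exact stable-memory helpers are assumptions (the ic-cdk stable API has changed across versions; this assumes a release where the helpers take 64-bit offsets). Once the upload completes, the real Wasm module is installed and reads the prepared state out of stable memory during its init/post-upgrade hook.

```rust
use ic_cdk::api::stable::{stable_grow, stable_size, stable_write};

const WASM_PAGE_SIZE: u64 = 65_536;

// Receive one chunk of the precalculated state and write it at `offset`
// in stable memory. Access control and chunk bookkeeping are omitted.
#[ic_cdk::update]
fn upload_chunk(offset: u64, chunk: Vec<u8>) {
    let end = offset + chunk.len() as u64;
    let required_pages = (end + WASM_PAGE_SIZE - 1) / WASM_PAGE_SIZE;
    let current_pages = stable_size();
    if required_pages > current_pages {
        stable_grow(required_pages - current_pages).expect("failed to grow stable memory");
    }
    stable_write(offset, &chunk);
}
```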
Thanks for the suggestion.
This does look like several months of coding. Are there any plans for DFINITY to build this?
In the meantime I appreciate your advice, and will eventually implement this mechanism.
The difficulty involved also makes the snapshot functionality way more valuable! Thanks for doing that work.
Have a great weekend!
Joseph
As Michael said:
The feature you want is being worked on…
We are working on the feature for downloading/uploading snapshots.
Fantastic news, this is encouraging!