// Assumes: import Error "mo:base/Error";
//          import Cycles "mo:base/ExperimentalCycles";
try {
  switch (current_archive) {
    case (null) { throw Error.reject("No archive canisters") };
    case (?archive) { await archive.save(data) };
  };
} catch (e) {
  // Whatever the error (storage full?), just create a new canister,
  // because the data insertion must happen no matter what.
  Cycles.add(1_000_000_000_000);
  let new_archive = await Archive.Canister(data);
  current_archive := ?new_archive;
};
If the archive is truly full, then yes, this is fine. But I would suggest that you don't blindly create new archives on any error. Instead, let the save function in the archive return an ArchiveFullError, and only in that case create a new archive. Since you create the archive with 'only' 1T cycles, I would guess the most likely reason for a store to fail is that the archive runs out of cycles and gets frozen.
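For illustration, a save that reports fullness explicitly could look roughly like this. This is only a sketch: SaveError, #ArchiveFull, and isFull() are made-up names, not the author's code or a base-library API; Data, archive, current_archive, and Archive.Canister come from the snippet above.

// Sketch only. Assumes: import Result "mo:base/Result";
//                       import Error "mo:base/Error";
//                       import Cycles "mo:base/ExperimentalCycles";
public type SaveError = { #ArchiveFull; #Other : Text };

// In the archive canister: report fullness instead of trapping.
public func save(data : Data) : async Result.Result<(), SaveError> {
  if (isFull()) { return #err(#ArchiveFull) }; // isFull() is a hypothetical capacity check
  // ... store data ...
  #ok(())
};

// In the calling canister: only roll over to a new archive on #ArchiveFull.
switch (await archive.save(data)) {
  case (#ok(_)) {};
  case (#err(#ArchiveFull)) {
    Cycles.add(1_000_000_000_000);
    let new_archive = await Archive.Canister(data);
    current_archive := ?new_archive;
  };
  case (#err(#Other(msg))) { throw Error.reject(msg) };
};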
Yeah, I would like to return an #err(#InsufficientStorage), but as per our previous convo, there's no way to figure out what percentage of the storage is being used unless I hardcode the limit. So the code above is just me trying to design a flow without knowing the canister's max storage limit. Let me know if this is a fool's errand and hardcoding the limit is just better.
I see. Yeah, the 1T is just an example since I'm developing on a local env, but thanks for the heads up. Though even if I had created the old canister with 5T, for example, once it runs out of cycles or gets frozen I would still need to create a new canister to store the new data, because, like I said above, data insertion must not stop no matter what.
I think so, at least for now. I would hard-code 350 GiB, and if you get even remotely close to that limit, you can start thinking about it again.
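For what it's worth, a minimal sketch of that hard-coded check, assuming the archived data goes into stable memory (ExperimentalStableMemory.size() returns the number of 64 KiB pages currently allocated); if the data lives on the heap instead, Prim.rts_memory_size() in bytes would be the figure to compare. MAX_PAGES and isFull are illustrative names.

// Sketch only.
import StableMemory "mo:base/ExperimentalStableMemory";

// Hard-coded cap: 350 GiB expressed in 64 KiB stable-memory pages.
let MAX_PAGES : Nat64 = 350 * 1024 * 1024 / 64; // 5_734_400 pages

// The hypothetical capacity check used by save() in the sketch above.
func isFull() : Bool {
  StableMemory.size() >= MAX_PAGES
};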