Deleted snapshot memory's effect on reserved cycles

I recently created a snapshot, and saw my reserved cycles usage rise by an expected amount.

However, after deleting this snapshot, my canister still retains the same reserved cycles usage (counting towards my reserved cycles limit).

Is this the intended behavior?

Yeah, that’s how the reservation mechanism works. Even if you only use the memory briefly, the cycles stay reserved so that you can’t cheaply block the subnet for short time intervals.

4 Likes

Hey @abk, just ran into this again. It seems that when we delete a snapshot and then create a new one, instead of reusing the snapshot memory that was already reserved and paid for, the canister reserves brand new cycles for the new snapshot.

Here’s the series of events:

  1. Take snapshot (reserved cycles are utilized)
  2. Delete snapshot (cycles stay reserved, as per mechanism)
  3. Several weeks later, take a new snapshot (a brand new reservation is requested for the entire snapshot, instead of reusing the existing, already paid-for memory; see the sketch below).
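
For reference, here is roughly the sequence we run, sketched with dfx commands. This assumes a recent dfx that has the `dfx canister snapshot` subcommands; exact flags and output may differ by version, and `my_canister` and the snapshot ID are placeholders.

```bash
# 1. Take a snapshot (reserved cycles are charged for the snapshot memory)
dfx canister snapshot create my_canister --network ic

# 2. Delete the snapshot (the cycles stay reserved, as per the mechanism)
dfx canister snapshot delete my_canister <snapshot-id> --network ic

# 3. Weeks later, take a new snapshot (a brand new reservation is made,
#    rather than reusing the reservation from the deleted snapshot)
dfx canister snapshot create my_canister --network ic
```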

We can see that our reserved cycles requirement has been going up steadily, even though our application memory usage has barely increased. Our canister memory (without snapshots) has stayed around ~500MB, and the other ~500MB is from snapshots that we’ve since deleted. Over the past few months, our reserved cycles metric has climbed sharply with each new snapshot, from 4T to 12T to 22T, and now to 32T with the latest snapshot, even though our canister memory footprint hasn’t grown and we delete each snapshot before creating a new one.
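
For context, this is roughly how we watch that metric. It assumes a recent dfx whose `dfx canister status` output reports the reserved cycles and the reserved cycles limit (the same numbers ultimately come from the management canister’s `canister_status` call); the exact labels vary by dfx version.

```bash
# Check the canister's reserved cycles and reserved cycles limit
# (the label names in the status output vary by dfx version)
dfx canister status my_canister --network ic | grep -i reserved
```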

Reserving the cycles up front already makes snapshots more costly, so why can’t a canister reuse the snapshot memory it has already reserved?

Or does the implementation simply not allow the reserved cycles behind snapshot memory to be reused?

1 Like

You were pointed to the replace_snapshot option of take_canister_snapshot, right?

1 Like

Yes, thanks for following up! I was pointed to this late last week. Here’s my understanding of what was happening (hopefully this is helpful to others who run into this issue).

Previously, I was deleting the snapshot and then creating a new one as a separate action afterwards. I learned from @dsarlis that when delete and create happen as two separate (non-atomic) actions, the reserved cycles from the previous, now-deleted snapshot are effectively lost, and the new snapshot requires a completely new payment of reserved cycles. This DX could be improved, but doing so adds complexity, and for that reason it hasn’t been prioritized for snapshots.

However, if I don’t delete the snapshot first, and instead call take_canister_snapshot with the replace_snapshot parameter (the --replace option in dfx), I’ve been told the reserved cycles aren’t lost: the new snapshot reuses the previous snapshot’s reserved cycles, +/- the size difference between the new snapshot and the one it replaced. This works because the replace operation is atomic, which makes the memory allocation accounting much simpler.
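
To make that concrete, here is a hedged sketch of the replace-based flow we’ve switched to, using the --replace option mentioned above. Again, `my_canister` and the snapshot ID are placeholders, and flag spellings may differ slightly across dfx versions.

```bash
# List existing snapshots to find the ID of the one to replace
dfx canister snapshot list my_canister --network ic

# Atomically replace the old snapshot with a new one.
# The old snapshot's reserved cycles are reused, +/- the size difference,
# instead of a brand new reservation being made.
dfx canister snapshot create my_canister --replace <snapshot-id> --network ic
```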

3 Likes