Intermittent, unusual increases in cycles balance when `canister_status` is called

While monitoring the cycles balances of canisters with relatively low burn rates, we’ve noticed a few unusual cases where the cycles balance returned by `canister_status` intermittently and temporarily spikes (increases) by up to roughly 140 billion cycles.

Here is a chart of one of these canisters’ balances over a period of roughly one month. A data point is collected once every 6 hours.
[chart image: cycles balance of the monitoring canister over ~1 month]

The three increases occur at:

{ timestamp: 1705107635371060626n, cycles: 7387020354216n } increase/spike of ~70 billion
{ timestamp: 1706079629968944994n, cycles: 7326519789119n } increase/spike of ~50 billion
{ timestamp: 1706706028101721978n, cycles: 7391645817979n } increase/spike of ~140 billion

Here is the data that produced this chart, with the timestamp in nanoseconds and the corresponding cycles balance:

[
  { timestamp: 1704826830565978405n, cycles: 7330519194317n },
  { timestamp: 1704848430511420082n, cycles: 7329639761926n },
  { timestamp: 1704870031181344569n, cycles: 7328733476652n },
  { timestamp: 1704891631219955638n, cycles: 7327844990938n },
  { timestamp: 1704913242105106075n, cycles: 7326938401619n },
  { timestamp: 1704934830350820021n, cycles: 7326051089895n },
  { timestamp: 1704956435176921816n, cycles: 7325183225006n },
  { timestamp: 1704978030607479972n, cycles: 7324307401286n },
  { timestamp: 1704999630737234957n, cycles: 7323419429608n },
  { timestamp: 1705010849711557780n, cycles: 7322537024391n },
  { timestamp: 1705021229217004545n, cycles: 7321651365247n },
  { timestamp: 1705042838343242611n, cycles: 7320745730142n },
  { timestamp: 1705045173447046956n, cycles: 7319889260160n },
  { timestamp: 1705050809944683072n, cycles: 7319007362706n },
  { timestamp: 1705064428958065704n, cycles: 7318112058478n },
  { timestamp: 1705086045564881399n, cycles: 7317239031164n },
  { timestamp: 1705107635371060626n, cycles: 7387020354216n }, // increase/spike
  { timestamp: 1705129229283627807n, cycles: 7315487038184n },
  { timestamp: 1705150834019949865n, cycles: 7314639411213n },
  { timestamp: 1705172430570821359n, cycles: 7313769863397n },
  { timestamp: 1705194031811091569n, cycles: 7312883475634n },
  { timestamp: 1705215633259316082n, cycles: 7312008092550n },
  { timestamp: 1705237231157906180n, cycles: 7311111969031n },
  { timestamp: 1705258833245087036n, cycles: 7310242699749n },
  { timestamp: 1705280438599777882n, cycles: 7309354010953n },
  { timestamp: 1705302029825950469n, cycles: 7308473307704n },
  { timestamp: 1705323633155722331n, cycles: 7307589190656n },
  { timestamp: 1705345231713724722n, cycles: 7306691255154n },
  { timestamp: 1705366830643485943n, cycles: 7305810101875n },
  { timestamp: 1705388431379851082n, cycles: 7304913739603n },
  { timestamp: 1705410032047441536n, cycles: 7304013618496n },
  { timestamp: 1705431632098178707n, cycles: 7303114057359n },
  { timestamp: 1705453228123632012n, cycles: 7302204386133n },
  { timestamp: 1705474832692446454n, cycles: 7301325700724n },
  { timestamp: 1705496430907435058n, cycles: 7300419758014n },
  { timestamp: 1705518030137897215n, cycles: 7299513972834n },
  { timestamp: 1705539631754620035n, cycles: 7298628774255n },
  { timestamp: 1705561231954960191n, cycles: 7297731870979n },
  { timestamp: 1705582830169468595n, cycles: 7296812446044n },
  { timestamp: 1705604429422243231n, cycles: 7295901323575n },
  { timestamp: 1705626031365465624n, cycles: 7294982177308n },
  { timestamp: 1705647631179951086n, cycles: 7294072359899n },
  { timestamp: 1705669252490629119n, cycles: 7293152995488n },
  { timestamp: 1705690833189675560n, cycles: 7292263788969n },
  { timestamp: 1705712429187384728n, cycles: 7291353357198n },
  { timestamp: 1705734030450355452n, cycles: 7290442320904n },
  { timestamp: 1705755633683191851n, cycles: 7289582956741n },
  { timestamp: 1705777227457989705n, cycles: 7288670133565n },
  { timestamp: 1705798827959446470n, cycles: 7287759597496n },
  { timestamp: 1705820429973253728n, cycles: 7286857012216n },
  { timestamp: 1705842029463660705n, cycles: 7285941502196n },
  { timestamp: 1705863630000561513n, cycles: 7285029315231n },
  { timestamp: 1705885229588384282n, cycles: 7284130990062n },
  { timestamp: 1705906847095061549n, cycles: 7283210508033n },
  { timestamp: 1705928429820475892n, cycles: 7282310035505n },
  { timestamp: 1705950031850770541n, cycles: 7281406903902n },
  { timestamp: 1705971627925922593n, cycles: 7280543761634n },
  { timestamp: 1705993231273498820n, cycles: 7279671842349n },
  { timestamp: 1706014830559760841n, cycles: 7278760783235n },
  { timestamp: 1706036428402233832n, cycles: 7277849194587n },
  { timestamp: 1706058029626569182n, cycles: 7276947893000n },
  { timestamp: 1706079629968944994n, cycles: 7326519789119n }, // increase/spike
  { timestamp: 1706101228639767676n, cycles: 7275152545839n },
  { timestamp: 1706122830653500541n, cycles: 7274277715895n },
  { timestamp: 1706144440596576718n, cycles: 7273382459630n },
  { timestamp: 1706166027757156755n, cycles: 7272543609786n },
  { timestamp: 1706187631813284892n, cycles: 7271640478156n },
  { timestamp: 1706209232787961583n, cycles: 7270743858706n },
  { timestamp: 1706230829818690393n, cycles: 7269829645455n },
  { timestamp: 1706252429220751510n, cycles: 7268936009399n },
  { timestamp: 1706274029921950020n, cycles: 7268041628465n },
  { timestamp: 1706295629627607476n, cycles: 7267117894298n },
  { timestamp: 1706317228639488580n, cycles: 7266203175596n },
  { timestamp: 1706338828184569995n, cycles: 7265318175906n },
  { timestamp: 1706360429107511959n, cycles: 7264424161952n },
  { timestamp: 1706382030408936764n, cycles: 7263544717278n },
  { timestamp: 1706403628693374576n, cycles: 7262674818774n },
  { timestamp: 1706425229248927979n, cycles: 7261793185399n },
  { timestamp: 1706446829691123066n, cycles: 7260947040235n },
  { timestamp: 1706468427771968621n, cycles: 7260059411392n },
  { timestamp: 1706490029749512991n, cycles: 7259172248395n },
  { timestamp: 1706511628944554657n, cycles: 7258296096511n },
  { timestamp: 1706533235329168064n, cycles: 7257406078881n },
  { timestamp: 1706554831112643136n, cycles: 7256472933751n },
  { timestamp: 1706576429646499566n, cycles: 7255562768115n },
  { timestamp: 1706598031981631057n, cycles: 7254677881473n },
  { timestamp: 1706619629020480420n, cycles: 7253796635606n },
  { timestamp: 1706641231019031071n, cycles: 7252924145571n },
  { timestamp: 1706662836043040673n, cycles: 7252039749886n },
  { timestamp: 1706684430963244639n, cycles: 7251145731241n },
  { timestamp: 1706706028101721978n, cycles: 7391645817979n }, // increase/spike
  { timestamp: 1706727628880405970n, cycles: 7249367795524n },
  { timestamp: 1706749233406383108n, cycles: 7248419371298n },
  { timestamp: 1706770828593034142n, cycles: 7247465979284n },
  { timestamp: 1706792432874953980n, cycles: 7246517514372n },
  { timestamp: 1706814029275642313n, cycles: 7245568487490n },
  { timestamp: 1706835629582252821n, cycles: 7244662722737n },
  { timestamp: 1706857229648973232n, cycles: 7243741870300n },
  { timestamp: 1706878829583311575n, cycles: 7242775387824n },
  { timestamp: 1706900431672572063n, cycles: 7241829417243n }
]

I am fairly confident that no one is topping up this canister or any of the others, especially since the cycles balance reverts to the projected burn line after each intermittent spike.

Is there a reason that might explain why the cycles balance returned from `canister_status` sometimes spikes like this?


Outstanding calls reserve cycles so that the response is guaranteed to be processable. When you then process the response and don’t make any further calls, the reserved cycles are released. When that happens, the available cycles increase again. If you have multiple awaits waiting for responses that you then process at almost the same time, the balance can jump by quite a bit.
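
For intuition, here is a minimal sketch (TypeScript, with made-up numbers, not the protocol’s actual bookkeeping) of how releasing several reservations at nearly the same time shows up as a jump in the reported balance:

```typescript
// Toy model (purely illustrative numbers) of how per-call reservations move the
// balance that canister_status would report: cycles are set aside when a call is
// made, and the unused part is refunded once the response has been processed.

const MAX_RESPONSE_COST = 8_000_000_000n; // hypothetical worst-case reservation per call

let balance = 7_300_000_000_000n;

function makeCall(): void {
  balance -= MAX_RESPONSE_COST; // reserve the worst-case response-processing cost
}

function processResponse(actualCost: bigint): void {
  balance += MAX_RESPONSE_COST - actualCost; // refund whatever was not used
}

// With 5 calls outstanding, the reported balance is deflated by ~40B cycles.
for (let i = 0; i < 5; i++) makeCall();
console.log(balance); // 7260000000000n

// Their responses are then processed back-to-back: most of each reservation is
// refunded, so the balance jumps back up by ~35B cycles almost at once.
for (let i = 0; i < 5; i++) processResponse(1_000_000_000n);
console.log(balance); // 7295000000000n
```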


When the messages reserve cycles, are those cycles temporarily debited from the calling canister and credited to the receiving canister?

In this case, the `canister_status` API is called, which proxies its request through the management “canister”. So are these temporary spikes in reserved cycles coming from the canister that is making the calls to the management canister?

Canister A —call with cycles—> management canister —forwards cycles to—> Canister B

Then, after the call, are the cycles returned back along the chain?

No, the cycles are not sent along with the call. They are only used to pre-pay for processing the response you receive. The callee canister pays for its own execution. If you attach cycles to the call you make, then that’s a separate charge.

Assuming you don’t explicitly send cycles, it would look roughly like this:

  • canister A gets called
  • before execution, A gets charged the maximum amount of cycles the call could consume (20B instructions times 0.4 cycles per instruction is 8B cycles)
  • the call gets executed, using, say, 2.5B instructions (which costs 1B cycles)
  • A receives a 7B refund (8B max cost minus the 1B that was used)
  • as a result of the call, A tries to call the management canister. To make that call, the cycles for processing the response need to be pre-paid, so A gets charged 8B cycles
  • the charge can be paid, so the call actually happens
  • what happens on the callee’s side now does not matter
  • A receives the response
  • processing the response is pre-paid, so there is no charge here
  • processing the response uses 5B instructions (so a 2B cycles cost)
  • A receives a 6B refund (8B max cost minus the 2B actual cost)
  • the response gets returned to whatever called A
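
Tracing those numbers in code, here is a rough sketch (same hypothetical figures as above, not the protocol’s exact accounting) of how canister A’s balance moves through that sequence:

```typescript
// Trace of the hypothetical charge/refund sequence above
// (20B instruction limit at 0.4 cycles/instruction => 8B cycles worst case per message).
const MAX_COST = 8_000_000_000n;      // worst-case cost charged/reserved up front
const CALL_COST = 1_000_000_000n;     // 2.5B instructions actually used by the call
const RESPONSE_COST = 2_000_000_000n; // 5B instructions used to process the response

let balance = 100_000_000_000n;       // arbitrary starting balance for canister A

balance -= MAX_COST;                  // A gets called: charged the worst case up front
balance += MAX_COST - CALL_COST;      // refund after execution: +7B

balance -= MAX_COST;                  // A calls the management canister: response pre-paid
// ...while this call is in flight, a balance snapshot would show the deflated value
balance += MAX_COST - RESPONSE_COST;  // response processed: +6B refunded

console.log(100_000_000_000n - balance); // net cost: 3000000000n (1B + 2B)
```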

My best guess is that the spikes happen when you are suddenly making fewer calls to other canisters. Is that a plausible explanation?


Makes sense: canister A’s balance temporarily dips to cover the worst-case cost of the call and is then refunded afterwards.

What’s happening in this specific case is a bit different, though. Here, canister A is a controller of canister B and is monitoring canister B through the `canister_status` API of the management canister. No cycles are sent to canister B during the monitoring period.

The spikes occur on a few different canisters; as shown in the chart, they are intermittent and happen at different times for different canisters.

The canister with the most evident spikes (shown at the top of this thread) is actually our blackhole canister responsible for monitoring. While it has several calls in flight to the management canister, it simultaneously sends out a call to check its own status and cycles balance through the management canister.

So it looks like this:

  1. Canister A —inter-canister call to kick off monitoring—> monitoring canister

  2. Monitoring canister —100s of parallel canister_status calls with other canister ids + own canister id—> management canister

  3. Management canister —checks status—> all canister ids (including the monitoring canister)

Sometimes, when the status of our monitoring canister is returned, the cycles balance data point falls right in line (monotonically decreasing). Given the scenario you’ve described, I’m not sure how, while making 100s of inter-canister calls, the monitoring canister could be credited cycles and end up with a balance above its resting balance from 6 hours prior.
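
For scale, here is a back-of-the-envelope sketch (reusing the hypothetical 8B per-call reservation from above) of how much in-flight reservations alone could move the balance reported at the moment the snapshot is taken:

```typescript
// Back-of-the-envelope: how much the reported balance could vary with the number of
// canister_status calls still outstanding when the monitoring canister's own status
// is read. All figures are hypothetical.
const RESERVATION_PER_CALL = 8_000_000_000n; // assumed worst-case reservation per call

const reservationFreeBalance = 7_390_000_000_000n; // reported value with nothing in flight

for (const inFlight of [0n, 5n, 10n, 100n]) {
  const reported = reservationFreeBalance - RESERVATION_PER_CALL * inFlight;
  console.log(`${inFlight} calls in flight -> reported ${reported}`);
}
// 0 in flight   -> 7_390_000_000_000 reported
// 5 in flight   -> 7_350_000_000_000
// 10 in flight  -> 7_310_000_000_000
// 100 in flight -> 6_590_000_000_000
```

So the variation from reservations could plausibly be on the order of tens of billions of cycles, which is in the right ballpark for the spike sizes, though it’s still not obvious to me why the reading would land above the resting balance from 6 hours earlier.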

Maybe @dsarlis has some ideas?

If you’re still not convinced it’s because of the refund logic, you’ll need to share the code of the canisters so we can take a look and point out exactly how the refunds can happen in a way that produces the behaviour you’re seeing.


@icme It would also be helpful in searching for an explanation of this behaviour if you could give us the canister ID.


Invited you to an investigation DM chain. :slightly_smiling_face: