Cycle burn rate heartbeat


Can somebody explain why the heartbeat consumes so many cycles? Even when the function body is empty, it burns through 3_000_000 cycles on every beat.

Just an explainer of what I am trying to do and how I set it up:

I am trying to set up a self-sustaining canister system where each canister can request cycles from a management canister when its current cycle balance drops below a certain threshold.

I first implemented a heartbeat in every canister, but after seeing how many cycles they burned, I moved all the logic over to the management canister.

In the management canister I loop over all the canisters and make an inter-canister call to see if each one needs to be topped up. These inter-canister calls consume about 1_000_000_000 cycles each, and the check runs every 24 hours.

async fn check_canister_cycles() {
  let day: u64 = 24 * 60 * 60 * 1_000_000_000; // 24 h in nanoseconds
  if time() > last_triggered_time + day {
    // do the inter-canister calls
  }
}

So I’m kind of trying to find the sweet spot. When managing the heartbeat in the management canister, the inter-canister calls consume a lot of cycles, and that cost grows with every canister that is added dynamically. But when doing it in the canisters themselves, each one burns 3_000_000 cycles on every beat, yet doesn’t require an inter-canister call until it reaches the threshold.
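To put rough numbers on that trade-off, here is a back-of-envelope sketch in plain Rust (no IC dependencies). It assumes roughly one heartbeat per second, which is only an approximation of the real block rate, and uses the figures quoted above (3_000_000 cycles per empty beat, ~1_000_000_000 per inter-canister check):

```rust
/// Cycles burned per day by one canister running its own heartbeat,
/// assuming roughly one beat per second (an approximation).
fn heartbeat_cycles_per_day(cycles_per_beat: u64) -> u64 {
    24 * 60 * 60 * cycles_per_beat
}

fn main() {
    let own_heartbeat = heartbeat_cycles_per_day(3_000_000); // ~0.26 TC/day
    let daily_check: u64 = 1_000_000_000; // one inter-canister check per day

    // Per added canister, one daily check is far cheaper than giving
    // each canister its own (empty) heartbeat.
    println!("own heartbeat: {own_heartbeat} cycles/day");
    println!("daily check:   {daily_check} cycles/day");
}
```

Under these assumptions the marginal cost of one more canister in the management-canister design is just the daily check; the per-canister heartbeat design pays roughly 260x more per day per canister, while the management design pays its own heartbeat as a fixed cost.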

Is there maybe another (better) way to handle this? Maybe something that gets triggered once a call fails because the canister doesn’t have enough cycles to handle the request?


I wonder if the heartbeat needs to consume so many cycles, or whether it is a side effect of the particular implementation. I’ve heard this complaint about high cycle consumption quite a number of times.

@PaulLiu, do you happen to know? (Sorry for the ping, but I believe you may have implemented the heartbeat in Motoko. Can you tag the person who implemented the heartbeat in the replica?)


I wonder if some kind of “manual trigger” mechanism, instead of heartbeat, would be a viable workaround for apps that don’t need checks so often but still need occasional “cron-like” work done inside the canister.
i.e.: have a method that does not trigger on a heartbeat, but instead is a regular method that only allows a certain caller (an admin, for example) to trigger it, and then does the same thing a heartbeat would. It could then be called manually by whoever needs to, however often it needs to be called, or some kind of web2 app could be created just for the sake of calling the canister method that acts as a heartbeat every so often?
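As a sketch of that idea, the guard can be as simple as comparing the caller against a stored admin before running the job. This is plain Rust with the caller modeled as a string; in a real canister the caller would come from the runtime (e.g. ic_cdk::caller() in Rust canisters) and the admin would be a stored Principal:

```rust
/// Hypothetical guard for a manually triggered "heartbeat" method:
/// the job only runs when the caller matches the configured admin.
fn may_trigger(caller: &str, admin: &str) -> bool {
    caller == admin
}

fn main() {
    // Illustrative principal text, not a real admin.
    let admin = "aaaaa-aa";
    if may_trigger("aaaaa-aa", admin) {
        // ... do whatever the heartbeat body would have done ...
        println!("job executed");
    }
}
```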


Yeah I’m thinking of going that route if necessary. I would prefer to have everything done on the Internet Computer though…

Yeah, creating a web worker on DigitalOcean for example also crossed my mind, but as @jzxchiang mentioned, I would rather do it all in a decentralized manner on the IC.

It would be nice if we could trigger the heartbeat call only every x beats, for example. That would drastically reduce the cycles spent. And functionality like checking and updating cycles isn’t really bound to precise timing.

so something like
#[heartbeat(trigger_at = 60)]

Tbh I see no issue with “not going the IC way” for this specific thing, as it does not in any way defeat the purpose of the blockchain in the first place. The things that should be decentralized should be added to the blockchain, but things like these, imo, are not that important, as they are not the source of truth or the data itself that needs to live on-chain and be 100% legit and validated by nodes.

There is a way to trigger something only every so many beats, in a way… You can find an example that does exactly that in the documentation. I will paste the code here for reference.

import Debug "mo:base/Debug";

actor Alarm {

  let n = 5;
  var count = 0;

  public shared func ring() : async () {
    Debug.print("Ring!");
  };

  system func heartbeat() : async () {
    if (count % n == 0) {
      await ring();
    };
    count += 1;
  };
}
This code would run ring() once every 5 heartbeats, i.e. whenever the counter is a multiple of 5.


Yeah, your example is essentially the same as the one in the message that started this thread, only I used time instead of a counter to avoid updating state on every beat. But this still triggers the heartbeat and burns cycles.

So to take your example

system func heartbeat() : async () {}

This empty function still burns cycles every time the heartbeat is triggered.

So the counter would need to sit at a higher level than the heartbeat itself.


Oh, I see 🙂 It makes no sense to burn cycles every time if the beat isn’t used at all. Yeah… I guess the only way around this right now would be to do what I mentioned earlier and go around heartbeat completely.

Just waking up your canister every second is work, actually a lot of work. It’s not that heartbeat charges more than anything else: try sending your canister one message every block, and you’ll see that’s not cheap either!

For more flexible scheduling, you could use something like IC Cron. I’m not sure if @senior.joinu (author of IC Cron) is offering a deployed IC Cron canister as a public service for everyone. But as a community, if enough people find it useful, you can collectively fund such a public service (e.g. using tip jar) in an open and transparent way that benefits everyone.


If you have occasional update calls to your canister, you could also check your balance during these calls, and call your management canister to top up if your balance is below a threshold.
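A minimal sketch of that pattern, with the balance passed in as a plain number. On the IC the balance would come from something like ic_cdk::api::canister_balance128(), and the threshold here is an arbitrary example value:

```rust
/// Arbitrary example threshold: 2 TC.
const TOP_UP_THRESHOLD: u128 = 2_000_000_000_000;

/// Call this at the top of every ordinary update method; when it returns
/// true, fire one inter-canister call to the management canister asking
/// for a top-up. No heartbeat needed.
fn needs_top_up(balance: u128) -> bool {
    balance < TOP_UP_THRESHOLD
}

fn main() {
    // 1.5 TC left: below the example threshold, so request a top-up.
    println!("{}", needs_top_up(1_500_000_000_000));
}
```

The check costs almost nothing because it piggybacks on traffic the canister is already handling; the trade-off is that a canister with no incoming update calls never checks itself.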

This could be a useful mixed infrastructure service, i.e. part legacy and part IC, that canisters on the IC could use.

The service consists of a canister that can be paid to watch the balance and top up other canisters and a simple cloud function that checks the cycles of the registered canisters.

It definitely takes some work, but the proportions just seem off.

0.5 TC a day works out to almost $0.70 per day, which is roughly $21 per month or $252 per year.

That means heartbeat (a simple version) is 50x more expensive than storing 1 GB of data for a year on the IC.
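For reference, the arithmetic behind those figures, assuming 1 TC ≈ 1 XDR ≈ $1.38 (the rate fluctuates) and roughly $5 per GB-year of storage; both are approximations:

```rust
/// Yearly USD cost of a constant cycle burn, given TC burned per day
/// and an assumed USD price per TC.
fn yearly_usd(tc_per_day: f64, usd_per_tc: f64) -> f64 {
    tc_per_day * usd_per_tc * 365.0
}

fn main() {
    let heartbeat_year = yearly_usd(0.5, 1.38); // roughly $252
    let storage_gb_year = 5.0;                  // assumed storage price
    println!("heartbeat/year: ${heartbeat_year:.0}");
    println!("ratio vs 1 GB-year: {:.0}x", heartbeat_year / storage_gb_year);
}
```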

I definitely agree that a public heartbeat canister that anyone can subscribe to and whose cycle costs are split among its subscribers would be very useful, but I don’t think the community will be able to organize around such a service in the near future…


Yeah, I also thought of this, but the issue is that my implementation is a self-scaling storage solution. For example:
there is a storage canister; once it reaches a certain memory threshold, it locks itself against any further update calls and spins up a “brother” canister. So eventually the update calls will stop once the canister is full.

I think the only way your solution would work is to make every query call an update call. Or can a query call internally make an update call? 🤔
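For what it’s worth, the lock itself can be a one-line gate on every update method. A sketch in plain Rust, with the memory figure passed in; on the IC it might come from the stable memory size or a counter you maintain, and the limit here is an illustrative value:

```rust
/// Illustrative limit: 2 GiB.
const MEMORY_LIMIT: u64 = 2 * 1024 * 1024 * 1024;

/// Gate for update calls: once memory use crosses the limit, reject
/// further writes so new data flows to the freshly spawned "brother"
/// canister instead.
fn accepts_updates(memory_used: u64) -> bool {
    memory_used < MEMORY_LIMIT
}

fn main() {
    // 1 GiB used: still below the limit, so writes are accepted.
    println!("{}", accepts_updates(1 << 30));
}
```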

so eventually the update calls will stop once the canister is full.

Ah, I see.

or can a query call internally call an update call?

No, a query call can’t make any inter-canister calls.

Since inter-canister update calls take a while, I wonder if there’s a hacky solution here where you set up a canister to recursively call other canisters, and only repeat the inter-canister call once the calling canister has received a response (2+ s).

This “heartbeat” wouldn’t be as predictable in terms of timing, but would most likely be less expensive.
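Here is a plain-Rust simulation of that ping-pong idea (no IC APIs): if each inter-canister round trip takes about 2 s and you re-issue the call from the response callback, you get roughly one trigger per round trip, with no per-beat heartbeat charge in between. The 2 s figure is an assumption taken from the post above:

```rust
/// Simulate the "re-issue the call when the response arrives" scheduler:
/// count how many triggers fit in a window when each round trip takes
/// `round_trip_secs`. Timing is approximate by design.
fn simulate_self_call_loop(window_secs: u64, round_trip_secs: u64) -> u64 {
    let mut elapsed = 0;
    let mut triggers = 0;
    while elapsed + round_trip_secs <= window_secs {
        // On the IC this step would be: await the inter-canister call,
        // then issue the next one from the response handler.
        elapsed += round_trip_secs;
        triggers += 1;
    }
    triggers
}

fn main() {
    // ~30 triggers per minute at a 2 s round trip, vs ~60 heartbeats.
    println!("{}", simulate_self_call_loop(60, 2));
}
```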

Haha, this sounds worth trying, but my assumption is that it would be pretty expensive.

If I understand you correctly, it would be something like

-- storage canister --
async fn get_cycles() {
  // report the current balance / accept a deposit
}

-- storage management canister --
async fn request_cycles() {
  storage_canister_get_cycles().await // based on caller
}
And then maybe work with some kind of delay solution

Yes, or even self-calls although that is less safe - IC Cron - let's schedule some tasks bois - #3 by nomeata


It’s way cheaper to do this instead of using heartbeat. Because of the costs, I moved to running a script on a server that calls an update method on my canister every 2 seconds. It would be nice if we could reduce heartbeat costs to make it as cheap as calling the canister from the “outside”.


I have this hacky system in my ICDevs bounty roadmap. A DAO that owns this “scheduler” public utility would be a fun project.

Maybe I’ll code it up for supernova.

Interesting problem and needed data:

How many fire-and-forget inter-canister calls can fit into one round of consensus?

Do we need guaranteed delivery? Rogue canisters probably break in-line processing of returns… we would need to use @nomeata’s upgrade pattern.

How do the economics play out? How much for 10,000 notifications, etc.?