Heartbeat improvements / Timers [Community Consideration]

Question 1: This may be a dumb question, but I can set many timers, correct?

let timer1 = setTimer(365 * 24 * 60 * 60 * 1_000_000_000, sayHappyAnniversary);
let timer2 = setTimer(365 * 24 * 60 * 60 * 1_000_000_000 * 25, sayHappySilverAnniversary);

Question 2: Do the timers persist across upgrades? If yes, what happens if you take the function out of your class?

Question 3: Would there be any benefit to having an async* pathway here?

I think it’s this sentence, but I thought it was worded more clearly in the docs: Internet Computer Content Validation Bootstrap

Also, I discussed this in this post: Question regarding timing of intercanister update calls during heartbeat - #2 by skilesare

It was mentioned in the description here.

Q1: yes.
Q2: no persistence; you have to set up your timers in post_upgrade (this is explained in PR 3542) — see the sketch below.
Q3: yes, there is an issue for that.
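
For illustration, a minimal sketch of re-registering a one-shot timer in post_upgrade (assuming Rust with ic-cdk’s timers feature enabled; the delay and callback body are hypothetical):

use std::time::Duration;

#[ic_cdk_macros::post_upgrade]
fn post_upgrade() {
    // Timers do not survive an upgrade, so re-register them here.
    ic_cdk::timer::set_timer(Duration::from_secs(365 * 24 * 60 * 60), || {
        ic_cdk::println!("happy anniversary"); // hypothetical callback body
    });
}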

This only means that the heartbeat system function is invoked via await, which adds some lag, but there is no blocking. @claudio, how can we explain this better?

What is also interesting is that, apparently, you cannot call clear_timer from within the target function itself.

So in my use case of providing streaming local backups of memory-managed stable stores, I have to launch another timer to cancel the timer interval of the main function that preps the backup.

You mean the function call has no effect, or it’s about passing timer_id around?

In general, a function that knows when to end its own timer interval (and has access to the timer_id that drives that interval), but needs to execute for more than the cycle execution limit, should be able to end its own timer interval (at least logically).

Specifically, assuming thread_local’d BCK_COUNTER and BCK_TIMER along the lines of the sketch below,
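
(For context, the thread-locals could be declared roughly like this — a hypothetical sketch, since the actual types aren’t shown in this thread:)

use std::cell::RefCell;
use ic_cdk::timer::TimerId;

thread_local! {
    // Hypothetical declarations: a run counter and the id of the main timer.
    static BCK_COUNTER: RefCell<u32> = RefCell::new(0);
    static BCK_TIMER: RefCell<TimerId> = RefCell::new(TimerId::default());
}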

async fn start_bck_up() {
    // Read the current backup counter.
    let bck_counter = BCK_COUNTER.with(|refcell| *refcell.borrow());
    if bck_counter == 2 {
        // Attempt to cancel this function's own repeating timer
        // from inside its callback.
        BCK_TIMER.with(|refcell| {
            let timer_id = *refcell.borrow();
            ic_cdk::timer::clear_timer(timer_id);
        });
    }
    // Bump the counter for the next run.
    BCK_COUNTER.with(|refcell| {
        refcell.replace(bck_counter + 1);
    });
    ic_cdk::println!("Called start_bck ONCE !!");
}

should ideally work. However, while it compiles, it produces a run-time panic:

[Canister rrkah-fqaaa-aaaaa-aaaaq-cai] Panicked at 'called `Option::unwrap()` on a `None` value', /home/user1/.cargo/registry/src/github.com-1ecc6299db9ec823/ic-cdk-0.6.8/src/timer.rs:230:73
[Canister rrkah-fqaaa-aaaaa-aaaaq-cai] in canister_global_timer: CanisterError: IC0503: Canister rrkah-fqaaa-aaaaa-aaaaq-cai trapped explicitly: Panicked at 'called `Option::unwrap()` on a `None` value', /home/user1/.cargo/registry/src/github.com-1ecc6299db9ec823/ic-cdk-0.6.8/src/timer.rs:230:73

My workaround is to have another timer chase this timer interval (this works because the function executes in finite and roughly constant clock time) and then clear it from that timer’s function.

async fn stop_bck_up() {
    // Read the current backup counter.
    let bck_counter = BCK_COUNTER.with(|refcell| *refcell.borrow());
    if bck_counter > 2 {
        ic_cdk::println!("Now stopping counter at ... {}", bck_counter);
        // Cancel the main backup timer from this separate timer's callback;
        // unlike clearing it from inside its own callback, this works.
        BCK_TIMER.with(|refcell| {
            let timer_id = *refcell.borrow();
            ic_cdk::timer::clear_timer(timer_id);
        });
    }
}
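
For completeness, a sketch of how the two timers might be wired together (hypothetical intervals; assumes ic_cdk::spawn to drive the async functions):

use std::time::Duration;

fn schedule_bck_up() {
    // The main recurring timer that preps the backup...
    let id = ic_cdk::timer::set_timer_interval(Duration::from_secs(60), || {
        ic_cdk::spawn(start_bck_up())
    });
    BCK_TIMER.with(|refcell| *refcell.borrow_mut() = id);
    // ...and the chaser timer that will eventually clear it. (In a full
    // implementation the chaser itself would also need to be cleared.)
    ic_cdk::timer::set_timer_interval(Duration::from_secs(60), || {
        ic_cdk::spawn(stop_bck_up())
    });
}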

In this line we unconditionally put the swapped-out task back, while it might have been removed by func():

            Task::Repeated { ref mut func, .. } => {
                func();
                TASKS.with(|tasks| *tasks.borrow_mut().get_mut(task_id).unwrap() = task);
            }

It seems we should check whether the task_id is still present in the tasks. We will fix it; thanks for reporting, mparikh.
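
A minimal sketch of what that check could look like (an assumption about the shape of the fix, not the actual patch):

            Task::Repeated { ref mut func, .. } => {
                func();
                // Only restore the task if func() did not clear_timer it,
                // i.e. the task_id is still present in TASKS.
                TASKS.with(|tasks| {
                    if let Some(slot) = tasks.borrow_mut().get_mut(task_id) {
                        *slot = task;
                    }
                });
            }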

It’s great to enter 2023 with working timers in both Rust and Motoko. Very good job @AdamS, @lwshang, @ggreif, @claudio, @mraszyk and everyone involved!

We’ve already captured some feedback, and we’re on it. If you have a suggestion, it’s never too late to share it in this thread.

Happy new year everyone! See you all in 2023!

Regarding recurring timers: what happens if a job takes more time than the delay? Will the 2nd job start executing before the 1st one finishes? Will the 2nd job start executing right after the 1st one finishes? Or will the delay still be awaited after the 1st job finishes?

In short, “normal” async rules apply. Other executions might start only at the await points of the 1st job.

If the 1st job is a long execution (i.e. continuous execution of Wasm instructions for longer than one round, with no awaits), no 2nd job (or any other execution) can start until the long execution is finished.

If the 1st job awaits a call to complete, any other execution might start at this point, including timers, inter-canister calls, or ingress messages.
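
In code terms, a sketch of the second case (hypothetical interval, method name, and callee):

use std::time::Duration;
use ic_cdk::export::Principal;

fn schedule_job(peer: Principal) {
    ic_cdk::timer::set_timer_interval(Duration::from_secs(5), move || {
        let peer = peer.clone(); // clone for each tick's async task
        ic_cdk::spawn(async move {
            // Everything up to the first await runs without interleaving.
            let _: ic_cdk::api::call::CallResult<()> =
                ic_cdk::call(peer, "some_method", ()).await;
            // At the await above, other executions (including the next
            // timer expiration) may start.
        });
    });
}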

Hoi everyone,
It seems we have quite a few differences between the Motoko and Rust CDKs.

The Motoko CDK provides an opt-out flag for the timers feature, -no-timer, and the following API:

  • setTimer : (Duration, () -> async ()) -> TimerId
  • recurringTimer : (Duration, () -> async ()) -> TimerId
  • cancelTimer : TimerId -> ()

In the Rust CDK the timers feature must be explicitly enabled, and the following API is implemented (a usage sketch follows the list):

  • set_timer(delay: Duration, func: FnOnce()) -> TimerId
  • set_timer_interval(interval: Duration, func: FnMut()) -> TimerId
  • clear_timer(id: TimerId)
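
For illustration, a minimal usage sketch of the Rust API above (assuming ic-cdk 0.6.x with the timers feature enabled):

use std::time::Duration;

fn demo() {
    // One-shot: runs the closure once after the delay.
    let once = ic_cdk::timer::set_timer(Duration::from_secs(10), || {
        ic_cdk::println!("one-shot fired");
    });
    // Recurring: runs the closure on every interval until cleared.
    let ticker = ic_cdk::timer::set_timer_interval(Duration::from_secs(60), || {
        ic_cdk::println!("tick");
    });
    // Both kinds are cancelled by id.
    ic_cdk::timer::clear_timer(once);
    ic_cdk::timer::clear_timer(ticker);
}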

What do you think, guys, of somehow unifying the flags and the API? @ggreif @claudio @AdamS @lwshang

I am getting this same error. Please, can anyone help?

The first thing to make sure is that you have moc 0.7.5 installed and that dfx actually picks it up.

I used the DFX_MOC_PATH="$(vessel bin)/moc" prefix before the command, like:

DFX_MOC_PATH="$(vessel bin)/moc" dfx build <canister_name>

Is there some tutorial or reading on moc? I sort of figured out where it is and how to use it, but not really. And how do you upgrade it?

I think the moc used by dfx gets upgraded automatically with dfx upgrade, and dfx 0.12.1 does not use moc 0.7.5. So to use a specific version, you can specify the compiler version in vessel (vessel.dhall) and then use it for dfx build with the command below:

DFX_MOC_PATH="$(vessel bin)/moc" dfx build <canister_name>

Hopefully the new dfx release will be out soon, so we won’t need to hassle with vessel.

I get this