Introduction
This post outlines a solution for sleeping in update calls, i.e., allowing an update call to wait until a condition is satisfied without spinning. A common use case looks as follows:
- a periodic timer (e.g., every 10s) makes a (canister http) call to fetch some data in the background;
- an update call must await the data fetched by the periodic timer before proceeding;
or:
- an update call contains a critical section guarded by a lock and must await the lock to be acquired before proceeding.
Currently, awaiting a condition can be achieved by making inter-canister calls in a (tight) loop until the condition is satisfied, which results in excessive compute and cycles consumption. Hence, we propose a solution that avoids this "busy" waiting approach by following a "one-time" observer pattern implemented by the management canister.
We appreciate your feedback on the proposed API to make sure the API can cover as many use cases as possible.
New management canister endpoints
Management canister endpoint: observe
A new management canister endpoint observe is introduced to await a condition.
- This endpoint can only be called by controllers of the given canister or the canister itself.
- This endpoint can only be called by bounded-wait calls.
- The call returns upon a matching call to the notify endpoint (see below).
- Any cycles attached explicitly to this call are refunded.
Input
record {
  canister_id : principal;
  event_id : nat64;
};
Output
()
Management canister endpoint: notify
A new management canister endpoint notify is introduced to notify the observers, i.e., to make pending calls to the endpoint observe return.
- This endpoint can only be called by controllers of the given canister or the canister itself.
- The call returns immediately. The reply contains the number of observers that were notified, i.e., the number of pending calls to the endpoint observe with a matching canister_id and event_id.
- Any cycles attached explicitly to this call are refunded.
Input
record {
  canister_id : principal;
  event_id : nat64;
};
Output
record {
  num_observers : nat64;
};
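To make the intended semantics concrete, here is a plain-Rust model of the two endpoints (this is not IC code; EventHub and its methods are illustrative names, and the registry is keyed only by event_id for brevity, whereas the real endpoints also match on canister_id): observe registers a pending one-shot observer, and notify wakes all matching observers and reports how many there were.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

/// Host-side model of the proposed semantics: each `event_id` maps to
/// its pending one-shot observers (illustrative type, not the API).
#[derive(Default)]
struct EventHub {
    pending: HashMap<u64, Vec<Sender<()>>>,
}

impl EventHub {
    /// Register an observer for `event_id`; the returned receiver
    /// resolves once a matching `notify` arrives.
    fn observe(&mut self, event_id: u64) -> Receiver<()> {
        let (tx, rx) = channel();
        self.pending.entry(event_id).or_default().push(tx);
        rx
    }

    /// Wake all observers of `event_id` and return how many there were,
    /// mirroring the `num_observers` field in the proposed reply.
    fn notify(&mut self, event_id: u64) -> u64 {
        let observers = self.pending.remove(&event_id).unwrap_or_default();
        let n = observers.len() as u64;
        for tx in observers {
            let _ = tx.send(());
        }
        n
    }
}
```

Note that observers are one-shot: once notified they are removed, so a subsequent notify for the same event_id reports zero observers unless new ones registered in between.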
Examples
The new management canister endpoints observe and notify described in the previous section can be used as follows:
- a periodic timer (e.g., every 10s) makes a (canister http) call to fetch some data in the background and calls notify for the same canister and a fixed event_id every time new data are fetched;
- an update call that must await the data fetched by the periodic timer before proceeding calls observe for the same canister and the fixed event_id.
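As an analogy outside the IC, this producer/consumer pattern is what a Mutex/Condvar pair provides in ordinary Rust: a background fetcher stores the data and notifies, while a consumer sleeps, without spinning, until the data is present. The sketch below is only that analogy (Fetched, publish, and await_data are made-up names, not the proposed API):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Shared slot for the fetched data plus a condition variable to sleep
/// on; `publish` plays the role of `notify`, `await_data` of `observe`.
#[derive(Default)]
struct Fetched {
    data: Mutex<Option<String>>,
    ready: Condvar,
}

impl Fetched {
    /// Role of the periodic timer: store new data and wake observers.
    fn publish(&self, value: String) {
        *self.data.lock().unwrap() = Some(value);
        self.ready.notify_all();
    }

    /// Role of the update call: block (without spinning) until data exists.
    fn await_data(&self) -> String {
        let mut guard = self.data.lock().unwrap();
        while guard.is_none() {
            guard = self.ready.wait(guard).unwrap();
        }
        guard.clone().unwrap()
    }
}
```

On the IC there are no blocking threads, which is exactly why the waiting has to be expressed as an awaited inter-canister call to observe instead.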
To implement awaiting a lock, the new endpoints can be used as follows:
- when a lock is being acquired, then
  - either the lock is acquired immediately, or
  - a fresh ticket is generated and stored in the state of the lock, and the canister calls observe for the same canister and the generated ticket (event_id);
- when a lock is being released, then
  - if there are pending tickets stored in the state of the lock, then one is chosen (e.g., FIFO), the lock stays acquired, and the canister calls notify for the same canister and the chosen ticket (event_id), waking up the single observer identified by the ticket, which now holds the lock;
  - if there's no pending ticket stored in the state of the lock, then the lock is released.
This locking algorithm is implemented by the following Rust code (omitting, for simplicity, error handling for conditions such as running out of cycles or memory):
use candid::Principal;
use ic_cdk::api::canister_self;
use ic_cdk::update;
use std::cell::RefCell;
use std::collections::VecDeque;

async fn observe(_canister_id: Principal, _event_id: u64) {
    todo!() // new management canister endpoint
}

fn notify(_canister_id: Principal, _event_id: u64) {
    todo!() // new management canister endpoint
}

thread_local! {
    static LOCK: RefCell<Lock> = RefCell::new(Lock::default());
    static NEXT_TICKET: RefCell<u64> = RefCell::new(0);
}

#[derive(Default)]
struct Lock {
    acquired: bool,
    tickets: VecDeque<u64>,
}

impl Lock {
    fn acquire(&mut self) -> Result<(), u64> {
        if self.acquired {
            // the lock is taken: enqueue a fresh ticket to wait on
            let ticket = NEXT_TICKET.with(|t| {
                let ticket = *t.borrow();
                *t.borrow_mut() = ticket + 1;
                ticket
            });
            self.tickets.push_back(ticket);
            Err(ticket)
        } else {
            self.acquired = true;
            Ok(())
        }
    }

    fn release(&mut self) {
        self.acquired = false;
        if let Some(ticket) = self.tickets.pop_front() {
            // hand the lock over directly: it stays acquired and the
            // observer waiting on this ticket is woken up via `notify`
            self.acquired = true;
            notify(canister_self(), ticket);
        }
    }
}

#[update]
async fn critical_section() {
    if let Err(ticket) = LOCK.with(|l| l.borrow_mut().acquire()) {
        // the lock implementation returns a fresh ticket
        // if the lock can't be acquired atomically;
        // once `observe` returns, this call holds the lock
        observe(canister_self(), ticket).await
    }
    // critical section
    // ...
    // end of critical section
    LOCK.with(|l| l.borrow_mut().release());
}
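The ticket hand-off can also be checked off-chain by exercising the same state machine with the management canister call replaced by a recording stub (a test-only sketch; TestLock and the notified field are made-up names):

```rust
use std::collections::VecDeque;

/// Test-only variant of the `Lock` above: instead of calling the
/// management canister, `release` records which ticket was woken.
#[derive(Default)]
struct TestLock {
    acquired: bool,
    tickets: VecDeque<u64>,
    next_ticket: u64,
    notified: Vec<u64>,
}

impl TestLock {
    fn acquire(&mut self) -> Result<(), u64> {
        if self.acquired {
            let ticket = self.next_ticket;
            self.next_ticket += 1;
            self.tickets.push_back(ticket);
            Err(ticket)
        } else {
            self.acquired = true;
            Ok(())
        }
    }

    fn release(&mut self) {
        self.acquired = false;
        if let Some(ticket) = self.tickets.pop_front() {
            // the lock is handed over, not freed: the woken observer holds it
            self.acquired = true;
            self.notified.push(ticket); // stands in for `notify`
        }
    }
}
```

Because tickets are popped from the front of the queue, waiters are woken in FIFO order, and the lock only becomes free once the queue is empty.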