Sleeping in update calls

Is this a system/management canister call? Is there a particular reason you would like to move this into that layer as opposed to handling it in software?

Is this intended to be a multi-canister and/or multi-subnet setup?

Is there some cost for the time that these are open and waiting?

We’ve discussed sleeping on the Motoko side over here a bit: Status of a future in motoko and sleeping

In this case we specifically wanted to wait “only until the next round” to ensure that, perhaps, some futures had settled and we could check them all again, so it may be a bit of a different use case. In our case, the test would be Time.now() > whenObserveSet. I think we’d have to have a timer running every round to fulfil this, which would be its own cycle drain (a rough sketch of that workaround is below).
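For concreteness, here is a minimal sketch of that timer-based polling workaround as it would look today, assuming hypothetical names like whenObserveSet (from the scenario above) and checkFutures, and using a one-second period only as a stand-in for “roughly every round”:

import Timer "mo:base/Timer";
import Time "mo:base/Time";

actor {
  // placeholder state from the scenario above
  var whenObserveSet : Time.Time = 0;
  var pollTimer : ?Timer.TimerId = null;

  // fires roughly every round, burning cycles even when there is nothing to do
  func checkFutures() : async () {
    if (Time.now() > whenObserveSet) {
      // condition met: stop polling and do the real work
      switch (pollTimer) {
        case (?id) { Timer.cancelTimer(id); pollTimer := null };
        case (null) {};
      };
      // ...re-check the settled futures here...
    };
  };

  public func startPolling() : async () {
    // ~1 second is close to a round on many subnets, but only as an approximation
    pollTimer := ?Timer.recurringTimer<system>(#seconds 1, checkFutures);
  };
}

Every firing costs cycles whether or not the condition is met, which is exactly the drain mentioned above.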

What would be nice is if I could call await systemcanister.sleep(#timestamp(now()+1)) that would (except in strange circumstances) resume on the next round. Another option would be await systemcanister.sleep(#rounds(8)) that would sleep exactly 8 rounds (I’m not sure how this would work if your subnet was bob’d up… maybe it resumes at your next chance after 8 rounds). If you added await systemcanister.sleep(#observer(hash(myticket))) that returned after your notify proposition, you’d cover most of the scenarios I’ve run across. The problem with JUST observe is that you are reliant on other code running, which you may specifically be trying to avoid. Of course, the other problem with timestamp and rounds is that they put more burden on the replica to sift through these things each round (and if it is multi-subnet you’ll need to gossip them).
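To make the shape of that concrete, here is a purely hypothetical Motoko sketch of what such an interface could look like; systemcanister, SleepSpec, and sleep do not exist today and are only invented names for illustration:

// Hypothetical only: no such system API exists today.
module {
  public type SleepSpec = {
    #timestamp : Nat;  // wake at (or just after) this time, in ns since epoch
    #rounds : Nat;     // wake after this many consensus rounds
    #observer : Blob;  // wake when a matching notification/ticket arrives
  };

  public type SystemSleep = actor {
    sleep : (SleepSpec) -> async ();
  };
};

// Usage from a canister would then look something like:
//   await systemcanister.sleep(#timestamp(natnow() + 1));  // next round, roughly
//   await systemcanister.sleep(#rounds(8));                // wait ~8 rounds
//   await systemcanister.sleep(#observer(hash(myticket))); // wait for a notify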

Inverting the problem, we’ve done extensive work on a standard and a reference implementation for ICRC-72, which seems like it may solve a similar issue, but instead of awaiting in-line (which I admit has its niceties, including preserved context, at the expense of keeping the context open), it asks the developer to adopt the multicast pub-sub pattern. Particular care was taken to make sure the pattern works both for simple single-canister pub-sub and cross-subnet super-broadcast (the reference implementation takes care to batch broadcasts across subnets to minimize chattiness and cycle consumption), and it provides an interface for notified canisters to pay their way with cycles upon notification.

In a single-canister instance where you are just waiting for an HTTP request to refresh some data, you would use a single call to subscribe to notifications:

ignore await* icrc72_subscriber().subscribe([{
  namespace = "data_refresh";
  config = [];
  memo = null;
  listener = #Sync(func <system>(notification : ICRC72SubscriberService.EventNotification) : () {
    // expect the event data to be a #Map whose first entry carries a #Nat timestamp
    let #Map(data) = notification.data else return;
    let #Nat(refreshTimeStamp) = data[0].1 else return;
    if (refreshTimeStamp > lastTimeStamp) doSomething();
    return;
  });
}]);

The timer component would register a publication and then publish it:

//set up publication
let result = await* icrc72_publisher().registerPublications([{
      namespace = "data_refresh";
      config = [
        (ICRC72OrchestratorService.CONST.publication.publishers.allowed.list, #Array([#Blob(Principal.toBlob(thisPrincipal))])),
        (ICRC72OrchestratorService.CONST.publication.controllers.list, #Array([#Blob(Principal.toBlob(thisPrincipal))]))
      ];
      memo = null;
    }]);

//set up timer
private func getData() : async () {
  setLock();
  // assumes makeHttpOutcall is an async helper defined elsewhere that performs the HTTP outcall
  switch(await makeHttpOutcall()){
    case(#Ok(data)){
      latestData := data;
      let timestamp = natnow();
      ignore icrc72_publisher().publish<system>({
        namespace = "data_refresh";
        data = #Map([("timestamp", #Nat(timestamp))]);
        headers = [];
      });
    };
    case(#Err(_)){}; // handle or log the failed outcall as appropriate
  };
  releaseLock();
};

ignore Timer.recurringTimer<system>(#seconds 600, getData);

The reference implementation we have in Motoko allows one to have the publisher, subscriber, broadcaster, and orchestrator all instantiated on one canister… no need to actually expose the functions externally; it should just work. We’d love a Rust implementation of the standard, and it is currently on our roadmap, but we’re open to volunteers.
