How are awaits handled that don’t result in a call to another canister?

Say I have the following function:

// assumes: import Time "mo:base/Time";
public func go() : async Bool {
    if (Time.now() % 2 == 0) {
        // only sometimes makes an inter-canister call
        let response = await otherCanister.doSomething();
    };
    return true;
};

How does the virtual machine handle this when I call await go()? Does it call it inline half the time and await half the time? Does it always get kicked to the back of the queue? Can other functions be called while I’m waiting for this, during the times that I don’t call the other canister?

I’m concerned about race conditions that might queue a bunch of these up. If I want my canister to be basically locked up while I wait for doSomething, do I need to handle that manually? What would be a strategy for that?
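(One manual strategy, sketched below with hypothetical names: a guard flag that rejects re-entrant calls while a doSomething call is still outstanding. Only otherCanister.doSomething comes from the question; everything else, including the placeholder principal, is illustrative.)

import Error "mo:base/Error";

actor {
    var busy : Bool = false; // hypothetical guard flag

    // Placeholder wiring for the question’s otherCanister; the principal
    // and return type here are stand-ins for illustration only.
    let otherCanister : actor { doSomething : () -> async Nat } =
        actor ("aaaaa-aa");

    public func go() : async Bool {
        if (busy) {
            // Reject instead of queueing up behind the pending call.
            throw Error.reject("busy: doSomething is still pending");
        };
        busy := true;
        try {
            ignore await otherCanister.doSomething();
            busy := false;
            return true;
        } catch (e) {
            busy := false; // release the guard on the error path too
            throw e;
        };
    };
};

Callers that hit the guard get an immediate reject rather than silently piling up, which addresses the race-condition concern.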

Every await causes a yield, so to speak, and other things can happen while that happens, even an “internal” call like await go(). This await go() isn’t much different from await otherCanister.doSomething() in that respect.
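(A small sketch of that point, with a hypothetical counter: even a self-send is a yield, so another message can run while go is suspended at its await.)

actor {
    var counter : Nat = 0; // hypothetical state, for illustration

    public func ping() : async () { counter += 1; };

    public func go() : async Nat {
        let before = counter;
        await async {}; // yield: state commits, other messages may run here
        counter - before; // non-zero if ping ran during the await
    };
};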

Does the yield always require waiting at least one consensus round? Or is it smart enough to know that there are no other pending calls and that it is the same canister and pick back up right away? If there are pending calls, would it get knocked back a consensus round at a certain threshold?

It’s not smart enough, and it can’t be (inside the canister) because yields are also commit points, which always correspond to messages on the system.
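(To make the commit-point part concrete, a sketch with made-up state: a trap after an await only rolls back to the last commit point, which is why the system cannot silently skip the yield.)

actor {
    var n : Nat = 0; // hypothetical state, for illustration

    public func go() : async () {
        n += 1;
        await async {}; // commit point: the new value of n is now visible
        assert false;   // traps, but only rolls back to the commit point above
    };
};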

The system could execute such follow-up messages directly, instead of waiting for the next round, which would greatly alleviate this issue. I believe this was worked on at some point, but may not have made it into the release yet. @akhilesh.singhania would know more.

Right before launch, we implemented a feature where the scheduler will try to execute multiple iterations within a round as long as the maximum-instructions-per-round limit has not been hit yet. For details, see this

So this is live, and applies to self-calls as well? Then indeed using an extra await isn’t that expensive in the end 🙂

Hmm, looking at this, it seems that messages sent to self are taking the slow path. I suppose we should address that. I don’t see any good reason why we don’t take the faster path there as well.

Maybe that’s from the time when canisters were using busy loop self-calls, rather than heartbeat, to emulate cron functionality.
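(For context, that pattern looked roughly like this hypothetical sketch; a self-call fast path would turn it into a tight loop within a single round.)

actor Cron {
    public func tick() : async () {
        // ... do the periodic work ...
        ignore Cron.tick(); // fire-and-forget self-send: runs again next round
    };
};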

Oh yes, that makes sense. In that case, this optimisation would just result in a ton of useless calls to the same canister. I wonder if it is a good idea to implement this optimisation then, in case someone were to call themselves in a loop for some other reason.

I vaguely remember that there are some concerns around issuing canister management calls from within a heartbeat context. @dsarlis , @ielashi : does any of this ring a bell for you guys?

From a programming model point of view, having such differences in behavior between calls to yourself and calls to other canisters is very fishy. Usually, uniformity is king. I’m not worried about people doing such self-call-loops, no more than I am worried about people doing self-call-loops-involving-two-canisters.

Use case: utility classes that want to encapsulate logic inside a class expose async functions that canister owners can wire up to their actors. The utility functions may call another canister. It seems silly to have to await those and wait for a cycle… especially if the function takes a happy path and doesn’t have to call another canister after all.

Is there room in the language for a function declaration type that says “I might want to call a remote canister in here, but I’m not really async myself, so wait to queue this until I actually call await; I want the power to call async later.”

Maybe this is already in the language?
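(A sketch of the pattern being asked about, with hypothetical names: the utility method must be declared async because it might call out, so the caller pays a full await even when the happy path makes no remote call.)

// Hypothetical utility class; `remote` stands in for any canister dependency.
class Util(remote : actor { doSomething : () -> async Nat }) {
    public func maybeRemote(needRemote : Bool) : async Nat {
        if (needRemote) {
            await remote.doSomething()
        } else {
            0 // happy path: no remote call, yet callers must still await us
        }
    };
};

A canister wiring this up still writes await util.maybeRemote(false), and that outer await is the extra commit point being discussed, even though nothing remote happened.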

The desire for that has been an endless discussion within the languages team; see, for example, Locally abstracting over async calls · Issue #680 · dfinity/motoko · GitHub and Support direct abstraction of code that awaits into functions, without requiring an unnecessary async · Issue #1482 · dfinity/motoko · GitHub, although I don’t know if that captures the full discussion.

I don’t personally recall/see any issues with issuing canister management calls within a heartbeat.

These look like they would address the issue. Are they being worked on? Is there an ETA or prioritization?

We couldn’t find a design that satisfies everyone’s needs so far, and discussion has stalled, so don’t hold your breath.

I actually had an implementation of this that I liked, but it relied on interpreting await e as “may commit and suspend” rather than “must commit and suspend” (regardless of whether e is already complete or not). This was deemed too dangerous, as the commit points would be determined dynamically, not statically.

I’m open to suggestions though, as I, too, feel that the inability to efficiently abstract asynchronous code is bad and discourages abstraction.

I’ll just add that after trying to build some libraries for broader consumption, this liability just makes it really hard. I am NOT a programming language designer and I’m sure I could throw out some bad ideas, but it would be really great to have a solution to this!

As a developer, I’m fine assuming that a state transition has occurred and acting accordingly. Knowing what is going to happen inside every function you call seems like a very, very high bar to try to clear.

I am happy to announce that I just started the merge train on a merge request so that messages a canister sends to itself will also be inducted via the fast path and will not have to wait until the next round to execute. Hopefully this will get rolled out into production soon.

Did this get rolled out?

Yes, some time ago. Unless it got disabled since then.

I think you can observe the difference by calling the management canister (get_random_blob) versus sending a message to self or using an async block.
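(A sketch of that experiment; note that the management canister’s randomness method is raw_rand in the interface spec, and Time.now() only advances between rounds, so a near-zero delta suggests the fast path was taken.)

import Time "mo:base/Time";
import Debug "mo:base/Debug";

actor {
    // The IC management canister, by its well-known principal.
    let ic : actor { raw_rand : () -> async Blob } = actor ("aaaaa-aa");

    public func measure() : async () {
        let t0 = Time.now();
        await async {}; // self-send: may be inducted via the fast path
        let t1 = Time.now();
        ignore await ic.raw_rand(); // management canister: crosses rounds
        let t2 = Time.now();
        Debug.print("self-send: " # debug_show (t1 - t0)
            # " ns, raw_rand: " # debug_show (t2 - t1) # " ns");
    };
};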

I’m still slightly concerned about the fairness implications of this, though.