PocketIC and PIC.js parallel behavior

for @NathanosDev and @mraszyk: I was writing a test whose purpose was to call the same function 3 times in the same block and make sure that only one executes. Something like:

const executePromises = [
  env.myCanister.actor.withdraw(),
  env.myCanister.actor.withdraw(),
  env.myCanister.actor.withdraw(),
];

await pic.tick(5);

const results = await Promise.all(executePromises);

await pic.advanceTime(5 * 60 * 1000);

await pic.tick(10);

expect(results).toEqual([{ ok: 0 }, { err: lock }, { err: lock }]);

The withdraw function calls a number of things, including making a cross-subnet call from an application subnet to the ICP canister on the system subnet to check a balance. I would expect that when I hit this await in my code, any fulfilment of it would be pushed from Round 0 to Round 1, and that the other calls would be scheduled during Round 0.

But what is happening is that all cross-subnet calls from the first call are being fulfilled before the second call ever gets a chance to run. Thus I get an ok on all my calls because the lock I put there is released.

I’m trying to figure out if this is expected behavior or not. If it is unexpected, is it a function of how pic.js does async calls?

Maybe the js await is forcing a set of tick calls until it gets to the end of the awaited call? If that is the case, is there a way to do this test with pic.js? Or even with PocketIC?

My understanding is that if these items were on the same subnet then they could execute immediately in the same block, but I’d think that an incoming ingress would get scheduled before any follow-up awaits. Maybe PocketIC doesn’t use the same scheduling system as the replica?

I’ve tried to make sure that my topology is correct and that I have 2 application subnets plus the NNS to make sure these calls are crossing boundaries and it seems to be configured correctly.

getTopology [
      {
        id: Principal { _arr: [Uint8Array], _isPrincipal: true },
        type: 'NNS',
        size: undefined,
        canisterRanges: [ [Object] ]
      },
      {
        id: Principal { _arr: [Uint8Array], _isPrincipal: true },
        type: 'Application',
        size: undefined,
        canisterRanges: [ [Object] ]
      },
      {
        id: Principal { _arr: [Uint8Array], _isPrincipal: true },
        type: 'Application',
        size: undefined,
        canisterRanges: [ [Object] ]
      }
    ]

I’m using pic.js latest beta.

So what I believe you need here are the submit_ingress_message and await_ingress_message endpoints from the PocketIC server (you can call the former multiple times before calling the latter), which are, again, unfortunately not supported by PicJS (trust me, it pains me as much as anyone to continue saying this :laughing:).

This is currently a blocker for us in being able to migrate all of our slow dfx tests over. A few areas of our app have lock-based behavior, where this functionality is required.

Is it a considerable lift to implement this in pic-js?

One follow-up question. Is there any way to force a message with at least one internal await commit point to take more than a single block in Pic/PocketIC without needing to use this ingress API? Speaking from a realistic perspective/scenario: if there are cross-subnet calls involved, there’s no way the call would ever execute within the same block, even if the message is an inter-canister call message instead of an ingress (coming from outside the IC) message, right?

Is it a considerable lift to implement this in pic-js?

It’s relatively small, I personally haven’t had much time for it, but it’s something I plan to get to as soon as I find some time. I had other features planned first, but since this is needed now I can prioritize that.

Is there any way to force a message with at least one internal await commit point to take more than a single block in Pic/PocketIC without needing to use this ingress API? Speaking from a realistic perspective/scenario: if there are cross-subnet calls involved, there’s no way the call would ever execute within the same block, even if the message is an inter-canister call message instead of an ingress (coming from outside the IC) message, right?

I can’t answer this one, but I think @mraszyk will be able to.

Is there any way to force a message with at least one internal await commit point to take more than a single block in Pic/PocketIC without needing to use this ingress API?

Note that the ingress API separating message submission and execution doesn’t force a message to take multiple blocks/rounds: when you submit a message making no downstream calls or downstream calls to canisters on the same subnet and later await it, chances are that this message completes within a single block/round.

if there are cross-subnet calls involved, there’s no way the call would ever execute within the same block

that’s correct

even if the message is a inter-canister call message instead of an ingress (coming from outside the IC) message

the origin of a message performing downstream calls to canisters on different subnets doesn’t matter

In our case, we make several downstream calls.

Pseudo code for it essentially looks like this:

// canister state
var isLocked : Bool = false;

someProcessAPI() {
  ...
  if (not isLocked) {
    isLocked := true;
    await runProcess(); // async process making several calls to other canisters, including the ICP ledger
    ...
  };
};

We’re trying to call someProcessAPI() 3 times and expect it to run only once. But with PocketIC it’s running 3 times because the async inter-canister runProcess() call is executing synchronously.
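
To make the difference concrete, here is a plain-JavaScript sketch of the two behaviors. Nothing here is real PocketIC or canister code: withdraw, downstreamBalanceCheck, and the isLocked flag are stand-ins for the pseudocode above. The first run models what we observe (each message runs to completion before the next starts); the second models the interleaving the test expects:

```javascript
// Plain-JS sketch (no PocketIC involved). The lock from the pseudocode
// above, modeled as a simple flag.
let isLocked = false;

// Stands in for the cross-subnet call to the ICP ledger.
async function downstreamBalanceCheck() {
  return 100;
}

async function withdraw() {
  if (isLocked) return { err: 'lock' };
  isLocked = true;
  await downstreamBalanceCheck();
  isLocked = false;
  return { ok: 0 };
}

async function main() {
  // What PocketIC is doing: each message runs to completion, including
  // its awaited downstream call, before the next message starts.
  isLocked = false;
  const sequential = [];
  for (const call of [withdraw, withdraw, withdraw]) {
    sequential.push(await call());
  }

  // The interleaving the test expects: all three calls start before the
  // first one's downstream call resolves, so the lock is observed.
  isLocked = false;
  const interleaved = await Promise.all([withdraw(), withdraw(), withdraw()]);

  console.log(JSON.stringify(sequential)); // every call gets ok
  console.log(JSON.stringify(interleaved)); // second and third hit the lock
  return { sequential, interleaved };
}

main();
```

In the interleaved run, JavaScript itself suspends the first withdraw at its await before starting the others, which is the shape of execution the test is trying to reproduce across rounds.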

If you wish to make parallel inter-canister calls, e.g., in Rust, then you’d need to join_all the Rust CDK call futures. PocketIC cannot make your calls parallel if the canister awaits them in sequence.

What I mean by this is that the client is making the calls to the canister in parallel, at the same time:

client -> canister api
client -> canister api
client -> canister api

So using Promise.all(...calls) to achieve this effect.

But the canister synchronously executes each call to the API in sequence (as is expected for any ingress or inter-canister call), completing and returning a response before the next one can execute, which is unexpected because the API hits canisters (like the ICP ledger) on another subnet.

This is more about the ability to test multiple people hitting an API at the same time and ensuring only one of them succeeds due to the lock.

We’re using Pic/PocketIC (hah, pic-pocket :sweat_smile:) to test this API behavior through the interface of the canister, not the internals of the canister.

Does this make sense, or is there still some confusion around our test use case?

I think what we’re going to have to do is implement submit_ingress_message in pic.js like what they do here for Rust:

In effect, you’ll need to call the server endpoint

msgId1 = post update/submit_ingress_message {info}
msgId2 = post update/submit_ingress_message {info}
msgId3 = post update/submit_ingress_message {info}

result1 = post update/await_ingress_message {msgId1}
result2 = post update/await_ingress_message {msgId2}
result3 = post update/await_ingress_message {msgId3}

I think that should queue up 1, 2, and 3 and start executing when you await 1.

It looks like pic.js currently uses execute_ingress_message

Here is the server code that implements the different pathways:

So I think what we need to do is have some kind of shim that intercepts a pic.js actor.XXXXXX call and allows the ability to call submit, which adds it to a pending queue that holds the pending request ids, and then put something at the front of tick and/or the other standard calls that checks the queue and awaits the first one before doing the tick or awaiting. (Actually, maybe that won’t be necessary as long as the item is queued on the server first… but we do need a way to get the return value at some point.)
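
A rough sketch of what that shim could look like in plain JavaScript. Only the two endpoint names come from the server API discussed above; the payload shapes, the injected post(path, body) helper, and the stub standing in for a live PocketIC server are all made up for illustration:

```javascript
// Sketch of a submit-then-await shim. `post(path, body)` is injected so
// the sketch runs without a live PocketIC server; endpoint names match
// the server API discussed above, payload shapes are assumptions.
function makeShim(post) {
  const pending = []; // pending message ids, queued but not yet executed
  return {
    submit: async (info) => {
      const msgId = await post('update/submit_ingress_message', info);
      pending.push(msgId); // queued on the server; execution hasn't started
      return msgId;
    },
    awaitMessage: async (msgId) => {
      const result = await post('update/await_ingress_message', msgId);
      pending.splice(pending.indexOf(msgId), 1);
      return result;
    },
  };
}

// Stubbed usage: a fake `post` that just records submissions.
const submitted = [];
const fakePost = async (path, body) => {
  if (path.endsWith('submit_ingress_message')) {
    submitted.push(body);
    return `msg-${submitted.length}`;
  }
  return { executed: body }; // pretend the message ran
};

async function demo() {
  const shim = makeShim(fakePost);
  // Queue all three withdrawals before executing any of them...
  const ids = [
    await shim.submit('withdraw'),
    await shim.submit('withdraw'),
    await shim.submit('withdraw'),
  ];
  // ...then await them; against a real server, execution only starts
  // here, so the three messages contend for the lock in the same round.
  const results = [];
  for (const id of ids) results.push(await shim.awaitMessage(id));
  return { ids, results };
}

demo().then(({ ids }) => console.log(ids.join(','))); // msg-1,msg-2,msg-3
```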

So I think what we need to do is have some kind of shim that intercepts a pic.js actor.XXXXXX call and allows the ability to call submit, which adds it to a pending queue that holds the pending request ids, and then put something at the front of tick and/or the other standard calls that checks the queue and awaits the first one before doing the tick or awaiting. (Actually, maybe that won’t be necessary as long as the item is queued on the server first… but we do need a way to get the return value at some point.)

I’m confused by this paragraph, but agree with everything above it.

So I was actually working on this last night, almost finished, hopefully I can get it over the line tonight and give you something to try out tomorrow.

The approach I took was adding a new type of Actor called a DeferredActor. It can be created with pic.createDeferredActor(canisterId). Every method on the Actor will return a Promise that, when awaited, will queue the message and then return a new function (JavaScript black magic). This new function will return a new Promise that, when awaited, will submit the queued message for processing. So that can look something like this:

const deferredActor = pic.createDeferredActor(canisterId);

const executeSayHello = await deferredActor.say_hello(); // queues the message

// do some other stuff here...

const response = await executeSayHello(); // processes the message

I like this approach because consumers don’t need to worry about directly passing around message Ids for processing, only functions and promises, which I think is a little more ergonomic. What do you think?
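
For anyone curious about the "black magic", here is a toy reproduction of the double-await pattern in plain JavaScript. This is not the real PicJS implementation; queue, deferred, and sayHello are illustrative names standing in for the actual message queue and canister call:

```javascript
// Toy version of the DeferredActor trick: the first await queues the
// call and yields a function; awaiting that function drains the queue.
const queue = [];

function deferred(fn, ...args) {
  return (async () => {
    let resolveResult;
    const result = new Promise((resolve) => (resolveResult = resolve));
    queue.push(() => resolveResult(fn(...args))); // queued, not executed
    // The "JavaScript black magic": the Promise resolves to a function.
    return async () => {
      while (queue.length) queue.shift()(); // process queued messages
      return result;
    };
  })();
}

async function demo() {
  const sayHello = (name) => `Hello, ${name}!`;
  const execute = await deferred(sayHello, 'IC'); // queues the message
  const queuedBeforeExecute = queue.length; // nothing has run yet
  const response = await execute(); // processes the queued message
  return { queuedBeforeExecute, response };
}

demo().then(({ response }) => console.log(response)); // Hello, IC!
```

The consumer only ever handles promises and functions, never raw message ids, which is what makes the API ergonomic.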

I’m not entirely sure if queueing and processing later can achieve this or not. I’m implementing the queueing and deferred processing in the context of mocking HTTPS outcalls.

@mraszyk can you clarify what happens when a call is submitted to the PocketIC server? I know that this will allow PocketIC to detect any HTTPS Outcalls and give consumers a chance to mock the responses to those calls before executing the message. Does it do anything else?

Promise.all(...calls) is the JavaScript equivalent of join_all in Rust so that will make all of those calls in parallel. If that’s still resulting in each call being executed in full (including the calls to other subnets) before processing the next, is there potentially something else missing to get the desired partial execution of one call before processing another?

can you clarify what happens when a call is submitted to the PocketIC server?

some basic validation is performed and then the call is enqueued on the PocketIC server; in particular, the execution doesn’t start yet

I know that this will allow PocketIC to detect any HTTPS Outcalls and give consumers a chance to mock the responses to those calls before executing the message.

this only happens when you perform a tick() or execute some other endpoint that triggers round(s) of execution. For canister HTTP(S) outcalls, you typically need a pair of ticks (one for the user canister to start processing the canister endpoint making a canister HTTP outcall, and another one for the management canister to start processing the canister HTTP outcall) before the PocketIC server detects the pending canister HTTP outcall, allowing the test driver to mock its response. I’d recommend looking into the test_canister_http test to see this in practice.

Great, thanks for the clarification.

This looks great. I’m guessing there is also some magic in pulling the did and IDL. That would be sweet (although there may be some old wasms out there that use the hidden function… so maybe an optional param to pass the did?)

Yes, it would require all the same parameters as creating a normal Actor would; I left out the idlFactory in my example by accident.

The DeferredActor is implemented in PicJS v0.10.0.

@skilesare you can see example usage of that in the context of HTTPS Outcalls in this example.

@icme hopefully you can also adapt that example to see if it can work for your use case with locks, I haven’t had a chance to try that myself yet.

All good. Maybe something to put in the backlog would be the js library loading wasm and inspecting the idl for you from the metadata.

One of the biggest lifts in my testing has been import hell for all the different idls, init functions, etc.

Maybe that would slow things down too much.

Yeah, it would be nice to have support for loading things from dfx.json, primarily for Wasm. But for IDL I think it’s trickier, because you also need the TypeScript interface that comes from the same location, and loading that at runtime won’t work; TypeScript needs to be able to do it at build time.

What you can do there, though, is create a local NPM package. I used PNPM workspaces to do that on the CodeGov website, and then both the IDL and the TypeScript interfaces can be easily imported from anywhere.
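
A minimal sketch of that workspace layout, assuming a hypothetical @myorg/declarations package name (the CodeGov repo's actual names and paths may differ):

```yaml
# pnpm-workspace.yaml (repo root)
packages:
  - 'packages/*'

# packages/declarations/package.json exposes the generated IDL and types:
#   { "name": "@myorg/declarations", "main": "index.js", "types": "index.d.ts" }

# Any consumer's package.json then depends on it via the workspace protocol:
#   "dependencies": { "@myorg/declarations": "workspace:*" }
```

With the workspace: protocol, pnpm links the local package in place, so both the tests and the frontend resolve the same IDL and TypeScript interfaces at build time.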