PocketIC/PicJS Server busy error

In one of my recent PocketIC CI runs, I received the following error:

Server busy

      at intervalMs (../../node_modules/@hadronous/pic/src/http2-client.ts:156:19)
      at Timeout.runPoll [as _onTimeout] (../../node_modules/@hadronous/pic/src/util/poll.ts:17:24)

Any idea why this might be happening?

Update - OK, I see now where this is happening in the logs (but not yet where in my code). Adding some additional context in case others run into this issue.

ERROR pocket_ic_server::state_api::state: The instance is deleted immediately after an operation. This is a bug!

Then later on:

PocketIC server encountered an error UpdateError { message: "Instance was deleted" }

      at intervalMs (../../node_modules/@hadronous/pic/src/http2-client.ts:144:19)
      at Timeout.runPoll [as _onTimeout] (../../node_modules/@hadronous/pic/src/util/poll.ts:17:24)

And then, further down in the test run, I get the Server busy message.

Is the commit from @mraszyk mentioned here included in the latest version of pic-js?

in the latest version of pic-js

not yet: it requires version 7.0.0 of the server

Update - looks like we solved this for the time being by removing our previous maxWorkers setting of 5 from our jest (test) config file.

According to the Jest docs,

maxWorkers: Specifies the maximum number of workers the worker-pool will spawn for running tests. In single run mode, this defaults to the number of the cores available on your machine minus one for the main thread. In watch mode, this defaults to half of the available cores on your machine to ensure Jest is unobtrusive and does not grind your machine to a halt. It may be useful to adjust this in resource limited environments like CIs but the defaults should be adequate for most use-cases.

Note that you can specify the maxWorkers value in different ways:

  • Number: maxWorkers: 4 (Use a maximum of 4 workers)
  • Percentage: maxWorkers: "50%" (Use 50% of available CPU cores)
  • String: maxWorkers: "2" (Same as using a number)
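For reference, the percentage form in a TypeScript Jest config would look roughly like the sketch below; the file name and the "50%" value are only illustrative, not a recommendation.

    // jest.config.ts (sketch)
    import type { Config } from 'jest';

    const config: Config = {
      // Scale the worker pool with the machine instead of hard-coding a count.
      // Equivalent forms: maxWorkers: 4, maxWorkers: "4", or a percentage string.
      maxWorkers: '50%',
    };

    export default config;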

I’m thinking of trying out a percentage configuration to see what the sweet spot is for PocketIC in different environments. We obviously want the tests to run as quickly as possible, so maximum parallelization would be preferred, but we do need to create a new PocketIC instance for each test file (suite) and then tear it down at the end to keep the test contexts separate.
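For context, the per-file setup/teardown pattern looks roughly like this. The PocketIc.create() and tearDown() calls are from @hadronous/pic (depending on the pic-js version you may need to pass a server URL to create()); everything else is illustrative:

    import { PocketIc } from '@hadronous/pic';

    describe('my canister', () => {
      let pic: PocketIc;

      beforeAll(async () => {
        // A fresh instance per test file keeps suites isolated from each other.
        pic = await PocketIc.create();
      });

      afterAll(async () => {
        // Tear the instance down so no other worker's suite ever sees it.
        await pic.tearDown();
      });

      it('does something', async () => {
        // ... install canisters against `pic` and make calls here ...
      });
    });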

@mraszyk do you have any suggestions/intuitions, based on your in-depth knowledge of PocketIC, about the optimal parallelization setting we might want to use?

The PocketIC library (pic-js) should deal with the busy case by retrying instead of failing: see, e.g., how it’s done in the Rust library. Then you don’t need to fine-tune your parallelism, which will inevitably introduce flakiness into your tests.
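To illustrate the general pattern (this is only a sketch, not how pic-js or the Rust library actually implements it), a busy-aware retry wrapper could look like this:

    // Hypothetical helper: retry an operation while the server reports it is busy.
    async function withBusyRetry<T>(
      op: () => Promise<T>,
      retries = 20,
      delayMs = 500,
    ): Promise<T> {
      for (let attempt = 0; ; attempt++) {
        try {
          return await op();
        } catch (err) {
          const busy = err instanceof Error && err.message.includes('Server busy');
          if (!busy || attempt >= retries) {
            throw err;
          }
          // Back off briefly and let the server finish its in-flight operation.
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
    }

With a wrapper like this, individual calls can be retried transparently instead of tuning worker counts per environment.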

looks like we solved this for the time being by removing our previous maxWorkers setting of 5 from our jest (test) config file.

GitHub runners have 4 CPUs available. Forcing more workers than the number of CPUs that you have available is not optimal for any Jest project.

In single run mode, this defaults to the number of the cores available on your machine minus one for the main thread.

Note that Jest already uses an optimal default configuration. maxWorkers is generally used to limit the number of workers, not increase them.


The PocketIC library (pic-js) should deal with the busy case by retrying instead of failing.

The library already handles this, but what I believe is happening here is that Jest itself times out and runs the afterEach / afterAll hooks which delete the instance. Then the still-running operations encounter the deleted instance and throw this error.
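If that is what's happening, one mitigation (independent of the worker count) is to give PocketIC-heavy tests more headroom before Jest aborts them; the 30-second value below is just an arbitrary example:

    // Placed at the top of a PocketIC-heavy test file: give long-running
    // operations more time before Jest times the test out and runs the
    // afterEach / afterAll hooks that delete the instance.
    // (Jest's testTimeout config option is the project-wide equivalent.)
    jest.setTimeout(30_000);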

From the Jest docs (and my message above), the optimal setting for maxWorkers is the default, which is the number of CPUs minus one (for the main thread). If maxWorkers is set to 5 on a 4-CPU runner, that’s the number of CPUs plus one, i.e. 2 more workers than the optimal setting. This means that some workers will not have a dedicated CPU, so they may spend a lot of time idling and eventually cause Jest itself to time out.
