🧪 Announcing PicJS: TypeScript/JavaScript support for PocketIC!

Hey @NathanosDev :wave:

Just gave pic with GitHub CI another try, and am running into a different fetch failed issue. This time it hangs for a while at the PIC server started output line while tests are running:

[Screenshot 2024-11-24 at 13.46.12]

and then I get a bunch of these fetch failed errors all at once in the test output.

Locally the tests run great; it’s mostly in GitHub CI where everything starts to break down. I’m seeing this same fetch failure for every canister call.

dfx 0.24.0
pic-js version: 0.10.0-b0

replica version: I don’t remember, does this matter? :sweat_smile:

Update:

Success :tada: I was able to get this to work by switching the CI runner from ubuntu-latest to macos-latest, matching the OS I’m currently developing on.

In case anyone’s interested, this is what my setup looks like:

name: Pic Integration Tests

on: pull_request

jobs:
  tests:
    runs-on: macos-latest
    steps:
      # Caching
      - uses: actions/checkout@v3
      - name: Cache wasmtime
        id: cacheWasmtimeOSX
        uses: actions/cache@v3
        env:
          cache-name: cache-wasmtime
        with:
          # wasmtime is installed into $HOME/bin by the install step below
          path: ~/bin/wasmtime
          key: ${{ runner.os }}-build-${{ env.cache-name }}

      - name: Cache npm modules
        id: cacheNpm
        uses: actions/cache@v3
        env:
          cache-name: cache-npm
        with:
          path: |
            **/node_modules
          key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
      # should hit cache
      - uses: actions/setup-node@v3
        with:
          node-version: 18
          cache: "npm"

      # Installations (should hit cache)
      - name: Install wasmtime
        if: steps.cacheWasmtimeOSX.outputs.cache-hit != 'true'
        run: |
          mkdir -p $HOME/bin
          echo "$HOME/bin" >> $GITHUB_PATH
          curl -L -O https://github.com/bytecodealliance/wasmtime/releases/download/v0.18.0/wasmtime-v0.18.0-x86_64-macos.tar.xz
          tar xf wasmtime-v0.18.0-x86_64-macos.tar.xz
          cp wasmtime-v0.18.0-x86_64-macos/wasmtime $HOME/bin/wasmtime

      - name: Install Node modules
        if: steps.cacheNpm.outputs.cache-hit != 'true'
        run: npm i --legacy-peer-deps

      - name: Install dfx
        uses: dfinity/setup-dfx@main
        with:
          dfx-version: 0.24.0

      - name: Install vessel
        uses: aviate-labs/setup-dfx@v0.3.2
        with:
          vessel-version: 0.7.0

      # unzips the nns state tarball to the pic directory
      - name: unpack-pic
        run: npm run unpack-pic

      - name: generate declarations
        run: npm run declarations

      - name: run pic suites 
        run: npm run test:pic-suites

Turns out macOS minutes aren’t included, nor are they cheap :sweat_smile:

Outside of working and building everything locally inside a VM, any tips that would make it easy to fetch a Linux/Ubuntu version of the NNS state? I tried setting this up with Docker and ran into a few issues when setting up the NNS (from the dfx extension run nns install command).

thread 'tokio-runtime-worker' panicked at rs/pocket_ic_server/src/state_api/state.rs:525:50:
called `Result::unwrap()` on an `Err` value: hyper_util::client::legacy::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }))
Installation of wasm into canister with ID: ryjl3-tyaaa-aaaaa-aaaba-cai failed with: Request failed for http://127.0.0.1:8080/api/v2/canister/ryjl3-tyaaa-aaaaa-aaaba-cai/call: hyper_util::client::legacy::Error(SendRequest, hyper::Error(IncompleteMessage))
Install args: InstallCodeArgs {
  mode: Reinstall
  canister_id: rrkah-fqaaa-aaaaa-aaaaq-cai
  wasm_module: <856420 bytes>
  arg: <440 bytes>
  compute_allocation: None
  memory_allocation: Some("10_737_418_240")
}

thread 'tokio-runtime-worker' panicked at rs/pocket_ic_server/src/state_api/state.rs:525:50:
called `Result::unwrap()` on an `Err` value: hyper_util::client::legacy::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }))
Installation of wasm into canister with ID: rrkah-fqaaa-aaaaa-aaaaq-cai failed with: Request failed for http://127.0.0.1:8080/api/v2/canister/rrkah-fqaaa-aaaaa-aaaaq-cai/call: hyper_util::client::legacy::Error(SendRequest, hyper::Error(IncompleteMessage))
Install args: InstallCodeArgs {
  mode: Reinstall
  canister_id: rwlgt-iiaaa-aaaaa-aaaaa-cai
  wasm_module: <900573 bytes>
  arg: <11166 bytes>
  compute_allocation: None
  memory_allocation: Some("4_294_967_296")
}

Maybe the team could attach compressed versions of the nns state (linux/darwin) that we can just download from the DFINITY cdn?


I had a few pic.tick() and pic.advanceTime() calls that weren’t being awaited, so adding await before those calls fixed the issue.

This has happened before. There are eslint rules that can help avoid that issue, but I’m not sure if there’s any better way to handle that. Maybe I’ll include an eslint setup in the example projects to encourage people to use the same configuration. If anyone has any other ideas of how to avoid this pitfall then I’d love to hear them.
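For reference, a minimal sketch of such an eslint setup, assuming the typescript-eslint package with type-aware linting and a tsconfig.json at the project root (adjust to your own project layout):

```typescript
// eslint.config.mts — sketch only; assumes the typescript-eslint package
// is installed and a tsconfig.json exists at the project root.
import tseslint from 'typescript-eslint';

export default tseslint.config(
  ...tseslint.configs.recommendedTypeChecked,
  {
    languageOptions: {
      // Type information is required for no-floating-promises.
      parserOptions: {
        projectService: true,
        tsconfigRootDir: import.meta.dirname,
      },
    },
    rules: {
      // Flags promise-returning calls (e.g. pic.tick()) that are not awaited.
      '@typescript-eslint/no-floating-promises': 'error',
    },
  },
);
```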

I tried setting this up with Docker and ran into a few issues when setting up the NNS.

Yeah, this unfortunately won’t work with pic-js in its current state, but it’s possible to get it to work after it’s brought up to speed with the current server. I haven’t put any time into getting that extension to work, but it’s on my to-do list.

Locally the tests run great; it’s mostly in GitHub CI where everything starts to break down. I’m seeing this same fetch failure for every canister call.

How often are you setting up the NNS? We had similar issues on CodeGov when the NNS was set up in a beforeEach, but found it to be more reliable when setting it up in a beforeAll. Then we created a helper function to reinstall the canister under test in a beforeEach to reset the canister’s state. This part may look different or may not even work for other canisters, depending on what the canister does or how it interacts with the NNS.
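As a rough sketch of that structure (the pic methods and hook signatures below are declared stand-ins, not the exact pic-js API):

```typescript
// Sketch only: `pic`, `beforeAll`, and `beforeEach` are declared as
// stand-ins for the real pic-js instance and the Jest globals.
declare const pic: {
  setupCanister(opts: { wasm: string }): Promise<{ canisterId: string }>;
  reinstallCode(opts: { canisterId: string; wasm: string }): Promise<void>;
};
declare function beforeAll(fn: () => Promise<void>): void;
declare function beforeEach(fn: () => Promise<void>): void;

function registerSuiteHooks(wasmPath: string): void {
  let canisterId: string;

  // Expensive: bring up PocketIC and the NNS state once per suite.
  beforeAll(async () => {
    const fixture = await pic.setupCanister({ wasm: wasmPath });
    canisterId = fixture.canisterId;
  });

  // Cheap: reset only the canister under test before every test.
  beforeEach(async () => {
    await pic.reinstallCode({ canisterId, wasm: wasmPath });
  });
}
```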


We’re setting up pic in the beforeAll() hook right now.

But we have 20 test suites (files), that at the moment run 170 tests. During each suite, we spin up pic, and then at the end of the suite (in the afterAll hook) we tear it down again.

It seems like pic is running the test files one at a time and not trying to spin up different instances in parallel (which could be an option for speeding up our tests if we wanted to).

Right now this is the new issue we’re facing in CI (haven’t seen it locally yet). However, my machine is a bit beefier than what GitHub gives us :sweat_smile:

These are the specs for the runner we’re currently using (macos-latest)

This is what we’re using to run all our tests:

npx jest --config packages/pic/jest.config.ts "./packages/pic/**/*.test.ts"

With global setup and teardown files:

// global-setup.ts
import { PocketIcServer } from '@hadronous/pic';

module.exports = async function (): Promise<void> {
  const pic = await PocketIcServer.start({
    showCanisterLogs: true,
    showRuntimeLogs: true,
  });
  const url = pic.getUrl();

  console.log(`PIC server started at ${url}`);

  process.env.PIC_URL = url;
  global.__PIC__ = pic;
};
// global-teardown.ts
module.exports = async function () {
  await global.__PIC__.stop();
};
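For completeness, a sketch of how these two files might be wired in via Jest’s globalSetup/globalTeardown options (the preset and file paths are assumptions, adjust to your project):

```typescript
// jest.config.ts — sketch only; paths and preset are assumptions.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  // Run once before/after the entire test run, not per suite.
  globalSetup: '<rootDir>/global-setup.ts',
  globalTeardown: '<rootDir>/global-teardown.ts',
};

export default config;
```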

Is your project open source by any chance? It sounds like you’re doing things right, but maybe I could take a look through your tests and see if I can potentially spot something else.

Sent you a DM!

@NathanosDev Did you make the PicJS repo private?

Yes, sorry about that. The repo is currently being moved to the DFINITY organization on GitHub and will be private until it’s updated to comply with DFINITY’s open source policies. This process is taking much longer than I expected.


Also noticed the docs were unavailable this morning when trying to debug this issue: Testing Creation of Canister on a subnet using Cycle Minting Canister with PocketIC - #21 by icme

@NathanosDev Do you know if there’s any special way that I should set up my subnets in pic so that they have a reasonable canister id allocation range?

I’ve reinstated the original repo along with the docs, at least until the DFINITY repo is open sourced.

I don’t know the answer to that question regarding the canister id range, but maybe @mraszyk will know.


To create canisters via CMC: you need to make an update call to CMC first to tell CMC about candidate subnets - see Orbit test setup code for an example.


The error Subnet has surpassed its canister ID allocation is strange though: I’d only expect it if you created over 1M canisters on the same subnet. I also wonder if you can reproduce the behavior when starting from a fresh state instead of mounting a pre-configured state (to eliminate unexpected interference).


Just a heads up, the docsite for the moved repo (now pointing to https://dfinity.github.io/pic-js/) currently returns a 404.

Old one at PicJS | PicJS still works though :slightly_smiling_face:

@NathanosDev
I was just testing out the new @dfinity/pic-js package, and there seems to be a slight bug with pic.setupCanister() related to the new canister creation fee changes.

cycles is one of the options passed to SetupCanisterOptions, but it has the following behavior:

  • If less than the canister creation fee (500 billion) is passed, it fails with the following error:
Canister call failed: Canister installation failed with `Canister tqzl2-p7777-77776-aaaaa-cai is out of cycles: please top up the canister with at least 100_415_690_710 additional cycles`.
    Top up the canister with more cycles. See documentation: https://internetcomputer.org/docs/current/references/execution-errors#install-code-not-enough-cycles. Reject code: SysTransient. Error code: CanisterOutOfCycles. Certified: true
  • If more than the canister creation fee is passed (e.g. 600 billion), it creates the canister, but with the full amount passed (i.e. 600 billion cycles).

So the issue then is that I’m not able to set up a canister with less than 500 billion cycles, and a lot of my tests start out with the canister having anywhere from 20 billion to 100 billion cycles (after the creation cost is taken into consideration).

This is a behavior that is available on mainnet, such that I can create a canister that starts out with a balance of 100 billion cycles.

Just a heads up, the docsite for the moved repo (now pointing to https://dfinity.github.io/pic-js/) currently returns a 404.

I’m working on fixing this today :smiling_face_with_sunglasses:

This is a behavior that is available on mainnet, such that I can create a canister that starts out with a balance of 100 billion cycles.

This is because pic-js is using the provisional_create_canister_with_cycles API to create the canister, which doesn’t charge the canister creation fee. If you want to fully emulate what you see on mainnet, you could deploy the cycles ledger using the regular approach with pic-js, then interact with the cycles ledger directly to deploy your canisters.
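As a toy model of the difference (the 500 billion creation fee figure is taken from this thread, not a verified protocol constant):

```typescript
// Toy model only: real cycle accounting has more moving parts.
const CREATION_FEE = 500_000_000_000n; // figure quoted earlier in the thread

// Mainnet-style creation (e.g. via the cycles ledger): the fee is charged,
// so the new canister starts with amount - fee.
function createViaCyclesLedger(amount: bigint): bigint {
  if (amount < CREATION_FEE) {
    throw new Error('not enough cycles to cover the canister creation fee');
  }
  return amount - CREATION_FEE;
}

// provisional_create_canister_with_cycles: no creation fee is charged,
// so the new canister starts with exactly the amount passed.
function createProvisional(amount: bigint): bigint {
  return amount;
}
```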


The new docs are online now: PicJS | PicJS


PicJS has completed the migration to the DFINITY org and a new release is available, the announcement post with more details is here: :loudspeaker: pic-js has moved to the DFINITY GitHub organization! - Developers / JavaScript - Internet Computer Developer Forum.


Hey Nathan, thanks for the suggestions!

For tests this generally isn’t as easy as a primitive, mainly because of the cycles ledger setup in pic (you have to pull in the cycles ledger wasms, set it up in dfx, etc.), and I’ve also run into strange errors around "subnet limitations", such as when trying to use a subnet selection parameter for canister creation with the cycles ledger. My guess is that there’s some missing/required cmc ↔ cycles ledger setup that isn’t as easy as just deploying the cycles ledger locally with pic.

I wasn’t able to get this to work, but here are the approaches that I’ve tried:

First, to get cycles onto the cycle ledger, which is either:

  • Mint ICP → then transfer → notify_mint_cycles :cross_mark: (which I can get to mint cycles, but not to send them to the correct account on the cycle ledger for some reason :person_shrugging:)

or

  • Fabricate cycles to a canister, and then deposit them into an account on the cycles ledger :white_check_mark:

And then, calling create canister to create the canister. The main issue is that attempting to create a canister this way always ends up hitting some strange error:

 Creating canister in subnet rzxjo-5zvly-z2w5l-jbtrg-6vzwe-55wwy-7ed6t-xtux2-ytrlb-znrgh-hae failed with code 1: Could not create canister. Subnet has surpassed its canister ID allocation.{additional_help}

I bounced off this, and ended up trying a different path.

I was curious, so I tried hitting the management canister directly.

To do this, I set up an actor for the management canister with pulled declarations, referencing aaaaa-aa in my setup:

    system: await pic.createActor<ManagementService>(
      managementIDLFactory,
      Principal.fromText("aaaaa-aa")
    ),

Then I called the provisional_create_canister_with_cycles API

const canister = await env.system.provisional_create_canister_with_cycles({
  settings: [],
  specified_id: [],
  amount: [100_000_000_000n], // 100 billion cycles
  sender_canister_version: [],
});

And this seems to work just fine, creating the canister with an outstanding cycle balance (after creation) of 100 billion cycles.

I’ve also tried out pic.createCanister() which also successfully creates the canister with the same starting balance of 100 billion.

  const canister = await pic.createCanister({
    cycles: withCycles,
    controllers,
    ...
  });

So I don’t follow how setupCanister() would fail with a low cycles error, while calling provisional_create_canister_with_cycles directly successfully creates the canister with the intended cycle balance.

So I only seem to run into this low cycles error/issue with pic.setupCanister(), which does both the create and the install.

What might be happening here?


Using the real CMC is really not fun, and I don’t know all the ways it can fail locally. My suggestion would be to use the fake cmc I started here. If there’s any functionality missing, feel free to put together a hacky PR and I’ll gladly review it and publish a new version.


Installing code requires a substantial cycles balance because of its high instruction limit: the cycles for that limit must be prepaid before the install code message executes, with the unused cycles refunded only after execution. So if you set the number of cycles too low, the install code message fails (the error message you shared indeed starts with Canister call failed: Canister installation failed). If you set the number of cycles high enough, installation succeeds, and if the installation was cheap, almost all of the cycles are refunded afterwards, so you end up with a high cycles balance at the end.
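The prepay-then-refund mechanics can be sketched with toy numbers (the prepay amount below is illustrative, not a real protocol constant):

```typescript
// Toy model: install_code prepays cycles for the full instruction limit,
// then refunds whatever was not used after execution.
const INSTALL_PREPAY = 300_000_000_000n; // illustrative, not a real constant

function installCode(balance: bigint, actualCost: bigint): bigint {
  // The full prepay must be covered up front...
  if (balance < INSTALL_PREPAY) {
    throw new Error('canister is out of cycles: cannot prepay install_code');
  }
  // ...but unused cycles are refunded, so only the actual cost is deducted.
  return balance - actualCost;
}
```

This matches the observed behavior: a balance below the prepay fails outright, while a high balance ends up nearly intact after a cheap install.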
