Probably need to loop @rvanasa in on this one. Using the new beta, I'm getting the same issue.
I'm trying to call request on the EVM RPC canister. I have a custom provider registered with:
localProvider = await evm_fixture.actor.registerProvider({
  cyclesPerCall: 1000000000n,
  credentialPath: "",
  hostname: "127.0.0.1:8545",
  credentialHeaders: [],
  chainId: 31337n,
  cyclesPerMessageByte: 1000000n,
});
I'm trying to validate my code against a locally running canister, and I've tried two different approaches.
Option 1: I've set a custom provider to be used:
Custom: {
  url: "http://127.0.0.1:8545",
  headers: [],
}
This one fails fairly straightforwardly with the error below, and I'm guessing there may be a cycles miscalculation in the EVM RPC canister for custom providers, because I'm loading up the call to the RPC canister with:
Cycles.add<system>(state.cycleSettings.amountPerEthOwnerRequest); //set to 2_500_000_000_000
let result = await rpcActor.request(rpc, json, 6000);
(It is possible I'm passing the wrong third parameter... this response is rarely more than 500 bytes, but I'm supplying more for safety.)
expect(received).toEqual(expected) // deep equality
Expected: "0x3e9185d16a6a0857a2db4ddc2c56cea34baee322"
Received: [[{"Err": {"RPC": {"Ethereum": {"HttpOutcallError": {"IcError": {"code": {"CanisterReject": null}, "message": "http_request request sent with 11_540_000 cycles, but 113_562_800 cycles are required."}}}}}}]]
...with a third param of 500 I get:
Received: [[{"Err": {"RPC": {"Ethereum": {"HttpOutcallError": {"IcError": {"code": {"CanisterReject": null}, "message": "http_request request sent with 7_540_000 cycles, but 61_562_800 cycles are required."}}}}}}]]
So clearly the third param affects things, but I can't tweak it enough to push more cycles onto the HTTP outcall.
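For what it's worth, here is my arithmetic on those two rejections plus the sanity check I'd like to run. The per-byte figures are just deltas taken from the error messages, and requestCost is the query I see in the candid I'm building against, so treat its exact shape as an assumption:
// Deltas between the two rejections above (third param 6000 vs 500):
const sentDelta = 11_540_000 - 7_540_000;        // 4_000_000 cycles
const requiredDelta = 113_562_800 - 61_562_800;  // 52_000_000 cycles
const byteDelta = 6000 - 500;                    // 5_500 bytes
console.log(sentDelta / byteDelta);              // ~727 cycles attached per extra response byte
console.log(requiredDelta / byteDelta);          // ~9_455 cycles required per extra response byte

// Sanity check, assuming the candid really exposes
// requestCost : (RpcService, text, nat64) -> (RpcResult<nat>) query.
// Ask the EVM RPC canister what it thinks this call costs and compare it
// with the 113_562_800 the replica is demanding.
const json = '{"jsonrpc":"2.0","id":1,"method":"eth_call","params":[]}'; // placeholder payload
const cost = await evm_fixture.actor.requestCost(
  { Custom: { url: "http://127.0.0.1:8545", headers: [] } },
  json,
  6000n,
);
console.log(cost);
If requestCost comes back far below what the replica requires, that would point at the custom-provider pricing rather than anything on my side.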
Option 2:
I set up a custom provider by authing myself (rough sketch of that call below) and then adding:
localProvider = await evm_fixture.actor.registerProvider({
  cyclesPerCall: 1000000000n,
  credentialPath: "",
  hostname: "127.0.0.1:8545",
  credentialHeaders: [],
  chainId: 31337n,
  cyclesPerMessageByte: 1000000n,
});
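For completeness, the auth step I mentioned above is roughly this; the authorize method and the RegisterProvider variant are taken from the candid file I have locally, so treat the exact shape as an assumption:
// Rough sketch of the auth call; myPrincipal is a placeholder for whatever
// identity the test fixture runs as.
await evm_fixture.actor.authorize(myPrincipal, { RegisterProvider: null });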
I have some suspicions right off the bat here, since the hostname doesn't specify HTTP or HTTPS and I'm guessing my local RPC node is HTTP only. But the error I get doesn't make a ton of sense.
Basically I now call with:
rpc: {
  Provider: 22n
}
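Side note: I'm hard-coding 22n there. Since registerProvider appears to return the id of the newly registered provider (which is presumably what localProvider above holds, assuming the return value really is the id), I could pass that through instead:
// Assumption: registerProvider returns the freshly assigned provider id as a nat64.
const providerId: bigint = await evm_fixture.actor.registerProvider({
  cyclesPerCall: 1000000000n,
  credentialPath: "",
  hostname: "127.0.0.1:8545",
  credentialHeaders: [],
  chainId: 31337n,
  cyclesPerMessageByte: 1000000n,
});
// ...and then pass { Provider: providerId } through to my canister instead of the hard-coded 22n.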
I seem to have gotten past my 'not enough cycles' error, but I get about 40 or so messages of:
console.error
PocketIC server encountered an error BadIngressMessage("Failed to answer to ingress 0x68e42066d75f343287588aac8e78d47f833583c4b3c081b42f782e42ee200bba after 100 rounds.")
at intervalMs (node_modules/@hadronous/pic/src/http2-client.ts:144:19)
at async Timeout.runPoll [as _onTimeout] (node_modules/@hadronous/pic/src/util/poll.ts:17:24)
And then the last of these messages has this extra "This is a bug" tag:
2024-10-08T02:57:45.949188Z ERROR pocket_ic_server::state_api::state: The instance is deleted immediately after an operation. This is a bug!
console.error
PocketIC server encountered an error BadIngressMessage("Failed to answer to ingress 0xc04b678382eff65ebdae32793d0720b9d88594a2196806250ee5944c2e084d65 after 100 rounds.")
at intervalMs (node_modules/@hadronous/pic/src/http2-client.ts:144:19)
at async Timeout.runPoll [as _onTimeout] (node_modules/@hadronous/pic/src/util/poll.ts:17:24)
And then I get a bunch of:
console.error
PocketIC server encountered an error UpdateError { message: "Instance was deleted" }
at intervalMs (node_modules/@hadronous/pic/src/http2-client.ts:144:19)
at async Timeout.runPoll [as _onTimeout] (node_modules/@hadronous/pic/src/util/poll.ts:17:24)
Maybe it is trying to make all the requests as if they came from different nodes? I've tried setting up the RPC canister with a nodes_in_subnet parameter of both 1 and 31.
One issue I have here is that I don't have any special canisters set up, like the NNS or system subnets. Perhaps that's my problem and I need to pull those in and have the state bootstrapped?
Unfortunately, I don't have any way to make the EVM RPC canister more chatty, so I can't see what it is and isn't doing.
(I did just try installing an HTTPS proxy that forwards port 443 to the local RPC host (Hardhat), but I received all the same errors.)
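The only other lever I can think of is PocketIC's live mode, which, if I'm reading the pic docs right, is what actually performs canister HTTPS outcalls instead of leaving them pending. I haven't tried it yet, and the method name below is just what I see in the @hadronous/pic docs, so treat it as an assumption:
// Untried sketch: switch the instance into live mode before exercising the request,
// so the outcall to the local hardhat node is actually made. `pic` is the PocketIc
// instance from my fixture setup; verify makeLive exists in the pic version in use.
const gatewayUrl = await pic.makeLive();
// ...run the eth owner request / assertions here...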