EVM RPC Canister

I guess I can intercept and proxy one of the established methods and reroute it locally, but that seems very hacky.

I’m not sure what you’re trying to do, but with a Custom provider you should be able to route your calls to localhost if you want to, e.g.

dfx canister --ic call evm_rpc eth_getTransactionReceipt '(variant {Custom = record {chainId = 31337 : nat64; services = vec { record {url = "https://localhost/something"}; }}}, null, "0x643e670872578855d788f4b9862b1a8cdc88d36ffd477ba832a2e611212c0668")' --with-cycles=10000000000

Is that sufficient for your use-case?

Maybe. I’ll give it a shot, but I seem to remember wanting to set it up as a standard method so I could test governance around which RPCs should be used for a specific item/use case.

Is this just because there isn’t a good example for Motoko, or because there is something Motoko can’t do? If I set up the params and calculate the ABI and all of that myself, can I use it?

As another question, there is a bunch of gas stuff in the call, but I’m guessing I don’t have to set that for read operations.


I don’t see any reason why you shouldn’t be able to use it from Motoko when you are able to generate the proper JSON-RPC payload.

Yep.

It is possible to call smart contracts from Motoko, but because there currently isn’t a library for Solidity ABI encoding/decoding, we don’t officially have examples for doing so. I’ll see if we can change the wording in the docs to more accurately reflect this.
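To make the “generate the proper payload yourself” part concrete: for a simple read, the calldata is just the 4-byte function selector followed by each argument ABI-encoded as 32 bytes. A minimal sketch in JavaScript (in Motoko the same byte-level concatenation applies; the selector below is hardcoded rather than computed, since there is no keccak256 in the standard library, and `ownerOf(uint256)` is used purely as an illustrative example):

```javascript
// Sketch: hand-building ABI calldata for a simple read like ownerOf(uint256).
// The selector is the first 4 bytes of keccak256("ownerOf(uint256)"),
// hardcoded here because Node's stdlib has no keccak256.
const OWNER_OF_SELECTOR = "6352211e";

// ABI-encode a single uint256 argument: 32 bytes, left-padded hex.
function encodeUint256(value) {
  return BigInt(value).toString(16).padStart(64, "0");
}

// Full calldata for ownerOf(tokenId): selector + one encoded argument.
function ownerOfCalldata(tokenId) {
  return "0x" + OWNER_OF_SELECTOR + encodeUint256(tokenId);
}
```

The resulting hex string is what goes into the call object of an `eth_call` JSON-RPC request.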


@gregory-demay

Potentially interesting behavior I found today. I’ve had the RPC working with a local Hardhat for a while now with eth_sendRawTransaction. I just converted over to try to use eth_call, and I’ve been getting an error. My current setup is that I run pic.js and fulfill the http_outcalls through a mock function, basically setting up a proxy: I take the request body found in the pending outcalls collection and relay it to the RPC running on my machine.

The error I’m getting is below:

TRACE_HTTP src/rpc_client/eth_rpc/mod.rs:242 Got response (with 315 bytes): {"jsonrpc":"2.0","id":0,"error":{"code":-32603,"message":"Error: Transaction reverted: function selector was not recognized and there's no fallback nor receive function","data":{"message":"Error: Transaction reverted: function selector was not recognized and there's no fallback nor receive function","data":"0x"}}}

I’ve tracked it down to the fact that the request in the replica has a param of “input”, but Hardhat wants “data”.

decodedBody {"jsonrpc":"2.0","method":"eth_call","id":0,"params":[{"type":"0x00","to":"0xe7f1725e7734ce288f8367e1bb143e90bb3f0512","value":"0x0","input":"0x6352211e0000000000000000000000000000000000000000000000000000000000000000","chainId":"0x7a69"},"latest"]} 

Alchemy and QuickNode seem to also want data instead of input for eth_call. Is there something internally where you all are handling this? I’m using the custom provider here.

I can keep going with testing because I can intercept the object and change the key in the request, but I’m guessing that if I deployed just to the local replica and let the http_outcall take its natural course that the local hardhat is going to reject it.
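For anyone following along, a minimal sketch of that interception, assuming the workaround is just renaming the key in the eth_call body before relaying it to Hardhat (the function name is mine, not from any library):

```javascript
// Sketch of the interception workaround: rewrite an eth_call JSON-RPC body
// so that older Hardhat versions accept it, renaming "input" to "data" in
// the call object. Illustrative only; adjust to your proxy's shape.
function rewriteForHardhat(body) {
  const req = JSON.parse(body);
  if (req.method === "eth_call" && Array.isArray(req.params)) {
    const call = req.params[0];
    if (call && call.input !== undefined && call.data === undefined) {
      call.data = call.input;
      delete call.input;
    }
  }
  return JSON.stringify(req);
}
```

Requests for other methods pass through unchanged, so the same hook can sit in front of every relayed outcall.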

In prod I expect to use hardcoded providers, but in the meantime, is it possible that eth_call is broken for custom RPC providers? Or maybe this is configurable in some way?

Hi @skilesare

According to the Ethereum JSON-RPC specification, the field should be input (there is no data field in the GenericTransaction type used by eth_call).

The reason for this mess between data and input is that apparently 6 years ago, the field input was introduced for consistency reasons (see #15628) and this has been a source of problems for Ethereum clients since then (see all references in the Github issue, including foundry-rs/foundry#5917). All modern providers should accept input (some projects already use eth_call in prod and AFAIK they didn’t have that particular problem).

Interesting. Well…the Hardhat RPC doesn’t, so this is going to be an interesting issue for testing.

It may be worth correcting this at the source in Hardhat to make sure it falls back to input if it doesn’t find data. I’m not sure what it uses under the covers; I’ll investigate when I can. In the meantime, it might be worth a note in the docs that custom providers may not work with Hardhat test instances.

FYI for anyone else who ends up here, Hardhat has corrected this in the latest version, so hopefully no one else will run into it.


Thanks @skilesare for the update! Would it make sense to also close #343 or do you see something still open?