Long Term R&D: Integration with the Ethereum Network

Thanks for being interested in Omnic!
Yeah, we will continue to improve the solution and maintain the project.
You can contact me at ccyanxyz@gmail.com to discuss your needs.

4 Likes

Dear community!

We have in the meantime followed up on the post by @ccyanxyz from Rocklabs to see how their products could be used. The outcome is, in short, that a lot of what is needed for Phase 1 is already available as part of their ic-web3 library.

Let me first reiterate what the goals of Phase 1 and 2 are. Phase 1 is intended to bring a temporary Ethereum integration API to the community upon which products can be built. Phase 1 realizes an Ethereum integration based on HTTPS outcalls to Ethereum cloud services such as Infura. Phase 2 provides the same (or a very similar) API, but implements it via a native, i.e., trustless, integration with the Ethereum network. Phase 2 may take 4-6 months, so it is important to give the community something earlier; this is why we need Phase 1.

What would the API look like?

Current thinking is that the following APIs should be made available:

  • Ethereum JSON RPC API (Raw API): This is the main API the Phase 2 integration will expose. It is essentially an Ethereum JSON RPC API accessible on chain that looks and feels as if it were offered by a full Ethereum node running on chain. The raw API is exposed by the management canister and implemented as part of the Internet Computer blockchain.
  • Managed API: The managed API manages the submission of Ethereum transactions. This is not an easy task even when submitting Ethereum transactions off chain, and it is harder on chain due to extra latency, e.g., through t-ECDSA and XNet communication with the Ethereum subnet. The managed API automates many of the tasks behind successfully submitting Ethereum transactions; its clients only specify a policy for how quickly, and at what cost, they want the transaction to be submitted. The managed API can be implemented either as part of the management canister API or as a library in user space. A management canister implementation seems much more powerful, as the API needs to interact with Ethereum repeatedly for a single submission, which is hard to achieve transparently in a library. Recommendation: part of the system implementation
  • User-space APIs: The right abstraction for many use cases is something like a web3 or ethers API. This API is provided by a user-space library and is clearly not part of the system-provided API.
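To make the raw API concrete, here is a minimal sketch (plain Rust with no IC dependencies; the method and parameter names follow the standard Ethereum JSON-RPC specification, everything else is illustrative) of building the request body a canister might hand to the Phase 1 HTTPS outcall or, later, to the management canister:

```rust
// Build a standard Ethereum JSON-RPC request body as a string.
// In Phase 1 this body would be sent via an HTTPS outcall to a cloud
// provider; in Phase 2, the same body would go to the native integration.
fn json_rpc_request(id: u64, method: &str, params: &[&str]) -> String {
    let params_json: Vec<String> = params.iter().map(|p| format!("\"{}\"", p)).collect();
    format!(
        "{{\"jsonrpc\":\"2.0\",\"id\":{},\"method\":\"{}\",\"params\":[{}]}}",
        id,
        method,
        params_json.join(",")
    )
}

fn main() {
    // Query the balance of an (example) address at the latest block.
    let body = json_rpc_request(
        1,
        "eth_getBalance",
        &["0x0000000000000000000000000000000000000000", "latest"],
    );
    println!("{}", body);
}
```

The point of keeping the raw API at this level is that the same request bodies work unchanged against both the Phase 1 and the Phase 2 transport.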

Implementing Phase 1

Rocklabs have built the ic-web3 library which builds on the rust-web3 library and enables HTTPS outcalls as transport for communication to Ethereum cloud nodes like Infura. This already does a lot of what was discussed as goals for Phase 1.

Tasks / required changes

  • The ic-web3 library offers a web3 API out of the box. This is a nice add-on, but the library also needs to expose the JSON RPC API to its clients to address our use case of having the same API as is planned for Phase 2.
  • A managed API for tx submission is likely not part of an initial release, but should be added on top to be on par with what we plan to have for Phase 2. Code for this may be reusable in our Phase 2 implementation, and we can shorten the time to market for Phase 2 by starting work on this now.
  • The handling of the API key in transit should be improved: currently, the API key is part of the URL. It is preferable to send it as an HTTP header, for various reasons related to information security and HTTP best practices.
  • The way the API key is provided to the library should be adapted: currently it is embedded in the source code and thus trivially visible to, and usable by, anyone, possibly resulting in quota depletion or denial of service. A preferable, yet simple, approach is for an authorized principal to provide the API key via a canister method to the canister that runs the library. Subnet blocks are currently not public, so this is a viable approach to protect the API key as best we can. Compromised node providers will still see the API key, as it resides in the canister's storage.
  • Building a canister that offers the JSON RPC API to other canisters and uses the library internally. Everyone can deploy their own version of this canister. The canister offers a mechanism to set an API key and Ethereum cloud provider.
  • Optional: Extension of the library and canister to use multiple Ethereum cloud providers to reduce trust assumptions on a single party.
  • Define cycles charging for the different RPC endpoints for the library and canister.
  • For Phase 2: Provide an "IC message" transport implementation for ic-web3 that uses the native integration of Phase 2 instead of HTTPS outcalls. This can be used as a web3 library in canister projects. Also explore the use of the Rust ethers library as another option. Motoko: out of scope for now, t.b.d. separately
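As a rough illustration of the managed API's policy idea from the task list above, the sketch below simulates a gas price escalation schedule derived from a caller-chosen urgency level: the manager bumps the price on each resubmission attempt. All names and numbers are hypothetical; no such interface exists yet:

```rust
// Hypothetical managed-submission policy: the caller only states how
// urgent the transaction is, and the manager derives a gas price
// escalation schedule for resubmission attempts. Illustrative only.
#[derive(Clone, Copy)]
enum Urgency {
    Low,
    Normal,
    High,
}

fn gas_price_for_attempt(base_gwei: u64, urgency: Urgency, attempt: u32) -> u64 {
    // Start above the base fee estimate and bump by a fixed percentage
    // per retry; more urgent transactions start higher and escalate faster.
    let (start_pct, bump_pct) = match urgency {
        Urgency::Low => (100, 5),
        Urgency::Normal => (110, 10),
        Urgency::High => (125, 20),
    };
    base_gwei * (start_pct + bump_pct * attempt as u64) / 100
}

fn main() {
    // First attempt at Normal urgency: 10% above a 30 gwei base estimate.
    assert_eq!(gas_price_for_attempt(30, Urgency::Normal, 0), 33);
    // Third attempt at High urgency escalates considerably faster.
    assert_eq!(gas_price_for_attempt(30, Urgency::High, 2), 49);
    println!("escalation schedule ok");
}
```

A real implementation would additionally track nonces, watch for inclusion, and stop escalating once the transaction is confirmed; the sketch only shows the policy-driven part that a library alone would struggle to do transparently.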

Differences between Phase 1 and Phase 2 API

The idea is that the API of Phase 1 and Phase 2 are as similar as possible. However, due to the differences in implementation, there may be some (subtle) differences. A non-exhaustive list of currently-known differences is presented next:

  • Pricing is likely different.
  • Latency is likely different, which can have an effect on tx submission and gas pricing.
  • Under bad subnet conditions, the Phase 1 API may not behave as expected for calls that are in general not deterministic; Phase 2 will offer stronger mechanisms here.
  • There may be a difference in which RPC endpoints are exposed in Phase 1 and 2. This should not affect the majority of use cases, though.
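Related to the optional multi-provider task above: a simple way to reduce trust in any single Ethereum cloud provider is to accept a result only when a strict majority of the queried providers agree on it. A toy sketch of that check (all details illustrative):

```rust
use std::collections::HashMap;

// Accept an answer only if a strict majority of providers returned it;
// otherwise reject rather than trust any single provider.
fn majority_answer(answers: &[&str]) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for a in answers {
        *counts.entry(a).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|(_, n)| *n * 2 > answers.len())
        .map(|(a, _)| a.to_string())
}

fn main() {
    // Two of three providers agree on the balance: accept it.
    assert_eq!(
        majority_answer(&["0x10", "0x10", "0x0f"]),
        Some("0x10".to_string())
    );
    // No majority: reject rather than trust either answer.
    assert_eq!(majority_answer(&["0x10", "0x0f"]), None);
    println!("quorum check ok");
}
```

Note that answers must be normalized first (e.g., stripping request IDs), since providers may wrap identical results in slightly different response envelopes.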

Next steps

  • As the ic-web3 library is a perfect fit for what we need, we propose to move ahead with the Rocklabs team to adapt the library so that it fulfills the Phase 1 requirements, and to build a canister around it to have a standalone deployment of the API. A bounty / bounties will be available for supporting the work.
  • Rocklabs & the overall community & DFINITY work together to make this happen.

Do you see any requirements we may have missed or do you think the API should be different?
If so, please let us know now, before implementation starts.

13 Likes

I think it’s extremely important to consider offering a query version of this API. This would be very beneficial for frontend and backend canister applications that just want to query the Ethereum blockchain. Composite queries and client HTTP queries would then have a rock-solid, always-available Ethereum API, which would be fantastic.

Of course for actually submitting transactions the update API would be necessary, and for ensuring the highest level of security when querying the data.

6 Likes

Would this query API be materially different from The Graph?

2 Likes

For phase 1 I can see this as being implemented in a canister with the same API that the management canister will eventually expose. This canister would communicate with the tECDSA subnet and use http outcalls to submit the final signed transaction to Ethereum.

Yes it would be quite different in that it’s just exposing the raw standard Ethereum JSON-RPC API but callable as a query and not an update. That being said, there’s also an official (I believe, Geth exposes it and there’s an EIP if I remember correctly) GraphQL API for Ethereum, and it would also be fantastic to expose that.

The Graph goes far beyond the basic JSON-RPC/GraphQL Ethereum APIs and provides GraphQL APIs to indexed blockchain data. Phase 1/Phase 2 wouldn’t be doing any kind of blockchain indexing, just offering the raw capabilities. We could then build things like The Graph on top quite easily, and probably even more easily with Sudograph.

5 Likes

The GraphQL EIP is not standard yet: EIP-1767: GraphQL interface to Ethereum node data and Geth seems to be the only major client with an implementation. It would be nice, but understandable if left out.

The main point is making the JSON-RPC API queryable.

2 Likes

What is this Ethereum subnet? I think we should consider not having special subnets for these types of operations. IMO having a tECDSA subnet is a disappointing design decision, as essentially the tECDSA subnet is its own blockchain and all traffic must be routed through it. I much prefer the idea of each subnet having its own specific Ethereum functionality, allowing canisters within the subnet to not have to rely on the security of another subnet or any other implications of xnet operations.

3 Likes

Maybe I’m behind on the Bitcoin integration architecture. In early designs I know that each replica was connected to Bitcoin and each subnet would come to consensus on the relevant state of the Bitcoin blockchain. Has this architecture changed? Is there one Bitcoin subnet now? Or does each subnet have its own Bitcoin canister?

2 Likes

Hi @dieter.sommer, Roman asks an interesting question which I also had.

If we can build ckETH using Phase 1, once Phase 2 is complete will there be a migration of the old ckETH to the new and improved ckETH? Or will it somehow replace it? How does this work?

4 Likes

I had expected the Ethereum integration to be built as a light client or full node on chain rather than via a JSON RPC API. Are there any plans to implement these client approaches in the future?

1 Like

This is an excellent question and I cannot give you a definitive answer here. In my opinion, it is the community that would need to decide whether a ckETH canister running in production would be shifted over from the Phase-1 to the Phase-2 implementation. The amount of ETH in such a canister is probably a guiding light for this, as well as how many cloud providers we query for each ETH balance check. Regardless, implementation of ckETH and the like can be started soon, there will be a testnet phase, etc., so the proposed approach can in any case help cut the time to market.

Would like to hear what your opinions on your own questions would be.

This would be nice; however, the use of HTTP outcalls currently prevents the use of queries. The responses of HTTP outcalls inherently need to go through consensus.
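For background on why consensus matters here: with HTTPS outcalls, each replica fetches the response independently, and a canister-supplied transform function normalizes the responses so they can agree byte for byte. A toy illustration of the idea (real code would parse the JSON properly; this only shows the principle, and the field choice is illustrative):

```rust
// Normalize a JSON-RPC response by stripping a variable "id" field so
// that responses fetched by different replicas compare equal. A real
// transform would parse the JSON and also drop variable HTTP headers.
fn normalize(response: &str) -> String {
    response
        .split(',')
        .filter(|part| !part.trim_start().starts_with("\"id\""))
        .collect::<Vec<_>>()
        .join(",")
}

fn main() {
    // Two replicas got the same result but different request IDs.
    let a = "{\"jsonrpc\":\"2.0\",\"id\":7,\"result\":\"0x10\"}";
    let b = "{\"jsonrpc\":\"2.0\",\"id\":9,\"result\":\"0x10\"}";
    // After normalization, the replicas' responses agree and can pass consensus.
    assert_eq!(normalize(a), normalize(b));
    println!("normalized responses agree");
}
```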

The light client API of Ethereum is still under heavy development, and the trust model of a light client is also weaker than that of a full client, which is what we are building on in Phase 2. A full Ethereum node built on chain currently exceeds what a single canister can do and would thus be too large an engineering effort. Both of those approaches are therefore not suitable for a reasonably fast go-to-market. Note, however, that there is a bounty out for a design of an on-chain Ethereum light client. We think the light client approach might be interesting once the light client APIs have settled and would like to have it explored.
@hokosugi Can you share what your concerns are with the approach proposed for Phase 2? It’s completely trustless and probably the most scalable design.

An Ethereum subnet is a subnet that has Ethereum integration enabled, i.e., runs Ethereum nodes next to IC nodes. This is something we want to have on one or a few subnets, but not on every subnet, as the overhead of an Ethereum subnet is quite considerable. I understand your concern that you now need to trust multiple subnets, but multi-subnet dapps are, in my opinion, one of the guiding principles behind the IC. To make it clear, though: you could enable Ethereum integration on many subnets if you really wanted to.

9 Likes

Thanks for the thoughtful response.

I am pretty sure most people (including myself) would like the Phase 1 ckETH to be migrated over to the Phase 2 ckETH. From a user perspective, it would be great to abstract away as much as possible the transition from Phase 1 to Phase 2.

We want to be able to announce an ETH integration now and then have the tech improve over the course of the year. It would be fairly disruptive to have multiple types of ckETH, and people will get confused between Phase 1 and Phase 2, particularly newcomers to the IC.

Is it possible to just have all the stuff happen in the background?

8 Likes

Thank you a lot for answering @dieter.sommer :pray:

My opinion will be unpopular, without a doubt.

But to facilitate your reading and save you time, here is the question at the end of my post: “Could you tell us whether you see advantages to bypassing Phase 1 and starting with Phase 2, taking the time, and only afterwards implementing ckETH, following the same order as for ckBTC? Or is there no dilemma for you, and are you 100% set on starting with Phase 1?”

Now, my opinion:

I would prefer that Phase 2 become Phase 1, and that only once that phase is accomplished, ckETH be built upon the native integration. This would follow the same well-founded logic behind the BTC integration and ckBTC development, and currently I don’t see any good reason not to follow the same order.

Even if it were possible to launch a first ckETH before Phase 2 and afterwards « refresh » ckETH by rebuilding it on the Phase 2 development, it:

  1. could look disordered and make people wonder « why did they reverse the order in the first place? »;
  2. could look inconsistent with the BTC/ckBTC launch order;
  3. would add complexity to the whole integration process and cost DFINITY far more resources than if they had simply kept the same order as for BTC/ckBTC.

I feel like we put the cart before the horse here.

The BTC integration is a marvel, and ckBTC a promising success built upon it. DFINITY took the time, yes, but they did well, because it was launched only once, following a well-justified order: technically and marketing-wise justified, and understandable by everyone, even the non-dev public (very important). Given this success, why don’t we just respect the same order, even if it takes more time?

In my humble opinion, here is why, but if I am wrong, please tell me.

People had been waiting for the BTC integration for a long time. During that whole time, DFINITY was heavily pressured and even harassed. We were in a FUD-filled environment; the IC was attacked everywhere by everyone, and you were bullied daily by some hysterical people on Twitter and here. My point is that I am afraid this past is determining DFINITY’s ETH integration and ckETH timeline, and by consequence their order.

Maybe I am wrong, but I feel as if the ETH integration was announced very quickly after the BTC integration in Twitter messages. People were even surprised by this proximity. We were told: « it will be out quickly (in March), we could do it very soon, let’s do it all together. Help us, people, you are welcome. Take it, this integration will be yours ».

My whole point is in these announcements: I am afraid that DFINITY has been somewhat traumatized by the « accusations » of being slow, of centralizing the chain (by owning the integration process), of prioritizing the BTC integration over other things presumed more important by some fellows, etc.

Personally, I have disagreed with each of these accusations, and we are blessed that DFINITY prioritized the BTC integration and took the proper time to sort it out in due time and in due shape (even if I noted the acceleration in the last days, maybe to reassure the market and reward people’s patience). When I read that other devs could have done this while DFINITY should have done other things, we are forced to recognize that no dev did, and maybe no one would have. We will never know.

Today, I see a sort of rush to integrate ETH one way or another, as fast as possible, so much so that ckETH would come first and the native integration only afterwards. At the least, I would, like @dfisher, be in favor of finally having a proper ckETH developed with the same logic as ckBTC’s, i.e., by not keeping the first version of ckETH launched before the native integration. But ideally, I would start with Phase 2, bypassing Phase 1, and be patient enough to do things orderly and properly. How could we justify that a first ckETH was launched before being properly launched, « proper » meaning « following the same logic as ckBTC: waiting for the native integration, for well-founded reasons »?

This is why I would prefer that DFINITY keep the same state of mind they had for the BTC integration, because it is clearly a success. Yes, it took time, but now it is done, and properly done. I feel uncomfortable with the narrative of the ETH launch being divided into phases that look like starting with the end and finishing with the beginning. And I am afraid that it creates confusion and complexity, notably for newcomers and for the haters.

I would prefer that the native ETH integration come first and, once sorted out, that ckETH be built upon it.

We have plenty of time before the next BTC halving. The bear market is still here. And anyway, DFINITY should remain the master of the clocks (« rester maître des horloges ») by not being in a hurry and not letting the rhythm or the order be dictated to them. This integration is too important to be « improvised ». We will have plenty of chains/tokens to integrate where non-DFINITY devs can work their magic, but the first two are too important to improvise, given that a crowning success happened under DFINITY’s lead.

To summarize: this integration should be driven EXACTLY like BTC’s. We should channel the enthusiasm about the BTC integration and ckBTC into doing exactly the same thing with ETH. There is no rush. The only rush is to do things properly. As a researcher myself (in another domain), I believe the main strength of the IC is that its team is made up of researchers, listening to the pace of research, not to the contingencies of the markets. And it is for the best. Some doers will bring another, complementary spirit to the IC, but DFINITY must stay as they are, keeping their researchers’ state of mind and taking the necessary time rather than the time investors can psychologically handle.

Could you tell us whether you see advantages to starting with Phase 2, taking the time, and only afterwards implementing ckETH? Or is there no dilemma for you, and are you 100% set on starting with Phase 1?

I am probably genuinely missing a point, so don’t hesitate to say for which technical reasons you are not proceeding just as with BTC/ckBTC. But if it is only political or marketing-related, you should go into warrior mode like you did before, for our greatest happiness.

6 Likes

To be clear, I think we should do Phase 1 quickly and then Phase 2 later, but for ckETH I think the user shouldn’t notice the difference when the migration happens (if that’s possible).

3 Likes

Would this not be possible in phase 2? Does phase 2 include a full Ethereum archival node? What type of node will phase 2 enable on the IC?

I would think this is very possible. Basically, phase 1 has certain security properties and trade-offs, and phase 2 most likely has better ones. If the community wants to build and maintain ckETH, it should be a seamless integration, and behind the scenes, phase 2 will simply increase the security.

If ckETH is an ICRC-1 token, then the interface shouldn’t ever have to change, assuming the NNS or some other mechanism can upgrade the ckETH canister(s).

That being said, I have issues with the NNS having write access to these canisters, but perhaps that’s another topic for another time.

I think I’ve got the general idea correct here, but @dieter.sommer please correct me if I got anything wrong.

4 Likes

I don’t think that we are targeting an archival node, but probably a full node that prunes transaction history regularly.

On queries: AFAIK, we are currently thinking of using the HTTPS outcalls implementation internally, but each replica would route the request to its co-located Ethereum node. So queries wouldn’t be directly possible, but you could in principle cache (some part of) the ETH state in a canister to make it available as a query (at least on the same subnet, for composite queries).
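The caching idea mentioned above could look roughly like this: an update call periodically refreshes some Ethereum state together with the block height at which it was observed, and query calls then serve the cached value so callers can judge its staleness. The structure below is hypothetical, not an actual IC API:

```rust
use std::collections::HashMap;

// Hypothetical canister state: cached Ethereum balances tagged with the
// block height they were observed at.
struct EthStateCache {
    balances: HashMap<String, u128>,
    block_height: u64,
}

impl EthStateCache {
    // What an update call would do after fetching fresh state from Ethereum.
    fn refresh(&mut self, height: u64, observed: &[(&str, u128)]) {
        self.block_height = height;
        for (addr, bal) in observed {
            self.balances.insert(addr.to_string(), *bal);
        }
    }

    // What a plain or composite query could return: the cached value
    // plus the height it was observed at, so callers can judge staleness.
    fn balance(&self, addr: &str) -> Option<(u128, u64)> {
        self.balances.get(addr).map(|b| (*b, self.block_height))
    }
}

fn main() {
    let mut cache = EthStateCache { balances: HashMap::new(), block_height: 0 };
    cache.refresh(17_000_000, &[("0xabc", 42)]);
    assert_eq!(cache.balance("0xabc"), Some((42, 17_000_000)));
    println!("cache ok");
}
```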

4 Likes

Is Phase 1 complete?
Any update?

We are increasingly living in a cross-chain world. As such, we should aim not just to integrate with the Ethereum network, but to design this integration from the ground up to target all possible EVM-compatible networks. It’s not enough to provide an Ethereum integration: there’s Arbitrum, Polygon, Optimism, Coinbase’s L2, Binance Smart Chain, etc. Most DeFi activity takes place beyond the confines of Ethereum. And once the Ethereum integration is complete, the community will naturally ask for further integrations. From this perspective, it would be great to use HTTP outcalls to accommodate as many different networks as possible. Let’s be driven by pragmatic business decisions.

Before we start building this on the IC, we should review the design decisions of other oracle networks, for instance ChainLink. In ChainLink deployments, you set up your own network of nodes; each node runs something similar to a Docker container and has access to its own private state, e.g., it can privately access its own API keys.

From this perspective, it is easy to see what is missing for both HTTP outcalls and the ETH integration: we need a way to direct different API keys and/or state to different nodes. As a user, I should be able to assign API key 1 to one subset of nodes, API key 2 to another subset, etc. With this feature, cross-chain integrations like Omnic, used for bridging tokens, become secure; without it, I predict we will have major outages and DoS attacks on bridged tokens in the future. Currently, our scheme is not decentralized, as a single node provider can DoS most HTTP outcalls that use an API key.
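To illustrate the requested capability (this is not an existing IC feature, and every name below is hypothetical), the sketch assigns distinct API keys to disjoint subsets of a subnet’s replicas, so a depleted or leaked key only affects the replicas holding it:

```rust
// Hypothetical key-sharding scheme: distribute API keys round-robin over
// a subnet's nodes, so no single key serves every replica and one
// depleted or leaked key cannot DoS the whole outcall.
fn assign_keys(node_ids: &[&str], api_keys: &[&str]) -> Vec<(String, String)> {
    node_ids
        .iter()
        .enumerate()
        .map(|(i, node)| (node.to_string(), api_keys[i % api_keys.len()].to_string()))
        .collect()
}

fn main() {
    let assignment = assign_keys(&["n1", "n2", "n3", "n4"], &["key-A", "key-B"]);
    // Nodes alternate between the two keys, so no single key serves all.
    assert_eq!(assignment[0].1, "key-A");
    assert_eq!(assignment[1].1, "key-B");
    assert_eq!(assignment[3].1, "key-B");
    println!("key sharding ok");
}
```

In a real deployment each node would of course have to receive its key privately (the per-node private state the post asks for), not via shared canister storage.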

@dieter.sommer @Manu @benji @domwoe

10 Likes