Enable canisters to make HTTP(S) requests

Maybe an easier first step is to enable “fire-and-forget” requests where you don’t care about the response, e.g. push notifications. See this related thread.

Also, how does the Bitcoin integration solve this? Apologies if this has been answered before.


Regarding what we currently have in mind for a first MVP, please see the edit of the topmost post with additional details.

The Bitcoin integration feature solves this with a more specialized implementation targeted at Bitcoin: each replica essentially acts as a lightweight Bitcoin client that keeps some state about the Bitcoin network, synchronizes with it, and feeds blocks into consensus.

The consensus integration is quite different, though:

  • Bitcoin blocks are what we call “self-validating payloads”: they can be validated, e.g., based on the proof of work used to mine the block. That is, we do not need to reach agreement on what each replica sees; blocks can be handled much like ingress messages, with some additional validation checks.
  • HTTP(S) responses, on the other hand, are non-self-validating payloads, i.e., we need agreement between the replicas on what the correct response is. That requires a more complex integration with consensus, which is at the core of this feature.

Hope this answers your question, otherwise feel free to ask again. :slight_smile:


I don’t see this as a priority and would much rather see services pop up that handle this in a more secure, auditable, and deterministic manner.

This feature would never be usable by enterprise applications, since the HTTP response would never be guaranteed to reach consensus. Handling this seems like a nightmare inside IC code. It seems much more straightforward to have trusted applications that watch the IC, respond to events, and then put the desired response back into the IC.

Further, because most Web 2.0 systems are not written to return multiple responses that match 100%, this seems like a massive foot-gun for onboarding devs who want to use this feature and just can’t get it to work, because, say, Twitter injects a timestamp into each response so that GET requests never match.


Yeah, agreed, if we’re just gonna be recreating Chainlink.

One option I’ve been looking at is Mina: HTTPS and Snapps: Bridging cryptocurrency and the real world | by Mina Protocol | MinaProtocol | Medium

A SNARK is created that checks the HTTPS response signature and whatever is inside the body.
There aren’t many details out yet (supposedly coming early 2022), but from what I gathered it’s similar to DECO, i.e. you need some third party participating in the handshake (their snapp operators) or else it could be forged.

The benefit is that you only need one request, so no consensus is needed, and anyone could do the request from their own PC and create a proof that checks private information (like a bank account balance) without revealing that info.


I think it would be great to have Chainlink-style functionality on the IC; the consensus mechanism would be more secure. We are really moving to a multi-chain world, and this would blow most other blockchains out of the water.


Hi Max!

We have a motion proposal about “General Integration” (Long Term R&D: General Integration (Proposal)) which will comprise, among other topics, integrations with other blockchains and oracle providers. Chainlink integration, also with their upcoming Cross-Chain Interoperability Protocol (CCIP; Cross-Chain Interoperability Protocol (CCIP) | Chainlink), is of relevance there. Also, things like an integration with Polkadot, as mentioned in one of the threads, should be discussed as an option for integrating with existing integration layers.


Let me try to clarify a little what we have in mind. From what you write, it seems you are assuming that the canister making an HTTP(S) call will need to deal with all the (slightly) different responses from all replicas of the subnet. However, this is handled by the IC; the canister gets back a single response.

The only additional thing the canister needs to care about is defining a transformation function that each replica applies to its response to “cut out” varying information, such as timestamps or request-specific IDs, or to return just a single number of interest, e.g., the price of an asset. Each replica applies this transformation to its own HTTP(S) response, so the transformed responses account for the variable bits and pieces and can be fed to consensus. A single consolidated response then reaches the canister, just like in a traditional Web 2.0 application.

Thus, querying will work really nicely with this approach for HTTP(S) endpoints that return predictable responses, e.g., various API endpoints. There is no need for external trusted parties and thus no additional trust assumptions. It’s essentially a direct integration of oracle-like functionality into the IC protocol stack. This is hard to beat, and no other blockchain would have this feature. Also, request cost would be much lower than when using oracle services.
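To make the idea concrete, here is a toy Python sketch (not IC code; the field names and values are made up) of such a transformation: it keeps only the number of interest, so responses that differ in volatile metadata become byte-identical and can go through consensus.

```python
import json

# Hypothetical transform a canister might register: keep only the field
# of interest so responses that differ in volatile metadata become identical.
def transform(raw_response: bytes) -> bytes:
    data = json.loads(raw_response)
    # Keep only the asset price; drop timestamps, request IDs, etc.
    return json.dumps({"price": data["price"]}).encode()

# Two replicas receive slightly different responses from the same endpoint.
replica_a = b'{"price": "42.17", "timestamp": 1638316800, "request_id": "a1"}'
replica_b = b'{"price": "42.17", "timestamp": 1638316801, "request_id": "9f"}'

# After the transform, both replicas hold identical bytes.
assert transform(replica_a) == transform(replica_b)
```

The key property is that the transform is a pure function of the response bytes, so replicas that saw equivalent responses end up with identical transformed payloads.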

Quoting Max:

The more difficult problem is updating state in external services. The service could handle the same state-changing call arriving multiple times, which is easily doable when building such a service with blockchain clients in mind, but current standard services do not account for this. Building a proxy in front of external services that handles those multiple update calls is simple, but again introduces a trusted party. This definitely requires further discussion.

External services that pop up would always add additional trust assumptions and thereby reduce security. In the ideal case we lose little to no security, but that is already hard to achieve in practice once those additional trust assumptions are introduced.

I think that the external services you mention could coexist in the larger integration landscape and provide additional value for specific integrations. When you speak about enterprise applications, what exactly do you mean? Maybe that’s a specific use case.


Quoting myself above, let me also talk a little bit more about this, as it touches on the problems that @skilesare and @Tbd have voiced. A customizable quorum would be an extension that allows the calling canister to define how many replicas should perform the call; e.g., this could be 1, allowing a trade-off between security on one side and performance, simplicity, and cycles cost on the other. When doing a request with only 1 replica, we would give up some security, which may be fine for many use cases. Update calls to external systems would also become trivial, with the same reduction in security. It may be good enough for things like many user-notification scenarios.


Thanks for sharing those links to other integration approaches!
This looks pretty interesting after a quick glance. I would be interested in seeing more details on how this is actually done, maybe next year once they have released it. :slight_smile:
However, one drawback seems to be that there is again a need for additional trusted parties that take part in the TLS handshakes.


Indeed, with the additional functionality we have listed, we would get close to what Chainlink can offer. Our approach would likely be massively cheaper per call, more flexible for users, and provide better security than a single oracle query. Doing multiple oracle queries in Chainlink costs even more and puts the additional burden on the canister of doing a “poor man’s consensus” over the individual results at the application level. Our proposal moves most of that complexity into the IC protocol stack, at the cost of the implementation effort on our side.

I greatly appreciate the discussion and alternatives being proposed, and I think we will likely need a broader landscape of features and services to achieve all that we want to achieve. We learn more about what our community thinks is the right approach with every post! These discussions will help us prioritize our efforts toward maximizing the value we create.


It’s essentially a direct integration of oracle-like functionality into the IC protocol stack. This is hard to beat and no other blockchain would have this feature. Also request cost would be much lower than when using oracle services.

I wonder why other blockchains don’t do a direct integration like this. Is it just too much effort to implement? Or are there deeper technical requirements to this that are only met by the IC?


I think Chainlink has also done some work on TLS handshakes, but it’s not completely trustless; it still needs a consensus engine to run on top.


These TLS-handshake-based approaches are definitely worth looking at in more detail!


DECO is the one that was acquired by Chainlink :smiley: recommend checking out their paper.
They also acquired https://www.town-crier.org/ lol, but it uses secure enclaves.


Two potential reasons:

  • Thanks to the subnet-based architecture of the IC, only the replicas of one subnet handle an HTTP(S) request, and not all replicas of the blockchain. This is definitely a big advantage in terms of scalability.
  • Our consensus implementation is rather flexible in handling different kinds of payload that require different processing: e.g., ingress messages, Xnet messages, DKG-related and soon also threshold-ECDSA-related messages, and HTTP(S) messages. I am not sure whether the consensus layer of every blockchain provides that much flexibility. Any experience here?

Those are the things that come to my mind, but maybe there are more. Any other thoughts on this?


sounds great!!!

We are thinking about how a system API on the Management Canister could look for a first MVP implementation of this feature. In such a first iteration, we are thinking of allowing only HTTP GET calls. If responses may differ, the canister can provide a method to transform them. This is useful, e.g., to account for timestamps, transaction IDs, and the like, as commonly found in API responses. The transformation happens on every replica of the subnet to account for the different responses received by each replica. Post transformation, all responses should be the same and can go into consensus.

This MVP would already be pretty useful for lots of applications.
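As a rough illustration of this flow (plain Python, not IC code; all names are made up): each replica performs the GET itself, applies the canister-supplied transform, and the result only goes through if the transformed responses agree.

```python
import json

# Toy model of the MVP flow: every replica fetches the URL itself,
# applies the canister-supplied transform, and consensus only succeeds
# if the transformed responses are identical.

def transform(body: bytes) -> bytes:
    data = json.loads(body)
    data.pop("timestamp", None)   # cut out a field known to vary per replica
    return json.dumps(data, sort_keys=True).encode()

def simulate_subnet(raw_responses):
    transformed = [transform(r) for r in raw_responses]
    if len(set(transformed)) != 1:
        raise ValueError("no consensus on transformed response")
    return transformed[0]   # the single consolidated response

# Each replica saw a response with a different server-side timestamp.
raw = [
    b'{"rate": 1.1, "timestamp": 1}',
    b'{"rate": 1.1, "timestamp": 2}',
    b'{"rate": 1.1, "timestamp": 3}',
]
assert simulate_subnet(raw) == b'{"rate": 1.1}'
```

The canister would only ever see the single consolidated response returned at the end, not the per-replica variants.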

type HttpHeader = record { 0 : text; 1 : text };

type HttpResponse = record {
  status : nat;
  headers : vec HttpHeader;
  body : blob;
};

type Error = variant { /* error cases omitted in this draft */ };

service ic : {
  // A new method to be added to the IC management canister.
  http_request : (record {
    url : text;

    // Support for other methods like post/put would be added here.
    method : variant { get };

    headers : vec HttpHeader;

    body : opt blob;

    transform : opt variant {
      // The name of a wasm method to transform the response.
      // The method signature must be `(HttpResponse) -> (HttpResponse)`
      // and must be exported by the canister.
      wasm_export : text;
    };
  }) -> (variant { Ok : HttpResponse; Err : opt Error });
};

Feedback welcome!


I’m not sure about the semantics (and/or correctness) of such transformations. Imagine a simple thing like time in replicas.

For example: if the time is stored internally through a system call made within the replica at the time of an update, any transformation that reasons about time (e.g., selecting all records within a time range) may report inconsistent results across replicas. Thoughts?

The answer/pattern here is pretty important even for things outside this specific topic. For example, in my file system on top of stable memory there is no concept of time as being local to a replica.


The transform field might be a great application for Candid’s support to reference functions, so if you put in

transform : opt (func (HttpResponse) -> (HttpResponse) query)

you get type checking there and the guarantee (kinda) that the exported function is a query.

This would allow referencing other canisters’ query functions, which may seem a bit odd, but is actually nice from a decoupling and composition point of view. And performing a possibly remote query shouldn’t be too hard (or you just fail in that case, if you don’t want to deal with it).

If Candid already had generic data types, then something like this would be even better:

service ic : {
  http_request : <T>(record {
    url : text;
    method : variant { get };
    headers : vec HttpHeader;
    body : opt blob;
    transform : func (HttpResponse) -> (T) query;
  }) -> (variant { Ok : T; Err : opt Error });
};

After all, there is no reason why the transform function needs to return a HttpResponse, and not already something that has been parsed into application logic.

@rossberg can probably comment on these composition and function reference issues.


@nomeata : Thanks for your suggestions on the API, Joachim!

My text did not actually provide enough detail here, so let me give some more information now. The idea of the transformation function is that it must result in the same response on each replica in order to allow for consensus. The common case we have in mind is that we know the structure of the response, e.g., a JSON object from an API call, and implement the function so that it “cuts out” the parts of the JSON that may differ between responses, e.g., timestamps, IDs, etc., to obtain the transformed response. Such transformations are simple structural transformations based on a priori knowledge of the structure of the response. In particular, they would not need to reason about time in the replica in any way, as doing so would clearly be a reliable source of non-determinism.

I don’t know whether I am missing a specific class of use cases you have in mind, but we think that a key use case for the transformation is to remove fields that are known to differ between different responses to the same call. Think of exchange-rate APIs as one example of this.
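For an exchange-rate-style API, such a structural transform could look like the following Python sketch (illustrative only; the field names are assumptions): it strips fields known a priori to vary, and it never consults the replica’s local clock, so it is a deterministic function of its input alone.

```python
import json

# Fields known a priori to vary between responses to the same call.
VOLATILE_FIELDS = {"timestamp", "server_id", "request_id"}

def transform(body: bytes) -> bytes:
    # Recursively drop volatile fields; sort keys for a canonical encoding.
    # Note: no clock access anywhere -- the output depends only on the input.
    def strip(obj):
        if isinstance(obj, dict):
            return {k: strip(v) for k, v in obj.items()
                    if k not in VOLATILE_FIELDS}
        if isinstance(obj, list):
            return [strip(v) for v in obj]
        return obj
    return json.dumps(strip(json.loads(body)), sort_keys=True).encode()

# Two replicas' responses differing only in volatile metadata.
resp_1 = b'{"base":"USD","rates":{"EUR":0.88},"timestamp":1,"server_id":"fra-1"}'
resp_2 = b'{"base":"USD","rates":{"EUR":0.88},"timestamp":2,"server_id":"ams-2"}'
assert transform(resp_1) == transform(resp_2)
```

Canonicalizing the encoding (sorted keys) matters as well: even semantically equal JSON must serialize to identical bytes for the replicas to agree.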

If you want all records within a time range, as you mention in your post, and different responses may contain different records, we would need to provide the time bounds as input to the transformation. Would such additional parameters be something we should consider? If so, we would need a way to provide parameters in a generic manner.