Enable canisters to make HTTP(S) requests

Responses are put into a new HTTP Artifact Pool and signed by the replica to endorse the response, and the signature is gossiped to all replicas in the subnet. Once a request has support from at least 2/3 of the replicas of the subnet in the view of the current block-making replica, that replica adds the endorsed response to an IC block that goes through Consensus. Because at least 2/3 of the replicas of the subnet have supported the response, the subnet is guaranteed to be able to achieve consensus on it.
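As an illustrative sketch only (not the actual replica code; the function name and inputs are hypothetical), the "at least 2/3 support" rule described above can be modeled like this:

```python
def has_supermajority_support(supporting_replicas: int, subnet_size: int) -> bool:
    """Return True if at least 2/3 of the subnet's replicas have endorsed
    (signed) the same HTTP response, i.e. the block maker may include it.

    Integer arithmetic avoids floating-point comparison:
    supporting >= (2/3) * n  <=>  3 * supporting >= 2 * n
    """
    return supporting_replicas * 3 >= subnet_size * 2


# Example: on a 13-node subnet, 9 endorsements suffice but 8 do not.
print(has_supermajority_support(9, 13))   # True
print(has_supermajority_support(8, 13))   # False
```

This is just the counting rule; in the real system the endorsements are signature shares gossiped between replicas, and the block maker verifies them before including the response in a block.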

Once a request has support from at least 2/3 of the replicas of the subnet in the view of the current block-making replica, that replica adds the endorsed response to an IC block that goes through Consensus.

Did you mean “Once a response has support…”?

A couple of questions inspired by this:

  • Will this integration support HTTP/2?
  • What happens if the HTTP response is large, much larger than 2 MB?
  • Will this integration support compression, i.e. Content-Encoding?
  • Will this integration support streaming large responses via methods like HTTP/1.1 chunked transfer encoding?
  • Will this integration support range requests?

This is exciting stuff! Easy yes vote.


Do you have specific use cases in mind for HTTP requests from query calls?

Technically, it would be completely different from what we are implementing now. In many ways, one could realize HTTP requests for query calls by simply having the replica make the HTTP request without any Consensus involvement: the replica is either honest or not, and depending on that the query result is either trustworthy or less so.
The biggest conceptual problem I can see here is that query calls are synchronous and complete within fractions of a second, whereas making an HTTP request, an asynchronous operation involving an entity in the outside world, might take a relatively long time. This mismatch might be large enough to make your request very hard to realize. Also, to be clear, this is wild thinking in response to your request and is currently not planned to be implemented.

Other opinions on HTTP requests for query calls?


Just as there is currently a great desire for Inter-Canister Query Calls, there will be a similar desire for HTTP requests from query calls. Both of these features presume a developer wants to request data from somewhere outside of their own canister, on the IC or off of it, and do it quickly. There will be many uses for this; I can't foresee them all.

Specifically for me I want to use a canister as a proxy for podcast downloads. Currently podcasts require a server proxy in many cases to download audio files to a web client, because the web clients have CORS restrictions that servers simply ignore. To create an entirely on-chain podcast ecosystem we’ll need canisters to act as proxies (or boundary nodes, but something needs to do this).

A web app served from the IC might want to perform an HTTP GET request to a canister that quickly aggregates information from two other canisters and three HTTP endpoints in the legacy world.

If the IC is going to replace traditional cloud, then it needs to allow what traditional cloud allows. Coming from Node.js, I feel very strongly that flexible HTTP request functionality is essential to achieving this vision. And I can foresee many asking for this feature in the future.


Yes, indeed I meant to say “Once a response has support…” in the proposal text. Those things happen…
Nice to see that people read our texts so carefully! :slight_smile:


You may not get a response to this unrelated question in this topic. If you re-post it in a better-suited topic I am pretty sure you will quickly get help on this.


Good points! We will discuss this internally. The main issue I can see, as already mentioned further above, is the unpredictable time the HTTP requests take and the effects this has on synchronous query calls. It might clash with the general architecture behind query calls.
Good points to discuss, however. Thanks for the inputs!


As a first MVP we plan to only support HTTP/1.0 or 1.1. It would be quite straightforward to add HTTP/2 support for simple requests.

There must be some size limit on the response, and for now we have set it to 2 MB, as that is the maximum payload size of a block. We could allow bigger responses, given that the transform function can then reduce them below 2 MB, but we would still want some upper bound so the feature is not abused. For starters, we'll keep it at 2 MB.
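To make the role of the transform function concrete, here is a hypothetical sketch (the function name, the header list, and the return shape are all assumptions for illustration; the real transform is canister-defined code): it drops headers that would differ between replicas and cuts the body down below the block payload limit.

```python
MAX_RESPONSE_BYTES = 2 * 1024 * 1024  # 2 MB: maximum block payload size

def transform_response(body, headers):
    """Hypothetical transform step: strip headers that are likely to
    differ across replicas (so all replicas end up with an identical
    response to agree on) and truncate the body below the 2 MB limit."""
    nondeterministic = {"date", "server", "set-cookie"}
    kept = {k: v for k, v in headers.items()
            if k.lower() not in nondeterministic}
    return body[:MAX_RESPONSE_BYTES], kept


# Example: a 3 MB body is cut to 2 MB and the Date header is dropped.
body, headers = transform_response(
    b"x" * (3 * 1024 * 1024),
    {"Date": "Mon, 01 Jan", "Content-Type": "text/plain"},
)
print(len(body))  # 2097152
```

The key point is that whatever the transform keeps must both fit in a block and be identical on every replica, otherwise the replicas cannot reach 2/3 agreement on the response.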

For the MVP, we do not plan to support more than one response per request, nor content encoding. Decoding compressed responses would be easy to implement at the adapter level, so we might do that, but the decoded response could then violate the size limit. Let me discuss that with the team.
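The size-limit concern mentioned here is that a small compressed body can inflate well past the limit once decoded. A minimal sketch of adapter-level decoding with that guard, assuming gzip as the Content-Encoding (function name hypothetical):

```python
import gzip

MAX_RESPONSE_BYTES = 2 * 1024 * 1024  # 2 MB response limit

def decode_body(body, content_encoding):
    """Illustrative adapter step: decode a gzip-encoded body, but reject
    the response if decompression would push it past the size limit."""
    if content_encoding != "gzip":
        return body  # pass through uncompressed bodies unchanged
    decoded = gzip.decompress(body)
    if len(decoded) > MAX_RESPONSE_BYTES:
        raise ValueError("decompressed response exceeds 2 MB limit")
    return decoded


# Example round trip: compressed on the wire, decoded by the adapter.
wire = gzip.compress(b"hello from the legacy web")
print(decode_body(wire, "gzip"))  # b'hello from the legacy web'
```

A production decoder would decompress incrementally and abort as soon as the limit is crossed rather than decompressing fully first; this sketch only shows the check itself.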

I am not sure what you mean by “range requests”.


Thanks, this makes sense.

When this integration eventually supports query calls as @lastmjs suggested, I foresee a use case where a canister may want to stream and serve a large media file. Why? I’m not entirely sure at the moment, but I can see some wanting it.

By range requests, I mean the case where a server returns an Accept-Ranges header and a client requests portions of a large blob using the Range header.
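For readers unfamiliar with the mechanism: at the HTTP level a range request is just a pair of headers. A small sketch (helper names are my own, not part of any IC API) of building the client's `Range` header and parsing the server's `Content-Range` reply:

```python
def range_header(start, end):
    """Build the Range header requesting bytes [start, end] (inclusive)
    of a large resource; a supporting server answers with
    206 Partial Content and a Content-Range header."""
    return {"Range": f"bytes={start}-{end}"}


def parse_content_range(value):
    """Parse a Content-Range value like 'bytes 0-1048575/10485760'
    into (start, end, total_size)."""
    unit, _, rest = value.partition(" ")
    if unit != "bytes":
        raise ValueError(f"unsupported range unit: {unit}")
    span, _, total = rest.partition("/")
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)


# Example: request the first 1 MiB of a 10 MiB file.
print(range_header(0, 1048575))                       # {'Range': 'bytes=0-1048575'}
print(parse_content_range("bytes 0-1048575/10485760"))  # (0, 1048575, 10485760)
```

Since each range response is itself small, this is one way a canister could fetch a large blob in slices that each fit under a per-response size limit, provided the origin server advertises `Accept-Ranges: bytes`.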

Totally see how this would be useful, but right now if you’re aggregating data it might be best to set up a cron-job to pre-fetch what you might need and have it ready on your canister(s).

If you’re setting up a download, what about setting up some sort of a WebRTC streaming solution? I haven’t done this myself, but was listening to the OpenChat episode of the Internet Computer Weekly podcast where Matt Grogan talks about using WebRTC to make the chat experience feel instant while processing the update calls simultaneously. It seems like a pretty sweet solution if they’re actually doing that and not just blowing smoke.


It would be very problematic to make HTTP requests from query calls. It would require a completely different design than the one we have for this feature. The reason is that in the current design, requests are stored into the replicated state so that both the execution and consensus components can track the request status and deliver back the response. Even with “fire-and-forget” style of requests we would still need a very different mechanism than we currently have in place. That being said, I don’t think it is impossible.

W.r.t. HTTP range requests, those should not be a problem. It is up to the server to support them, and the canister can then specify the appropriate header fields in the request to use this feature.


The download issue I’m referring to is a CORS issue. The browser cannot download audio from servers that do not return the appropriate CORS response headers. Many servers hosting podcasts do not return the headers for some reason. Thus a server proxy is used, which returns the appropriate headers, and streams the audio from the origin server (since servers don’t care about CORS, only browsers), through the proxy, to the browser.
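The proxy workaround described here boils down to a header rewrite: the proxy fetches the audio from the origin server and re-serves it with the CORS headers the browser insists on. A minimal sketch of that step (function name and header choices are illustrative assumptions):

```python
def add_cors_headers(upstream_headers, allowed_origin):
    """Hypothetical proxy step: pass the origin server's response headers
    through and add the CORS headers the browser requires. Servers ignore
    CORS; only the browser enforces it, so the proxy's headers are what
    let the web client read the response."""
    headers = dict(upstream_headers)  # don't mutate the caller's dict
    headers["Access-Control-Allow-Origin"] = allowed_origin
    # Expose the headers a streaming audio player typically needs.
    headers["Access-Control-Expose-Headers"] = "Content-Length, Content-Range"
    return headers


# Example: the upstream podcast host sent no CORS headers at all.
out = add_cors_headers({"Content-Type": "audio/mpeg"}, "*")
print(out["Access-Control-Allow-Origin"])  # *
```

In practice one would restrict `allowed_origin` to the web app's own origin rather than `*`, and also handle the browser's preflight `OPTIONS` request, but the core of the workaround is exactly this rewrite.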

It’s just a workaround for CORS, I don’t see how WebRTC could help with that unfortunately.


The motion proposal for this feature has been accepted!


Status update:
We have been working on this feature with a small team for some time now and have made quite some progress in all the affected layers: Execution, Consensus, and Networking. Due to a partial overlap of the team with the Bitcoin engineering team, we will likely experience some delays here, as Bitcoin has higher priority. A rough estimate is that the HTTP feature will be finalized some time in June.


Thanks for the update! I am extremely excited for this functionality, and especially to be able to build IC-native oracles with Azle. I really think oracles on the IC could be a huge opportunity.


Sorry, has this feature been released yet? I need it to call APIs.

The feature is still being developed and we think it may be finalized some time in June.


So currently, there is no way to call outside APIs?

Correct. This is not unique to blockchains. No blockchain can natively call outside APIs (that is why oracle networks exist alongside blockchains).


Thank you for clarifying. I will wait for this feature.
