Announcing Two Major Upgrades for HTTPS Outcalls: IPv4 Support + Non-Replicated Calls are now LIVE!

Hello everyone,

We’re thrilled to announce two major extensions for canister HTTPS outcalls that significantly expand the power and flexibility of the Internet Computer. These have been some of the most requested features from the community, and we can’t wait to see what you build with them!

1. Connect to the Entire Web: IPv4 Support is Here! :rocket:

The wait is over! Canisters can now make HTTPS outcalls to services hosted on both IPv6 and IPv4 addresses.

How it Works

The system is designed to be fully automatic. For every outcall, the IC will first attempt a direct connection (ideal for IPv6). If that connection fails for any reason (like the server being IPv4-only), the request is automatically retried through a SOCKS proxy managed by the IC. You don’t need to change anything in your canister code; it just works. You can see more details here.
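Conceptually, the per-request fallback behaves like the sketch below. This is not the actual replica implementation; `direct_connect` is an illustrative stand-in for the real connection logic:

```rust
// Illustrative sketch of the automatic IPv4 fallback, NOT the actual
// replica internals. `direct_connect` stands in for the real connect step.

#[derive(Debug, PartialEq)]
enum Route {
    Direct,
    SocksProxy,
}

/// Try a direct connection first (works for IPv6-reachable hosts);
/// on any failure, retry the same request through the SOCKS proxy.
fn choose_route(direct_connect: impl Fn() -> Result<(), String>) -> Route {
    match direct_connect() {
        Ok(()) => Route::Direct,
        // e.g. the server is IPv4-only and the node has no direct route
        Err(_) => Route::SocksProxy,
    }
}

fn main() {
    // IPv6-reachable host: direct connection succeeds.
    assert_eq!(choose_route(|| Ok(())), Route::Direct);
    // IPv4-only host: the direct attempt fails, so the proxy is used.
    assert_eq!(choose_route(|| Err("unreachable".into())), Route::SocksProxy);
    println!("ok");
}
```

The key point is that the fallback is decided per request and automatically, which is why IPv4-only endpoints need no special handling in canister code.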

The TLS session is persisted end-to-end between the node and the destination server. This means the proxy only forwards the encrypted traffic, ensuring it never has access to the plaintext data and also cannot alter the server’s response. This guarantees both confidentiality and integrity.

2. Experimental Non-Replicated Outcalls :test_tube:

With the latest CDK (version 0.19.0-beta.1), you can now specify is_replicated = Some(false) in your http_request call. This ensures that only a single replica (chosen randomly for each request) will execute the request, instead of all of them.

You can check a working example in Rust here:

let arg: HttpRequestArgs = HttpRequestArgs {
  url: "https://www.random.org/integers/?num=1&min=1&max=1000000000&col=1&base=10&format=plain".to_string(),
  max_response_bytes: None,
  method: HttpMethod::GET,
  headers: vec![],
  body: None,
  transform: None,
  is_replicated: Some(false),
};
canister_http_outcall(&arg).await

And a Motoko example here:

let http_request : IC.http_request_args = {
  url = url;
  max_response_bytes = null;
  headers = request_headers;
  body = null;
  method = #get;
  transform = null;
  is_replicated = ?false;
};

Why is this a game-changer?

This unlocks several new use cases that were previously difficult or impossible:

  • Access Non-Idempotent Endpoints without triggering duplicate actions: for example, ask an email gateway to send an email without it being sent N times, create resources with HTTP POST, or call out to expensive endpoints.
  • Interacting with Fast-Moving Data: Get a timely snapshot of rapidly changing data (e.g., a crypto price feed) without worrying about consensus failures from tiny differences between replica responses.
  • Expensive API Calls: Drastically reduce costs by avoiding request amplification. If an API call is expensive, making it once instead of N times can be a huge saving.

The most important consideration is security: Since only one replica makes the call, the standard consensus guarantees do not apply. You must fully trust the single replica that handles your request, as a malicious or faulty replica could return an incorrect response. Only use this for interactions where you don’t need to trust the response data or can verify it by other means.
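As one example of “verifying it by other means”: if the canister already knows a digest (or a signer’s public key) for the content it fetches, it can check the response itself and reject anything a faulty replica tampered with. A minimal sketch, using a toy FNV-1a hash purely to stay self-contained; a real canister would check SHA-256 or a signature:

```rust
// Sketch: verify a non-replicated response against a known digest so the
// single replica does not need to be trusted. FNV-1a is used here only to
// keep the example dependency-free; use SHA-256 or a signature in practice.

fn fnv1a(data: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf29ce484222325;
    for &b in data {
        hash ^= b as u64;
        hash = hash.wrapping_mul(0x100000001b3);
    }
    hash
}

/// Accept the response only if it matches a digest the canister already
/// knows (e.g. pinned at build time or agreed on by other means).
fn accept_response(body: &[u8], expected_digest: u64) -> bool {
    fnv1a(body) == expected_digest
}

fn main() {
    let trusted = b"known immutable document";
    let digest = fnv1a(trusted);
    assert!(accept_response(trusted, digest));
    assert!(!accept_response(b"tampered by a faulty replica", digest));
    println!("ok");
}
```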

Additionally, its cycle price is the same as with fully replicated outcalls, and you could encounter latency issues. We are working on improving on those fronts.

Please Note: This feature is experimental and its interface may change based on feedback.

These new capabilities are now available for testing. We encourage you to upgrade your CDK or Motoko compiler to try them out and share your findings!

What’s Next on the Horizon :eyes:

These features are part of a broader effort to make canister outcalls more powerful, flexible, and affordable. Here’s a glimpse of what we’re focused on next:

  • Cost of Outcalls: We are actively working on refining the pricing model to make all outcalls more cost-effective, especially those with a large max_response_bytes parameter.
  • Flexible Outcalls: We plan to give you more control over replication, such as the ability to request multiple responses from different replicas and resolve consensus directly in your canister logic.

Help Us Shape the Future: The API for Flexible Outcalls

As we look beyond the initial experimental non-replicated feature, our next goal is to build a truly Flexible Outcall API that gives developers the right balance between security and control. We are currently exploring two main directions and would love your feedback on which approach would be more valuable for your projects!

Path A: Simple & Secure: We could offer a straightforward flexible_http_request where all replicas in the subnet attempt the call, and your canister receives a list of at least 2f+1 responses. This is simple to use and provides a strong security guarantee—you know a majority of the responses come from honest replicas, making it easy to find a trustworthy median for price feeds or other dynamic data.
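The median trick works because, with at most f faulty replicas among 2f+1 responses, the middle value always lies between two honest responses. A small sketch of that aggregation, with responses simplified to plain integer prices:

```rust
// Sketch: aggregate >= 2f+1 price responses by taking the median.
// With at most f faulty replicas, the median is always bounded by two
// honest responses, so outliers from faulty nodes cannot push it
// outside the honest range.

fn median_price(mut prices: Vec<u64>) -> Option<u64> {
    if prices.is_empty() {
        return None;
    }
    prices.sort_unstable();
    Some(prices[prices.len() / 2])
}

fn main() {
    // f = 1, so we collect 2f + 1 = 3 responses; one replica lies wildly.
    let responses = vec![100, 101, 999_999];
    assert_eq!(median_price(responses), Some(101));
    assert_eq!(median_price(vec![]), None);
    println!("ok");
}
```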

Path B: Advanced & Customizable: Alternatively, we could provide a more advanced API. This would allow you to specify “attempt the request on n random replicas and return after k responses” (k-of-n), or perhaps even target specific node IDs. This would give you more fine-grained control over the cost, latency, and trust trade-offs for your specific application.
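For illustration only, a k-of-n call might take arguments shaped like the struct below. None of these fields exist in any CDK today; this is purely a hypothetical shape to make the discussion concrete:

```rust
// HYPOTHETICAL argument shape for Path B; nothing here is a real CDK API.

#[derive(Debug)]
struct FlexibleHttpRequestArgs {
    url: String,
    /// Attempt the request on `n` randomly chosen replicas...
    n: u32,
    /// ...and return to the canister after the first `k` responses arrive.
    k: u32,
    /// Optionally pin the request to specific nodes instead of random ones.
    node_ids: Option<Vec<String>>,
}

fn main() {
    let arg = FlexibleHttpRequestArgs {
        url: "https://example.com/price".to_string(),
        n: 5,
        k: 3,
        node_ids: None,
    };
    // A well-formed request needs at least as many attempts as responses.
    assert!(arg.k <= arg.n);
    println!("{arg:?}");
}
```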

With that context, we have a few key questions for the community:

  1. Looking at the two paths, which one feels more practical for your day-to-day work? Is the ‘Simple & Secure’ approach, where you get 2f+1 responses and a guarantee of an honest majority, enough to solve most of your problems? Or do you have projects that would absolutely require the ‘Advanced & Customizable’ k-of-n model?
  2. If you’re excited about the ‘Advanced’ path, what’s a real-world problem you’re trying to solve where you’d need that level of k-of-n control? Your examples are the best way for us to know if this is a powerful ‘nice-to-have’ or a true ‘must-have’ for the community.
  3. How do you view the trade-off of targeting specific node_ids? Is the power to choose a specific node (a 100% trusted node for example) worth the added complexity of managing node identities yourself?
41 Likes

But I’m not a rapper…
Check me out, check me out…

1. Boom
2. Bam
3. Bop
4. Badabop boomp
5. POW

Stay tuned for new Mops packages and package updates → Telegram: View @mops_feed
:person_running:

8 Likes

Thank you. I have been needing the single-replica POST call for a project that required a lot of workarounds.

2 Likes

Well, this essentially eliminates the main use case of GitHub - Demali-876/consensus (a consensus deduplication proxy supporting x402 payments) on ICP. I guess I have to pivot to becoming a fully decentralized proxy service.

1 Like

Thanks for this update. This helps a great deal.

1 Like

You have no idea how long I’ve dreamed of this day. :face_holding_back_tears::open_mouth::smiling_face_with_tear::older_person:

Thank you! :raising_hands::man_bowing:

It’s very smart to simply pick a node at random. And the SOCKS proxy, wow :star_struck:.

I have been onboarding devs for over a year now (through the Hubs), and the reality was: it all works great on localhost… but as soon as it went to the playground or mainnet, it crashed :melting_face: (if using HTTP outcalls).

The IPv4 issue was most common, the no consensus on AI APIs was the next most common, and although the POST issue was rarely seen, I knew it prevented easy setup of email notifications to any start up doing a real business on the IC.

In the past, several attempts were made, like HTTP Proxy, Cloudflare Workers, etc., but none of that is as easy as is_replicated = ?false :open_mouth: It’s so elegant, big congrats :clap:

And now with Caffeine, this missing feature would be even more costly, so it’s great that we finally have it here. Can’t wait to play around with it and send some email notifications directly. :flexed_biceps:

13 Likes

About the 2f+1 and the k-of-n (and node preference), I think I prefer the latter.

First, I would predict >99% of devs will use the random single node you just published (in >99% of cases, a single HTTP response would never be valuable enough to make the effort of corrupting it economically worthwhile).

Caring about k-of-n and a preference for node-id(s) is a level of optimization that only very demanding business cases will need. They probably have senior devs, and I suspect they would prefer fine-tuning for their specific use case rather than any “simplification”. For being simple, you already have the current solution (is_replicated = true/false).

Maybe the idea of creating a separate method for the Advanced mode is important, so it doesn’t make the current params even more complex :roll_eyes::folded_hands:

That’s my 2 cents. Again, thank you so much for bringing such valuable feature to reality. :smiling_face_with_tear::smiling_face_with_tear::smiling_face_with_tear:

3 Likes

King :crown: :joy::joy::joy:

No one will stop us now :winking_face_with_tongue:

3 Likes

Both features are great!

1 Like

I cannot stress how BIG of a feature this is.

This literally makes the IC a full stack solution now.

Everything else on the internet is callable bi-directionally now.

This is by far the best thing that’s landed on the IC in YEARS.


4 Likes

First to the tech team: Congrats! Keep shipping! Awesome work!

To the users: Do not pull with 1 random! If you use the 1 random method, you lose 110% of your blockchainy-ness. This may be fine for many applications, but don’t claim to be web3 or immutable or permissionless. Using 1 random will collapse your app to a centralized application. You become dependent on the outside world with no security guarantees.

1 random will be awesome for triggering some outside process, but I’d advise that if you have data that your application depends on (especially if it is financial in nature), you think systematically about how to alert a service/dapp/network that can, with consensus, and possibly stake, PUSH the data on to the IC so that you have clear provenance of the data.

If the above does not make sense to you and you are building a web3 app then please do some research on it until it makes sense. I’ll be happy to discuss your idea with you and come up with a plan!


5 Likes

Interesting post…

1 Like

With these two major upgrades now live on the Internet Computer, I’m wondering if Caffeine AI has plans to integrate these new capabilities. The IPv4 support opens up a lot more external services, and the non-replicated calls could significantly improve performance for certain operations.

Are there any timelines for when Caffeine AI might start leveraging these features? Seems like they could unlock some interesting possibilities for external data integration and faster API calls, especially given the discussion about maintaining proper decentralization versus performance trade-offs.

Just curious if this is on the development roadmap or if you’re waiting to see how the community adopts these features first.

1 Like

I love it!

As for the k-of-n configuration option - I don’t have any apps where I think I would use it, but as far as I am concerned it would only take one flagship product on ICP to use it, making user experience better, to make it worth having. I definitely see the option as worth having, also for completeness of the solution, best of both worlds would be having both the simpler and the more advanced k-of-n APIs available.

@skilesare usually when I disagree with you I realize about a year later that you’re right, perhaps also this time.

But I think you’re wrong here! :slight_smile:

I know that the current setup, with multiple requests and consensus “feels blockchainy” - but it isn’t, really. It’s a special setup for allowing off-chain compute to be accepted as truthy enough to be written to the blockchain and having smart contracts act on them. In short, it is an Oracle solution, just like Chainlink, but with the difference that all that is guaranteed here as far as “truthiness” is that all nodes got the same reply - whereas Chainlink has staking-based solutions for ensuring quality and availability SLAs for the providers of the data itself.

You allude to this, suggesting that stake-based Oracle solutions may be required if we go down this path of laxer requirements on https call consensus, but the reality imo is that we need such stake-based Oracles anyway, if we want data reliable enough for “dumb smart contracts” (dumb in the sense of a golem: feed it bad data and it does something terrible!) to act on.

Consensus on https calls may be better than not having it, but it doesn’t make it “blockchain-secure” - it makes it “oracle-secure”, and only weak, non-stake based oracle-secure at that. And the only thing that becomes “oracle-secure” is the actual transport of the data from the web2 endpoint, nothing about the data quality is oracle-secure.

So elevating this “weak oracle-secure” https outcall consensus, step by step, to:

  • strong “oracle-secure” https outcalls (there is no stake)
  • strong “oracle-secure” data feeds (there are no data quality guarantees)
  • blockchain-secure data (oracle-secure is never quite as secure as blockchain-secure, which is entirely deterministic)

…is a mistake. So yes, I agree that if you want reliable data for your smart contracts, you need Chainlink or equivalent. But I disagree that using a random caller instead of consensus for https outcalls “removes the blockchainyness by 110%”, because the blockchainyness was not there in the first place; the only thing that happens is that your “weak oracle-secure” https outcalls stay weak, perhaps even a little weaker, but in my view not significantly so. Comparing either weak solution to a strong oracle-security solution, with stake and data quality SLAs, is like comparing apples to oranges.

1 Like

@Snassy-icp We’ve been in alignment since 2022. Sometimes you just have to meet the people where they are.:joy:

…as a slight aside, you could argue that with upgradable contracts, the IC can get just as mired in the lack-of-deterministic-replay problem as well. Thus there are 133remoteCall and 133remoteReturn blocks in ICRC-133 to attempt to certify that canisters were called and did return, which can give some security to data from 3rd-party canisters, provided your canister’s code is open source and its upgrade trail and orchestration are also recorded.

5 Likes

I tried httpoutcall before, but IPv4 was an insurmountable obstacle when trying to call the exchange’s API.
Now, it’s finally done.

1 Like

This is cool!
Will Non-Replicated Outcalls be applied to the RPC canister for other chains? It is quite slow for some use cases.

I doubt it. The RPC canister is meant to be security-focused first; otherwise, why go through 3 different RPC providers?

But if your app is not making risky operations (like loans, swaps / transfers), I think it will now be easier, faster, and cheaper to implement your own single API request, and the update call can resolve faster than going through the RPC canister :man_shrugging:

@mihail.jianu Hi, we modified our canister to use Non-Replicated Outcalls last week, but encountered frequent timeouts. Therefore, I wrote a separate test interface specifically to measure the latency of calling the same RPC interface under two conditions: is_replicated: Some(true) and is_replicated: Some(false). The tests were conducted on two different subnets (one with 34 nodes and another with 13 nodes). After multiple rounds of testing, I found that Non-Replicated Outcalls introduce roughly 10x higher latency on the 34-node subnet, while on the 13-node subnet, the delay is about 2x higher. Is this expected? I thought Non-Replicated Outcalls would have lower latency.

@hsxyl this is a known artifact of the first iteration of the implementation. Technically, we currently wait for the replica that made the outcall to become the blockmaker, so the expected delay on an N-node subnet is around N/2 rounds. I can’t give you a concrete timeline but we are working on improving that by adding the ability to gossip the response, so that anyone can include it in the block.

But even when that is done, I wouldn’t expect the latency of non-replicated calls to be significantly lower on average than for replicated calls, they should be on par.
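To make the reported numbers concrete: per the explanation above, the replica that made the outcall currently waits roughly N/2 rounds to become blockmaker on an N-node subnet, while a replicated response lands within a round or two. A quick back-of-the-envelope check of that estimate (the N/2 figure is taken from the post above, not from the implementation):

```rust
// Back-of-the-envelope check of the ~N/2-round wait for the calling
// replica to become blockmaker, which delays non-replicated responses.

fn expected_extra_rounds(subnet_size: u32) -> u32 {
    subnet_size / 2
}

fn main() {
    // 34-node subnet: ~17 extra rounds on top of the usual round or two.
    assert_eq!(expected_extra_rounds(34), 17);
    // 13-node subnet: ~6 extra rounds, a noticeably smaller penalty.
    assert_eq!(expected_extra_rounds(13), 6);
    println!("ok");
}
```

This is consistent with the observation that the slowdown grows with subnet size: the 34-node subnet showed a much larger latency multiple than the 13-node one.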

1 Like