Boundary Node Roadmap

Hi @infu

The boundary nodes will be split into two parts: the API Boundary Nodes, which serve only the Internet Computer API endpoints (query, call, read_state, status), and the Gateways (currently only the HTTP Gateways), which map between a protocol and the Internet Computer API (e.g., in the case of HTTP Gateways, between HTTP(S) and API calls, such that browsers can directly access dapps hosted on-chain). Anyone is free to run any gateway. You could already do that today by pointing your gateway to the API endpoints of the existing Boundary Nodes. Ultimately, today’s Boundary Nodes are VMs, and you can use them as a starting point.


Very interesting. I think a lot of developers will want to build gateways. @skilesare @borovan Would I be able to use my own hardware to run the new boundary nodes, I suppose using 4th-gen Epyc processors so attestation works the same way? We will probably need some libraries to make everything easier.
I assume DFINITY boundary node hardware will be lean (e.g., 16-core), and if someone wants to run a lot of gateways they should probably get a higher spec.

DFINITY will provide a reference implementation/image of the HTTP Gateway. Anyone can take it and run it 1:1 or use it as a starting point and modify it according to their needs.

We are also planning to run HTTP Gateways using TEE such that anyone can check that the gateway is actually running what it is supposed to run.

Since the Boundary Nodes are actually not doing much computation and simply forwarding requests to the IC, the hardware requirements are not that high.

I wonder what the benefits will be for the boundary node runner. Is this something actively being worked on?

Will it be a voluntary service or rewarded?

Hi @CoolPineapple

I just tried to update the initial post, but unfortunately, editing is only allowed for a certain time, and then the post is “frozen”. So, I am posting my update here:

Here is the updated figure:

The important point is that the API Boundary Nodes are managed by the NNS and expose the Internet Computer API endpoints (query, call, read_state, and status) as defined in the Interface Specification. Anybody can run an application in front of these API Boundary Nodes. So far, we here at DFINITY have been focusing on HTTP gateways to enable browser access. However, there is room for many different gateways: for example, DNS gateways that allow hosting your DNS records on-chain (see here for more information) or an SMTP/XMPP/MQTT gateway for emails/messaging. This just shows the versatility of the Internet Computer: through gateways (“adapters”), we can connect web2 to web3.


Hey Boundary Node Team,

What’s the latest update? Still targeting year end to roll out the new architecture?


Hey everyone,

Thanks @dfisher for asking for updates. You picked the right moment as I have just been putting together a post with an update on the progress, the latest design choices, and an outlook of what is ahead.


As part of the long term R&D plan for the boundary nodes, which has been adopted by the NNS through motion proposal #35671, we are working on a new boundary node architecture to address decentralization. The new architecture splits today’s boundary nodes into API boundary nodes, which will be fully managed by the NNS like the Replica Nodes, and into HTTP Gateways, which will be operated by different entities.

We decided to first work on the API boundary nodes, while keeping today’s boundary nodes fully operational. Then, we will propose to the NNS to roll out the API boundary nodes. Once they stand the test, we will start rolling out the HTTP gateways and retiring today’s boundary nodes. As part of the roll out of the HTTP gateways, we will also provide a reference implementation that enables anyone in the community to run their own HTTP gateway with minimal effort.


The team is currently fully focused on the API boundary nodes. The work is split into two main topics: (1) replacing nginx with a custom router, which we call ic-boundary, and (2) preparing the registry and the orchestrator to support the API boundary nodes.


As part of the boundary node redesign, we decided to do away with nginx as it is just not made for our use-case. Up until now, we have simply added custom modules to enable our use-cases, which is far from ideal as we are still quite limited in what is supported and testing is complex.

Therefore, we decided to write our own custom router, which handles all the API endpoints of the Internet Computer: status, query, call, and read_state. ic-boundary forwards all incoming requests to the right subnet and replica, and applies rate limits where necessary. In the future, we intend to extend ic-boundary to also perform caching.
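To illustrate the forwarding step, here is a toy routing table that maps canister ID ranges to subnets and their replicas. Real canister IDs are principals and the registry format is different; plain integers and the names below are used purely for illustration, not taken from ic-boundary:

```python
import random

# Hypothetical routing table: each subnet hosts a contiguous range of
# canister IDs (invented data for illustration only).
ROUTING_TABLE = [
    {"subnet": "subnet-A", "range": (0, 999), "replicas": ["a1", "a2", "a3"]},
    {"subnet": "subnet-B", "range": (1000, 1999), "replicas": ["b1", "b2"]},
]

def route(canister_id: int) -> str:
    """Pick the subnet whose range contains the canister, then one of
    its replicas at random (a stand-in for a real balancing policy)."""
    for entry in ROUTING_TABLE:
        lo, hi = entry["range"]
        if lo <= canister_id <= hi:
            return random.choice(entry["replicas"])
    raise LookupError(f"no subnet hosts canister {canister_id}")
```

Rate limiting and caching would sit around this lookup; the sketch only shows the subnet/replica selection itself.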

The team is close to finishing the first iteration of ic-boundary, and we plan to test it extensively within the existing boundary nodes before proposing to the NNS to deploy it.

Preparing NNS-management

In order to bring the API boundary nodes under the management of the NNS, we are working on creating the necessary records and tooling in the registry. We are close to being done with that.

To simplify operations, we suggest integrating ic-boundary directly into the existing replica nodes. That way, a node can be either a Replica Node or an API boundary node. The orchestrator checks in the registry whether the node is assigned to a subnet or has been turned into an API boundary node, and then starts either the ic-replica or the ic-boundary binary. Following this approach, we can reuse a lot of the existing infrastructure and effort.
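The orchestrator decision described above can be sketched as a small pure function. The record fields and names here are invented for illustration; the actual registry schema differs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeRecord:
    """Hypothetical registry record for a node (field names invented)."""
    subnet_id: Optional[str]   # set when the node is assigned to a subnet
    is_api_boundary: bool      # set when the node is an API boundary node

def binary_to_start(record: NodeRecord) -> Optional[str]:
    """Run the replica when assigned to a subnet, ic-boundary when
    designated an API boundary node, nothing when unassigned."""
    if record.subnet_id is not None:
        return "ic-replica"
    if record.is_api_boundary:
        return "ic-boundary"
    return None
```

In the same place, the real orchestrator would also switch the firewall configuration depending on the chosen role.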


We are planning to run production tests of ic-boundary in the coming month and afterwards start the work of integrating the API boundary nodes into the Internet Computer core. This will be a longer process as it involves many teams and the Node Providers.

We will keep you posted on the progress and are looking forward to your feedback!


Thanks for the update!
I have a couple of questions.

In the case the node is turned into an API boundary node, how’s its reward calculated?
Does this also mean that the same node hardware will be dedicated “only” for running the ic-boundary binary?

Also, in the scenario with multiple independent API Boundary Nodes, how does an HTTP gateway choose the API boundary node to route the requests to?
How can I trust a service worker fetched from a third-party HTTP gateway?


Thanks so much for the update. Is it more realistic to think it will all be complete by the end of 2024?

Hey @ilbert,

You raise some very good points:

In the case the node is turned into an API boundary node, how’s its reward calculated?

At the moment, node rewards are fixed and independent of whether a node is active in a subnet or waiting for its turn as an unassigned node. Hence, there are quite a few nodes that are unassigned and still being rewarded. We propose to reuse several of them as API boundary nodes. That means that initially there will be no extra costs for the IC. In a later stage, we think it would be worthwhile to look at rewarding nodes according to the actual work done (i.e., requests served in the case of API boundary nodes).

Does this also mean that the same node hardware will be dedicated “only” for running the ic-boundary binary?

Yes, initially, we propose to use unassigned nodes as API boundary nodes. It might seem like overkill in terms of hardware, but as explained above, there are unassigned nodes that are being rewarded while not currently used. In the future, the API boundary nodes should take on more and more responsibilities that might require “beefier” machines (e.g., running read-only replicas for edge caching). Also, it is important to note that our proposal to integrate ic-boundary into the replica image doesn’t mean that it needs to run on the same hardware as a replica. It just means that the qualification process and maintenance get simpler, as we don’t have to duplicate work.

Also, in the scenario with multiple independent API Boundary Nodes, how does an HTTP gateway choose the API boundary node to route the requests to?

This is a very good point: As part of the boundary node decentralization, we will provide a client-side library that discovers all API boundary nodes based on a seed and then routes the requests to one of these API boundary nodes.
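The seed-based discovery could look roughly like this. This is a toy model: `fetch_nodes` stands in for the certified registry lookup, and the certification itself is not modelled here:

```python
def discover(seeds, fetch_nodes):
    """Starting from a hard-coded seed list, ask each reachable seed for
    the full list of API boundary nodes and merge the results. A dead
    seed must not break discovery, so connection errors are skipped."""
    discovered = set(seeds)
    for seed in seeds:
        try:
            discovered.update(fetch_nodes(seed))
        except ConnectionError:
            continue
    return sorted(discovered)
```

The library would then route each request to one of the discovered nodes according to some selection policy.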

How can I trust a service worker fetched from a third-party HTTP gateway?

We have a project to use trusted execution for the HTTP gateways. This allows the user to first attest that the HTTP gateway is actually running the software it should be running. However, we can only provide that as a reference implementation; whether third parties use it is up to them. For “power users”, there is the IC HTTP Proxy (currently still a proof-of-concept, but it works well), which runs the HTTP gateway protocol locally on your machine and connects directly to the API endpoints of today’s boundary nodes, and to the API boundary nodes in the future. You don’t need to trust anyone with the IC HTTP Proxy as you can verify its source code.


Hey @dfisher, I am not very good at estimating, but end of 2024 seems a bit far away, and I would really hope we finish much sooner. I see several “work packages”: creating ic-boundary and testing it, integrating ic-boundary into the existing replica, rolling out API boundary nodes, creating the client-side routing library, and creating the HTTP gateway reference implementation.

We are close to being done with the first two items: creating and testing ic-boundary, and integrating it into the existing replica.

The item that in my opinion will take the longest is rolling out the API boundary nodes. This is however mostly an operational task that involves node providers and the release team. The boundary node engineers can already focus on the other items.


Any thoughts on how to adjust the current NP rewarding scheme accordingly?

Hi @Ozkangurhan
as we are initially proposing to use unassigned nodes as API boundary nodes, the rewards should stay the same. Once we look into specific hardware for the API boundary nodes, we will start a discussion on the rewarding scheme along the lines of the existing node rewards.


Noble goals for the boundary nodes for sure.

I wanted to add a request I made on the forum regarding the VPN Blocking that is happening via the Boundary Nodes. Here is an excerpt of what I wrote on the Forum:

Could DFINITY disable VPN Blocking to the IC controlled domains enforced by the boundary nodes?
There are many good reasons for people to use VPNs, in Canada for example there are several recent restrictions on YouTube, and Social Media so people use VPNs to be able to access censored sources of news. Outside of Canada many use VPNs to avoid censorship regimes.

@sea-snake commented that the Boundary Nodes probably do the blocking to stop spam bots, which I can understand, but in that case a CAPTCHA could be used to verify that the visitor is a human and not a bot, or perhaps a login with Internet Identity. As it stands, nothing works: the page is blocked completely and a “failed to load” message is displayed.

Again, please consider this alternative to avoid blocking of legitimate users of VPNs.


P.S. As you might have guessed I am a concerned Canadian citizen.


Hi @josephgranata

I have replied in the original thread. Boundary nodes do not perform any VPN blocking or similar. It would be great if you could work with us to understand what is happening and to ultimately resolve it.


Hello everyone,

I am excited to share some recent updates from the Boundary Node team.


We’ve completed the development of our custom router ic-boundary and successfully integrated it into the existing boundary nodes. Currently, our focus is on finalizing everything necessary to put the API boundary nodes under the NNS.


After extensive testing, we have deployed ic-boundary to all production boundary nodes. This means that all API requests (status, query, call, and read_state) are now handled by ic-boundary. The deployment has been smooth, and the robustness of the boundary nodes improved significantly. Previously encountered memory peaks with nginx, leading to occasional out-of-memory issues, have been resolved, resulting in stable memory consumption.

The decision to replace nginx with a custom router appears to be a successful one. This transition has also enabled us to implement highly requested improvements, such as automatic retries in case of errors from the replica. These enhancements will be rolled out in the upcoming days as we apply the final touches and address any remaining small bugs.
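To illustrate what such an automatic retry can look like, here is a minimal sketch with exponential backoff. The function names, parameters, and policy are invented for illustration; this is not the actual ic-boundary implementation:

```python
import time

def call_with_retries(send, request, attempts=3, base_delay=0.1):
    """Retry a request on transient errors from the replica, doubling
    the delay between attempts; re-raise after the last failure."""
    last_error = None
    for attempt in range(attempts):
        try:
            return send(request)
        except ConnectionError as exc:
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))
    raise last_error
```

A production router would additionally distinguish retryable from non-retryable errors and may retry against a different replica.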

Preparing NNS-management

Our primary focus is on preparing for the deployment of the first NNS-managed API boundary nodes. However, there are a few projects we need to complete before achieving this milestone:

  • Orchestrator: Collaborating with the consensus team, we are extending the orchestrator to start ic-boundary when a node becomes an API boundary node and properly shut it down if the node becomes unassigned. Additionally, we are enhancing the orchestrator’s firewall service to apply the right configuration (i.e., open up HTTP(S) to the public) depending on the node type.
  • IPv4-enabled Nodes: To ensure API boundary nodes can serve everyone, they require an IPv4 address. We are working with the node team on a general feature allowing any IC node to have an IPv4 address, making it suitable for API boundary nodes or inclusion in an IPv4-enabled subnet.
  • Discovery Library: We are developing a library to assist IC clients in discovering existing API boundary nodes and routing their traffic to them. The library consists of two parts: discovery and routing. The discovery component involves obtaining a full list of API boundary nodes from the NNS in a certified manner, starting with a seed list. The routing component determines the optimal API boundary node from the list for directing all API requests. The library offers various ways to select a suitable node, such as any healthy node or the one with the lowest latency.
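The selection policies mentioned for the routing component above (any healthy node, or the one with the lowest latency) could be sketched as follows. The field names `healthy` and `latency_ms` are invented for illustration; they are not the library's actual API:

```python
def pick_node(nodes, policy="lowest_latency"):
    """Select an API boundary node from the discovered candidates,
    considering only nodes currently marked healthy."""
    healthy = [n for n in nodes if n["healthy"]]
    if not healthy:
        raise LookupError("no healthy API boundary node available")
    if policy == "lowest_latency":
        return min(healthy, key=lambda n: n["latency_ms"])
    return healthy[0]  # "any healthy node" policy
```

In practice the health and latency data would come from periodic probes of the discovered nodes.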


With the successful deployment of ic-boundary in the production boundary nodes, we are approaching the deployment of the first fully NNS-managed API boundary node. There are still a few items to address on our checklist, and we anticipate rolling them out early in 2024.

In the near future, we will begin exploring ic-gateway (working title), the counterpart of ic-boundary, implementing the HTTP gateway protocol and serving as the core of the HTTP gateways.

We will continue to keep you updated on our progress and look forward to hearing your feedback and addressing any questions you may have!


In the near future, we will begin exploring ic-gateway (working title), the counterpart of ic-boundary, implementing the HTTP gateway protocol and serving as the core of the HTTP gateways.

Considering the IC infrastructure, where will the ic-gateway run?
Will it also be controlled by NNS?
Will it be possible to integrate an ic-websocket-gateway inside it?


In the short term, we unfortunately don’t have any way to host an HTTP Gateway in a decentralized way, due to how DNS and SSL certificates work.

So there will be gateways run on DFINITY infrastructure, and community members will be free to host their own gateways, either specifically for their own canisters or as general gateways that can serve any canister.

I think having different types of gateways working together would be awesome though, making it very easy for someone to set up both an HTTP and a WS gateway for their canister.


Is there a way the IC can provide a decentralized DNS and CA/SSL?

So, I think something like what we proposed in:

would help in enabling the community to spin up new gateways and pay for them directly from their canisters.