Can we somehow achieve or build a canister with an interface behaving like a classic web server?

I agree these are major problems. I’ve reached out to the team separately, hopefully we can get these capabilities added.

I don’t see why the static asset capabilities are being so narrowly defined right now. IMO webpack should have no part in the native functionality of the SDK. We need the ability to retrieve arbitrary static assets by HTTP path, without the client having to know anything about ICP. This should all work with plain, native HTTP requests.

A number of capabilities are currently missing as I see it:

  • ES/JS modules are impossible to use natively (they can only be used with a bundler, and bundlers don’t use ES modules under the hood; they compile them away)
  • Service worker issues like you mentioned (I’ll take your word for it)
  • RSS feeds: you won’t be able to just do an HTTP grab of files off of ICP
  • Building any type of frontend that isn’t loaded from a JS file directly. What if I want an HTML file as my entrypoint? Or a txt file?
  • HTTP/2 and HTTP/3 multiplexing


Agree with all the points. Especially the bit about having webpack in the sdk.

Also can we have the sdk as an npm package please? Or at least a wrapper. Would make it easier to create something like NextJS but for dfinity :wink:


Any news on this?

I’m trying to write an RSS server for a podcast feed. I need to return xml generated from the data in my canister. I’m new to webpack, but afaik I’ll never be able to return a simple xml file that can be parsed by a podcast app; it’ll always be surrounded by that initial loading screen and whatever bootstrap scaffolding html there is.


I believe the answer is that this is work in progress, about 80% complete. I can’t offer a release date, and since I’m not working on it myself I can’t make a good projection for when it will be released, but we do see this feature as highly desirable.


Dudes, you just caused a 100-message thread internally and a demo. Hopefully the author of the demo can post some details.


Guess now I have to…

Yesterday I created a small proof of concept of what something like this could look like to developers. This is in no way official, so treat it as built by some rando in the community…

The very small program at implements an HTTP-to-IC bridge, encapsulating HTTP requests as Candid data and sending them to the canister id mentioned in the hostname. This means you can go to and the full response (HTML, content-type, cookie headers) is under the control of the canister.
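As a rough sketch of what “encapsulating HTTP requests as Candid data” could mean, here is a minimal Rust model. The struct and field names are assumptions for illustration only, not the demo’s actual interface; a real bridge would Candid-encode this record and call the canister with it.

```rust
/// Hypothetical plain-data mirror of an incoming HTTP request, as a bridge
/// might represent it before Candid-encoding and forwarding it to a canister.
/// Field names are illustrative, not the demo's actual interface.
#[derive(Debug, Clone, PartialEq)]
pub struct HttpRequest {
    pub method: String,
    pub path: String,
    pub headers: Vec<(String, String)>,
    pub body: Vec<u8>,
}

/// Build the request record; a bridge would then serialize this and pass it
/// to the canister's exported handler method.
pub fn encapsulate(method: &str, path: &str, headers: &[(&str, &str)], body: &[u8]) -> HttpRequest {
    HttpRequest {
        method: method.to_owned(),
        path: path.to_owned(),
        headers: headers.iter().map(|(k, v)| (k.to_string(), v.to_string())).collect(),
        body: body.to_vec(),
    }
}

fn main() {
    let req = encapsulate("GET", "/feed.xml", &[("accept", "application/xml")], b"");
    assert_eq!(req.method, "GET");
    assert_eq!(req.path, "/feed.xml");
    println!("{:?}", req);
}
```

The point of the shape is that the canister sees the whole request as ordinary data and returns the whole response (body, content type, headers) as data too, which is what puts the response fully under the canister’s control.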

As a demo, I wrote a canister that implements a simple Telegram bot, you can play with it at and see the code at

Currently, I am hosting this service myself, outside of the IC infrastructure (actually on Amazon Lambda…), but it would be feasible (but not necessary, as I have just proven) that a service similar to this could eventually be provided as part of the Internet Computer platform, with a more official domain, maybe with a name registration service…

I agree that this feature is great for onboarding developers, but it also has its problems: e.g. you can initiate HTTP requests from the canister. And worse, you throw out more or less all the amazing security guarantees of the Internet Computer (e.g. responses are tamperproof, user identities are verified by the system). So yes, things are moving … but let’s get the platform properly live first!


This is awesome, thank you nomeata!

I was in the process of building something similar so I can turn my canister’s data into a parseable rss feed.

Are we invited to use https://<canister_id> ? It would be handy as I build out and test my app while we wait for a more formal/secure solution.


@nomeata when I run ic-http-lambda locally, the response tells me Use https://<cid>!. That won’t work for locally-running canisters, right? What’s the URL when running locally?

I assume the address is supposed to be https://<cid>, with a ‘.’ between <cid> and ic, but of course neither work for me and my local canister.

I’ve tried <cid>.localhost:7878, <cid>.ic.localhost:7878 and <cid>ic.localhost:7878 (7878 is the port spit out when I run cargo run from the ic-http-lambda folder). All of those tell me to use the address.


Sure, you can use that for experiments, but don’t complain if it breaks :slight_smile:

For local use, you have to

  • Change let url = ""; (if you also want to use a local replica)
  • Change the line .and_then(|h| h.strip_suffix("").map(|x| x.to_owned())) to your domain (which for local development p
  • or, if you test just one canister, just hardcode
    let cid = ic_types::Principal::from_text("your-canister-id".to_string()).unwrap()

PRs against to make local use more convenient (e.g. a flag --always-canister <cid>, a flag --endpoint <url>…) are welcome.
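The steps above amount to a small routing decision, which can be sketched in plain Rust. The domain and fallback canister id here are invented values for illustration, not the bridge’s real configuration: strip the bridge’s domain suffix off the hostname to recover the canister id, or fall back to a hardcoded id when testing a single canister locally.

```rust
/// Invented fallback id for single-canister local testing; replace with your own.
const FALLBACK_CANISTER_ID: &str = "rwlgt-iiaaa-aaaaa-aaaaa-cai";

/// Recover "<cid>" from a hostname of the form "<cid>.<bridge_domain>",
/// falling back to the hardcoded id when the hostname doesn't match.
fn resolve_canister_id(host: &str, bridge_domain: &str) -> String {
    host.strip_suffix(bridge_domain)
        .and_then(|prefix| prefix.strip_suffix('.'))
        .map(|cid| cid.to_owned())
        .unwrap_or_else(|| FALLBACK_CANISTER_ID.to_owned())
}

fn main() {
    // Hostname carries the canister id as its first label.
    assert_eq!(
        resolve_canister_id("abcde-cai.example.org", "example.org"),
        "abcde-cai"
    );
    // Unknown host: fall back to the hardcoded test canister.
    assert_eq!(resolve_canister_id("localhost", "example.org"), FALLBACK_CANISTER_ID);
    println!("ok");
}
```

In the real code the string recovered here would then be parsed with ic_types::Principal::from_text, as in the hardcoded example above.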


I submitted a PR that adds local replica support. I’ve been using it and it’s working for my needs.

I also was able to convert your ic-telegram-bot’s request/response code into Motoko so I can write my server in Motoko. It works great, but took some finagling as I tried to wrap my head around the Rust code. Maybe someone can save some time by checking out my code, and maybe someone has input on how it can be improved.

Again, thanks @nomeata for your work on this! You’ve made it possible for me to begin work on my podcast host.


Cool stuff!

Instead of body : [Nat8] you can use body : Blob; they have compatible types in the Candid world, but Blob is more efficient in Motoko (then again, we still don’t have all the operations on Blob that you might expect, so maybe an array is better for now).

@nomeata I’ve been working with the ic-http-bridge locally and it’s working great. I did have to increase the timeout in the waiter because update calls take longer than 5 seconds to complete. Unfortunately all my requests have to be upgraded to update requests even if they don’t modify state (see this thread)

Because I need to increase the timeout, I can’t use your implementation at. I just get timeouts if I try that, since 5s is not enough. So I’m trying to release the modified ic-http-lambda code as my own function on AWS Lambda. I’ve done a little with AWS, but never a Lambda function, and am getting a little tripped up trying to deploy it.

When setting up the lambda function, I don’t see Rust as an option when choosing the runtime language. I do see “Provide your own bootstrap on Amazon Linux 2”, which I assume is what you must have done to get this to run.

I noticed this runtime. Is that what you used? If so, did you use the AWS CLI or the Serverless framework?

I also noticed this in the ic-http-lambda readme: “To build for Amazon lambda, and hence musl, this is using a patched agent-rs without an openssl dependency for now.”

Will I need to make changes to (patch?) the runtime code (agent-rs?) to get it to work the way you did?

Sorry if these questions seem vague, but that’s the state I’m at right now in getting this to work. I would greatly appreciate your help getting my own version of ic-http-lambda up and running!

At Fleek we’re trying to build a solution that allows you to surface frontend canisters like any other web app. You can have a basic CRA app, add these files (dfx.json · GitHub), and simply dfx deploy --network ic. After it’s deployed, you can access the website on a URL like

Here’s how it works behind the scenes: the initial request hits our server, which returns a simple bootstrap script and installs a service worker in your browser. After the service worker is ready, canister assets are fetched directly from the IC gateway (thanks to the SW, there’s no proxy between your browser and the IC gateway). It’s also “bot-friendly”: when a SW is not available (search engines, link previews, …), those requests are proxied to the IC.


Because I need to increase the timeout, I can’t use your implementation at

Which timeout do you need? I can change my setup if you want. But maybe better to let you host your own :slight_smile:

I do see “Provide your own bootstrap on Amazon Linux 2”, which I assume is what you must have done to get this to run.

I think so, yes. It says “Custom runtime on Amazon Linux 2” here. (Not an AWS expert myself.)

Will I need to make changes to (patch?) the runtime code (agent-rs?) to get it to work the way you did?

Nope, if you look at my Cargo.toml you’ll see that it pins the patched version of the agent:

ic-agent = { git = "", branch = "joachim/musl-hacks" }
ic-types = { git = "", branch = "joachim/musl-hacks" }

@nomeata I tried putting together a PR for the increased timeout, but I can’t get ic-http-bridge to run locally anymore, even without the change. When I try to make a request to, it causes the local dfx server to crash, which then restarts with a new port number, and ic-http-bridge responds with Could not reach the server, presumably because it’s looking at the old port. Relaunching ic-http-bridge with dfx’s new port just repeats the cycle and I never get a response.

This is the crash reported by the process started with dfx start:
thread 'Http Handler' panicked at 'Opening old round db failed 493301', src/cow_state/

In any case, when I was able to get this running, the timeout I needed to increase was on line 164:

let result = if result.upgrade {
    // Re-do the request as an update call
    let waiter = delay::Delay::builder()
        .timeout(std::time::Duration::from_secs(90))    // <-- increased from 5 to 90

90 seconds is definitely more than I need; I was just using that for testing. I recall responses coming back in around 30-40 seconds.
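The trade-off behind that timeout can be illustrated generically (this is not ic-agent’s actual waiter, just a hypothetical poll-until-done loop with a deadline): if the deadline is shorter than the time the update call needs to complete, the request fails even though the canister would eventually answer.

```rust
use std::time::{Duration, Instant};

/// Generic sketch of a poll-until-done waiter with a deadline. This is an
/// illustration of the timeout trade-off, not ic-agent's real implementation.
fn wait_until<F: FnMut() -> bool>(mut done: F, timeout: Duration, poll: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if done() {
            return true; // the call completed within the deadline
        }
        if Instant::now() >= deadline {
            return false; // timed out before the call completed
        }
        std::thread::sleep(poll);
    }
}

fn main() {
    // A "call" that completes on the third poll finishes within a generous deadline...
    let mut polls = 0;
    assert!(wait_until(
        || { polls += 1; polls >= 3 },
        Duration::from_secs(1),
        Duration::from_millis(1)
    ));
    // ...but a call that never completes hits the deadline instead.
    assert!(!wait_until(|| false, Duration::from_millis(10), Duration::from_millis(1)));
    println!("ok");
}
```

With 30–40 second update calls, any waiter built with a 5-second deadline will take the timed-out branch every time, which is why bumping the value (here to 90 seconds for testing) makes the requests succeed.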


EDIT: did you try using the dfx start --clean command yet?


Which command? I’m not seeing --clean as a flag on any of the dfx commands. In the past, I’ve cleaned by deleting the .dfx folder, but doing that and then rebuilding results in the same cid for my canisters, so there must be something else that’s not getting cleaned up.

edit: Not sure how I missed dfx start ~~clean but things are working now
edit edit: those tildes are supposed to be dashes, but editing a post on this forum causes a 403 error if it includes dashes…


Can we implement a Telegram bot entirely with canisters in the future?

Sure, why not? (Assuming you use an HTTP bridge like the one I built, or eventually build on something similar that makes it into the official offering.)