Boundary node http response headers

I believe that would work for the podcasting use case if there is a URL exposed for non-certified uses. .raw has been working well for me so far.

I don’t think the discussion is about how to implement the streaming anymore; it seems clear it’s best done in a much more scalable/secure manner outside of the assets canister.

Unfortunately it looks like we’ll have to wait a while to get this working, so I just want the boundary nodes to not filter out Range headers since my fork of the certified-assets canister has an implementation of partial responses that will work for certain use cases.

You still need to implement partial responses; my solution does this from a canister.

I don’t care where it’s done (except I want the best solution possible) it just needs to be done. I have a working canister implementation that myself and others would benefit from.

Again, I just need Range headers. Just ignore my fork of the assets canister, it’s just there temporarily for whoever wants to use it.

There are certification schemes that support that, using suitable hash functions, and we did consider them back then before we settled on this certification MVP because we needed something simple. But I wouldn’t say it’s impossible, we could extend our protocol here. See, e.g.

We do - that’s the raw.ic0 URL, isn’t it? And there I would have indeed expected headers to be passed through.


Thanks for the pointer! I had wondered if rolling hashes or something could do it. We want certification checking by default, with an opt-out via icx-proxy where the opt-out would itself be certified. We could pass the headers, but currently there is no way to certify the Range response. That said, I don’t think there is a security issue, but I need to get it vetted.


This sounds like exactly what I need.

Hi, @nomeata :slight_smile:
Why is the SHA-256 hash stored for each asset and validated in the service worker?
I’m not an expert, but isn’t it enough for “certification” that the service worker checks the IC certificates in the asset’s headers (to avoid data spoofing)?
For example, about the SHA: if I upload assets myself with a manually generated MD5 hash, then that hash is saved in the certified-assets canister, but the service-worker validation will fail.

I don’t quite understand the question. The certification needs to build a trust path between something that the user (the service worker) knows, namely the IC root key, and the file just loaded. This involves a few steps (root public key → subnet public key → subnet state merkle tree root → subnet state merkle tree entry with certified data from canister → canister merkle tree root → canister merkle tree entry with SHA-256 of file loaded → file), and the last of those steps requires the SHA-256 hash of the file. So the canister needs the SHA-256. Also see this video for more details:
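As a concrete illustration, the last link in that chain (certified hash → file) is just a byte-for-byte hash comparison. A minimal sketch, assuming the certificate checks have already proven that `expectedSha256Hex` is the value the canister certified (the function name is hypothetical, not from the actual service worker):

```typescript
import { createHash } from "node:crypto";

// Hypothetical final step of the verification chain: the certificate has
// already proven which SHA-256 the canister certified for this path; the
// service worker still has to check that the bytes it actually received
// hash to exactly that value.
function assetMatchesCertifiedHash(
  assetBytes: Uint8Array,
  expectedSha256Hex: string,
): boolean {
  const actual = createHash("sha256").update(assetBytes).digest("hex");
  return actual === expectedSha256Hex;
}
```

This is also why a manually supplied MD5 (as in the question above) fails: the verifier recomputes SHA-256 over the body, so a value produced by any other hash function cannot match.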

Is it the case that the default asset canister doesn’t calculate the SHA256 upon upload, but requires the uploader to set it? Then that’s an engineering choice around that canister.


Now everything is clear for me, thanks a lot!

Do you have any plans for this in the near future?

I don’t.

BTW, for many applications it might be a good option to serve HTML and JavaScript safely over the certified URL, and then do more complicated stuff (e.g. fetch video chunks certified by a stream signature) from your JS in an application-specific way, not using the certified HTTP stuff in that case. That way there is no need to wait for me or DFINITY to find generic solutions (which tend to be harder).
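In that application-specific style, the JS can plan its own chunked fetches instead of relying on the boundary nodes forwarding a Range header. A small sketch of the planning step (the chunk size and the one-request-per-chunk approach are assumptions for illustration, not part of any canister API):

```typescript
// Sketch: split a download of `totalSize` bytes into Range header values,
// one per request, so the app can fetch and verify chunks itself.
function planRanges(totalSize: number, chunkSize: number): string[] {
  const ranges: string[] = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    // HTTP byte ranges are inclusive, hence the -1.
    const end = Math.min(start + chunkSize, totalSize) - 1;
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}
```

For example, `planRanges(10, 4)` yields `["bytes=0-3", "bytes=4-7", "bytes=8-9"]`; the app would issue one fetch per entry and verify each chunk before feeding it to the player.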


Guys this is a humble opinion on two things that I feel are faces of the same decentralization coin.

  1. _raw
    This is a security hole I wish DFINITY as an org had never introduced. The implication of a single replica serving arbitrary data that cannot be checked for integrity is difficult to digest. Also, now that _raw is deployed in the wild, it’s difficult to roll back such decisions.

  2. HTTP headers
    My opinion is that DFINITY should NOT provide guidance on, or spec out, “web content related” canisters. HTTP is an independent spec; what does it mean to have a restrictive/additive HTTP spec within the IC spec? It’s a slippery slope: why stop at web server specs, let’s also do mail server specs.

I sympathize with @lastmjs; he is being coerced into doing streaming one particular way. Having said that, I don’t think adding anything to the spec or the boundary nodes is the right idea.

Instead, let’s work towards a solution where @lastmjs can run his canister and his own boundary nodes, and expose any header and any un/secure content he wants to. There is dissonance in what we are asking developers to do: providing an unsecured _raw path while having an opinion on streaming being done way X vs. way Y is not an IC concern.

The effort required may be similar.


I agree with your words!

About your suggestion: there are some limitations to the boundary-node approach:

  • Deploying your own boundary node under a custom domain might be a great solution, but then it will lose the ability to work under the domain, especially when custom subdomains are released. Demergence podcasts will live fine on (for example), but not on potentially
  • This may affect the audience’s trust (not a fact)
  • The streaming feature is needed not only by Demergence; it might be useful for others. And the easiest way for an ordinary developer to get streaming is to use the certified_assets canister (simply set "type"="assets" in your dfx.json and let’s go)

In general, the solution is acceptable. Moreover, there is info about deploying a boundary node

I’ve been thinking about this for the last week and can suggest the following:

  • In theory, you can develop a custom service worker with range streaming support and register it on your canister manually. It will catch .raw requests with Range headers, try to collect chunks from canister query methods, and return a response with ranges to the client. Here is my attempt to do this; it works nearly fine, with TODOs for certified_assets
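Such a service worker would need to parse the incoming Range header before deciding which chunks to collect. A sketch of just that pure parsing step, assuming single-range `bytes=` headers (multi-range requests are left out for simplicity):

```typescript
// Parse a single-range "bytes=start-end" header into absolute offsets.
// Supports open-ended ("bytes=5-") and suffix ("bytes=-500") forms.
// Returns null for anything malformed or unsatisfiable, in which case the
// service worker would fall back to a plain 200 or a 416 response.
function parseRange(
  header: string,
  totalSize: number,
): { start: number; end: number } | null {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header.trim());
  if (!m) return null;
  const [, s, e] = m;
  if (s === "" && e === "") return null;
  // Suffix form "bytes=-N" means the last N bytes of the resource.
  const start = s === "" ? Math.max(0, totalSize - Number(e)) : Number(s);
  const end =
    s !== "" && e !== "" ? Math.min(Number(e), totalSize - 1) : totalSize - 1;
  if (start > end || start >= totalSize) return null;
  return { start, end };
}
```

With the Safari probe discussed later in this thread, `parseRange("bytes=0-1", 46984888)` yields `{ start: 0, end: 1 }`, which the worker would answer with a 206 and `Content-Range: bytes 0-1/46984888`.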

To whom it may concern: I’ve been working with Daniel to try to figure out this strange Safari issue where I can’t get our video to play from our canister. If I run a local version of the icx-proxy and point it at I can get my video to play at http://localhost:3000/-/1/ex?canisterId=r5m5i-tiaaa-aaaaj-acgaq-cai but if I call the same thing at the video does not play.

In both cases, Safari stops the request when it sees ‘video’ in the content type and sends a new range request asking for the first two bytes. When I go through a local proxy, that range request gets through (it is a header on the request: Range: bytes=0-1). When I go straight to Safari says that it is sending the header, but I’ve manipulated my server to return the actual headers that I’m getting, and the range request is not there. So either Safari is lying (Daniel thinks so) or something is stripping off the range request on the way to my canister. I don’t know the topography of where a request actually goes, but I suspect there is a boundary node and a proxy?

Request - through local version of icx-proxy
GET /-/1/ex HTTP/1.1
Accept: */*
Connection: Keep-Alive
Range: bytes=0-1
Host: localhost:3000
Accept-Language: en-US,en;q=0.9
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15
Referer: http://localhost:3000/-/1/ex?canisterId=r5m5i-tiaaa-aaaaj-acgaq-cai
Accept-Encoding: identity
X-Playback-Session-Id: 6FFA5EE3-52F6-4DDC-8297-300A4C22A9DF

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1/46984888
Accept-Ranges: bytes
Content-Type: video/mp4
Content-Length: 2
Date: Thu, 21 Apr 2022 22:58:16 GMT

Request 2 - through
GET /-/1/ex
Range: bytes=0-1
Accept: */*
Accept-Encoding: identity
Connection: Keep-Alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15
X-Playback-Session-Id: 62FA3A12-FD2A-4C34-839A-B6963DCFC6F1

Actual headers that get to my canister
[("host", ""), 
("x-real-ip", ""), 
("x-forwarded-for", ""), 
("x-forwarded-proto", "https"), ("connection", "close"), 
("user-agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15"), ("accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"),
 ("accept-language", "en-US,en;q=0.9"), 
("accept-encoding", "gzip, deflate, br")]

Access-Control-Allow-Credentials: true
Content-Type: video/mp4
Access-Control-Allow-Methods: GET, POST, HEAD, OPTIONS
Access-Control-Expose-Headers: Content-Length,Content-Range
Access-Control-Allow-Origin: *
Date: Thu, 21 Apr 2022 22:56:06 GMT
Access-Control-Allow-Headers: DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Cookie
Server: nginx/1.21.3
x-cache-status: MISS

This thread confirms my suspicions.

Looks like the short-term solution is to set up a server running icx-proxy that can serve your requests. So @lastmjs you could set up and it would work.

As far as certified assets go, range requests allow you to return more than what was requested, so if you certified 1MB chunks you could always return at least that chunk. If the client requests something in the middle of a chunk, rewind to the earliest certified starting point and return the whole chunk. Maybe the chunks need to be smaller, but since Safari requests them in order and the latency is pretty bad, it can take a long time to download large files.
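The rewind-to-chunk-boundary step described above is essentially one division. A sketch, assuming 1 MiB certified chunks (the chunk size and function name are illustrative assumptions):

```typescript
const CHUNK_SIZE = 1 << 20; // assumed size of each certified chunk (1 MiB)

// Map the first byte the client asked for to the certified chunk that
// contains it: rewind to the chunk boundary and serve at least that chunk,
// so the response is always a byte range whose hash was certified.
function certifiedChunkFor(
  requestedStart: number,
  totalSize: number,
): { chunkIndex: number; start: number; end: number } {
  const chunkIndex = Math.floor(requestedStart / CHUNK_SIZE);
  const start = chunkIndex * CHUNK_SIZE;
  const end = Math.min(start + CHUNK_SIZE, totalSize) - 1; // inclusive
  return { chunkIndex, start, end };
}
```

A request starting mid-file, say at byte 1,048,581 of a ~47 MB video, would be answered from chunk 1 starting at byte 1,048,576, with a `Content-Range` covering the whole certified chunk.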

A bit irrelevant, as it seems like icx-proxy does not certify data anyway (but it would be great to have confirmation at “Can the icx-proxy return certified assets?”)

The boundary nodes are most likely removing the Range header, that’s exactly the issue I was having

Line 390

This is not the nginx configuration on the boundary nodes, but it’s an exact copy. You can check how the boundary node is configured with respect to header stripping by looking at this code.

So if it isn’t being stripped, do you think that Safari is just not sending it properly?

If it’s getting stripped, it doesn’t matter how Safari is sending it. It will get stripped.

Implementing our own version of an icx-proxy should fix this issue, correct?

@skilesare no, the next steps for the issue you are facing would be:

  • find out which header is being stripped and where (in the nginx source code I shared)
  • motivate inclusion of the header, or work around it

Currently AFK; I would love to get your problem resolved under the video streaming effort already underway at DFINITY. There are many ways to do video streaming; the core focus is to select one that doesn’t break boundary node caching.

Boundary node caching is an absolute must if the IC is to host YouTube-scale video. So it’s not just about simply enabling Range headers. What’s your project/contact point? I will try to get you plugged into the video effort.