Hashed Block Payloads

I agree with you. The IC shouldn’t rely on other networks for things it can already do, since it is better suited for those use cases. It might be valuable to integrate with other networks for things the IC cannot do (yet).

I agree that DFINITY should focus on the long-term vision of delivering a world computer and not compromise. But until then, imo the questions are:

  • are there current limitations of the IC that make devs’ lives harder or make them rely on centralized infrastructure?
  • if that is the case, is it a problem worth solving?
  • if so, are there solutions that we can develop in the short-medium term or should we just wait for the IC to take care of them?

I agree with Sam and the thousands of other people who have said this before.

Any update on hashed block payloads @yvonneanne @Manu?

Hey @lastmjs! This is still being worked on, and it’s on the roadmap under the “Stellarator” milestone, which we hope to achieve this year. We are exploring many options and collecting extra metrics from mainnet (to see, e.g., how often replicas today already have the ingress messages from a block). The core idea is still the same: don’t always broadcast the full block, but rather exchange ingress messages beforehand and only include hashes of ingress messages in the block, so that bandwidth is used more effectively. Note that our plan now is to initially focus more on throughput (ingress bytes per second that a subnet agrees on) and not on supporting very large ingress messages.
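
To make the idea concrete, here is a minimal sketch of how payload reconstruction could look under this scheme. Every type and function name here is my own illustration rather than the actual replica code: the block carries only ingress hashes, and a replica rebuilds the payload from messages it already holds, falling back to requesting whatever is missing from peers.

```rust
use std::collections::HashMap;

/// Hypothetical 32-byte hash of an ingress message.
type IngressHash = [u8; 32];

/// A full ingress message (simplified).
struct IngressMessage {
    bytes: Vec<u8>,
}

/// With hashed block payloads, the block only carries the hashes;
/// replicas fetch the message bodies from their local ingress pool,
/// which was populated by the earlier message exchange.
struct Block {
    ingress_hashes: Vec<IngressHash>,
}

/// Local pool of ingress messages a replica has already received
/// via broadcast before the block arrived.
struct IngressPool {
    messages: HashMap<IngressHash, IngressMessage>,
}

impl IngressPool {
    /// Try to reconstruct the full payload for a block. On failure,
    /// returns the hashes that are still missing and must be
    /// requested from peers before the block can be validated.
    fn reconstruct<'a>(
        &'a self,
        block: &Block,
    ) -> Result<Vec<&'a IngressMessage>, Vec<IngressHash>> {
        let mut payload = Vec::new();
        let mut missing = Vec::new();
        for hash in &block.ingress_hashes {
            match self.messages.get(hash) {
                Some(msg) => payload.push(msg),
                None => missing.push(*hash),
            }
        }
        if missing.is_empty() {
            Ok(payload)
        } else {
            Err(missing)
        }
    }
}
```

The point of the scheme is that the expensive broadcast of message bodies happens once, ahead of block time, instead of being repeated inside every block proposal.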

I think we’re close to having an approach that we believe could work, so then we will write it up and share in more detail.

Super super excited for this!

Any tentative thoughts on the throughput increase we could expect?

Hi @Manu, this topic of hashed block payloads was a major point of discussion in our DeAI WG meeting today. The current limits on file-upload throughput are a big factor for devs uploading bigger LLM models into canisters before they can be run. HBP would likely make a huge difference for this use case.

@lastmjs The first goal would be reaching ~4 MB/s of ingress throughput; after that we can set the bar even higher, of course. Conceptually, with the approach of having only ingress hashes in blocks, we should be able to fully utilize the bandwidth nodes have.
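
To put that in perspective with example numbers of my own: at a sustained 4 MB/s, uploading a 4 GB model would take roughly 4096 MB ÷ 4 MB/s ≈ 1024 s, i.e. about 17 minutes of ingress time.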

Thanks for the input! Yeah, I can imagine; we also experienced this when uploading the full Bitcoin UTXO set. So hopefully improvements are coming soon :).

That’s what I’m talking about! Wow that would be amazing.

On the question of increasing the message size limit, I believe I have a promising solution: basically, embracing sockets.

I call the idea socket-based canisters: x.com

Essentially, each canister would have a socket that could be read from and written to. The data would be read and written in chunks, stored in memory across writes. Once the chunks form a parseable message, be it Candid RPC, HTTP, SSH, etc., the message would be interpreted. This could allow messages of arbitrary length, and hopefully we can incorporate this functionality at the WASI level, allowing for close to off-the-shelf HTTP or other functionality.
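
Here’s a rough sketch of the chunk-accumulation part of the idea, assuming a simple 4-byte length-prefix framing. The endpoint name and the thread-local buffer are purely illustrative (a real design would need per-caller/per-session buffers and an actual IC API), so treat this as a sketch, not a proposal for the real interface:

```rust
use std::cell::RefCell;

thread_local! {
    // Hypothetical per-canister receive buffer that persists across
    // update calls (a real design would key buffers per caller/session).
    static SOCKET_BUFFER: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

/// Hypothetical "socket write" endpoint: the client streams a large
/// message as a sequence of chunks, each below the ingress size limit.
/// Returns the assembled message once a complete frame has arrived.
fn socket_write(chunk: Vec<u8>) -> Option<Vec<u8>> {
    SOCKET_BUFFER.with(|buf| {
        let mut buf = buf.borrow_mut();
        buf.extend_from_slice(&chunk);
        try_parse_message(&mut buf)
    })
}

/// Assumed framing: the first 4 bytes are a little-endian length
/// prefix. Once the buffer holds a full frame, drain it and return
/// the body for interpretation (Candid, HTTP, etc.).
fn try_parse_message(buf: &mut Vec<u8>) -> Option<Vec<u8>> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return None; // frame not complete yet; wait for more chunks
    }
    let message = buf[4..4 + len].to_vec();
    buf.drain(..4 + len);
    Some(message)
}
```

A client would then split a large payload into, say, 1 MB chunks and call `socket_write` repeatedly until it returns the assembled message.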

I thought this was already the throughput limit? I believe in our file-uploading implementation in Azle we’ve capped things at about 4 MB/s.