No, that is an open topic. I know that we need one or more archive canisters and one or more index canisters to allow searching by various keys. But work on that hasn't started.
New tokens can be found by querying ftInfo for the whole range of asset ids. But the ledger is not a source for what those tokens actually are. You can see the controller of the token and a self-description, which of course isn't trustworthy. The ledger does not store the token's symbol. That has to come from somewhere else, from a trusted source.
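As a rough sketch only (the `query_ledger` helper and the shape of the ftInfo result are placeholders, not the actual HPL interface), token discovery by scanning asset ids could look like this:

```python
# Hypothetical sketch of token discovery by scanning asset ids.
# `query_ledger` and the result fields are assumptions, not the real HPL API.

def discover_tokens(query_ledger, n_assets: int):
    tokens = []
    for asset_id in range(n_assets):
        info = query_ledger("ftInfo", asset_id)  # placeholder query call
        # The ledger only knows the controller and a self-description.
        # The self-description is not trustworthy and the symbol has to come
        # from an external, trusted registry.
        tokens.append((asset_id, info["controller"], info["description"]))
    return tokens
```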
You mean if a new token type is introduced other than ft? For example, a type for NFTs?
Fee modes can be changed and new ones introduced later, including one that works with a separate fee token so you don’t have to pay in the transferred asset itself (in which case the cost will naturally be independent of the transferred amount and would reflect the actual raw processing cost). But if we pay a fee denominated in the transferred asset itself then my question would be “why not”? Why not make it proportional to the amount? Why not let the ledger make some profit on fees that go beyond the actual cost? Or, put differently, why not take from large transactions and subsidise small ones if it is at a level that large transactions won’t care about?
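As a toy illustration only (the rate and floor are made-up numbers, not a proposed fee schedule), "proportional with a flat floor" could look like this:

```python
# Toy fee model: proportional to the transferred amount, but never below a
# flat floor that covers the raw processing cost. All numbers are made up.

def fee(amount: int, rate: float = 0.001, floor: int = 10) -> int:
    return max(floor, int(amount * rate))

print(fee(5))          # 10   -> a tiny transfer pays only the floor (is subsidised)
print(fee(1_000_000))  # 1000 -> a large transfer pays proportionally more
```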
I manually submitted some transactions through the frontend while the test was ongoing to check for impact on latency. The time to completion was often unchanged, the normal 6-7s, but I also saw some outliers that took 20s. I guess that’s the nature of it when cross-subnet traffic gets busy.
Thanks for your detailed reply! I’ll certainly be following along and giving it a go when I’ve got a couple of minutes.
On the topic of fees/tokenomics - I feel like this sometimes gets in the way of cool tech/ideas by moving the topic away from the core engineering. I’ve seen too many crypto projects make a mess of cool stuff by getting the tokenomics/fees wrong (ICP/ETH are perhaps both good examples). I am interested in this area and I think sustainable ledgers are needed. I think a flat fee (adaptive if needed) based on cycle costs plus a small % profit would be far better than a fee based on tx value. It seems a bit like income tax and nobody likes that!
Yes, that is exactly the limit that I meant by “ingress messages per subnet is the bottleneck”. I was conservatively calculating with 400-500 tps as the limit here. The 970/s was measured by DFINITY in a test environment. I was going with half of that which is also something that I could replicate myself on mainnet with reasonable effort and without having to push it.
There is also a limit imposed by message executions per canister, which comes from the time it takes to load the wasm module. That might be somewhere around 700/s. If the ingress limit is indeed as high as 970/s then it would make sense to deploy two aggregators per subnet, because then we remove the number of wasm invocations as the limit by splitting the ingress messages over two canisters.
Anyway, for the time being I will work with a conservative estimate of 400-500/s per subnet and leave tricks to squeeze more out of it for later.
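Putting those rough figures side by side (all taken from the posts above):

```python
# Back-of-envelope per-subnet throughput, using the rough figures from above.
ingress_limit = 970      # ingress messages/s, measured by DFINITY in a test env
wasm_exec_limit = 700    # message executions/s per canister (wasm load time)

one_aggregator = min(ingress_limit, wasm_exec_limit)        # 700/s: executions cap it
two_aggregators = min(ingress_limit, 2 * wasm_exec_limit)   # 970/s: ingress caps it again

conservative_estimate = 450   # the 400-500/s figure used for planning
print(one_aggregator, two_aggregators, conservative_estimate)
```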
What do you mean by “Is there any advantage over this”? There is simply no alternative. That’s what it takes to make canister calls with one-hop cross-subnet calls in the back. That finality is still faster than any competing blockchain. The latency can also be hidden in the frontend. Because the moment a transfer is accepted by an aggregator it is guaranteed to be processed by the ledger. It can still fail because of insufficient funds, but for certain transfers the frontend may know that the funds are there, so it can already show success to the user after the submission to the aggregator is made (after ~3s).
That finality is still faster than any competing blockchain.
What exactly is a competing blockchain? I thought a finality of 8 seconds was in the slow category; Solana’s is about 1 second and Avalanche’s about 2 seconds.
My intention is to ask what advantages of HPL would outweigh the slower finality. The following immediately come to mind. Is there anything else?
Unlimited scale is possible.
Multi-tokens
Ability to do atomic swaps with transaction execution in the ledger canister
We sustained approx. 5k tps for 6h, which burned cycles at a rate of approx. 900 TC per day or 10B cycles per second. That is about 2M cycles per transaction.
The IC public dashboard currently shows 28B cycles/s after we stopped this test. That is not from us. During our test the IC dashboard peaked at 40B cycles/s which included our ~10B cycles/s.
According to this table, ingress message reception costs 1.2M cycles plus 2k cycles per byte in the message. Not sure which bytes are counted though (payload only or request envelope). Then x-net byte transmission costs another 1k cycles per byte. So if an ingress message is 200 bytes and the forwarded part to the ledger is 100 bytes then that would make for a base cost of 1.2M + 0.4M + 0.1M = 1.7M per transaction, without counting the cost of instructions. That seems to match the observed cost of approx. 2M per transaction.
UPDATE: More precisely we have for the cost in cycles:
1.2M ingress message reception (+ bytes)
0.59M ingress message execution (+ instructions)
We measured the aggregator instructions per submission at 20k instructions, which cost 8k cycles. So that makes 1.2M + 0.59M + 0.008M = ~1.8M cycles, which leaves ~0.2M cycles for the bytes.
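The same reconciliation as a quick calculation (all figures approximate and taken from this thread):

```python
# Reconciling the measured ~2M cycles per transaction with the cost table above.
reception    = 1_200_000   # ingress message reception, base
execution    =   590_000   # ingress message execution, base
instructions =     8_000   # ~20k aggregator instructions per submission

known_base = reception + execution + instructions   # ~1.8M cycles
measured   = 10_000_000_000 / 5_000                 # 10B cycles/s at 5k tps = 2M per tx
print(known_base, measured - known_base)            # ~0.2M left for the per-byte charges
```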
Thanks for pointing that out. I checked several sites and found the finality time quoted as both ~1s and 12s. It is not an official Solana document, but vasa’s site describes an “optimistic confirmation” mechanism that can shorten finality.
Thank you. I did not know that. Another thing that was mentioned is using the frontend to improve the user experience by making things feel more snappy. Many projects on ICP do not take advantage of that. There is no need for the user to watch the transaction load for 8 seconds. It can be executed optimistically and then, if it fails, have an error pop up.
Yes, that sums it up pretty well. We accept one additional hop which seems to be adding about 4 seconds of latency in exchange for the ability to scale horizontally. This in turn unlocks the ability to host multiple tokens without risk of one “noisy” token degrading the user experience of other tokens. Hosting multiple tokens unlocks atomic swaps.
I knew the cycle spike would be you testing! So fees on the ledger MUST be set to cover at least 2M in cycles - there is no way that any dev other than DFINITY could sustain 900 TC per day. The cycle burn is mind-blowing… but I suppose that at 5k tps the adoption would be mind-blowing as well
EDIT - It really poses some interesting problems for 221Bravo trying to index this volume of transactions. We’d have to roll out an equally horizontal setup of index canisters working in ‘teams’.
EDIT EDIT - Rough maths… 5k tps is 300,000 tx per minute. If 10,000 txs ~ 2 MB (seems about right for ICRC ledgers), this would require 30 calls a minute to be requested from HPL if the block size doesn’t change. That’s not too bad; however, taking into account processing time (30+ seconds) and the time it takes for messages to travel, a call every 2 seconds becomes a bit more tricky.
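The rough maths above, spelled out (the 2 MB per 10,000 txs figure is the same assumption as in the text):

```python
# Rough maths: how often an explorer would have to pull blocks from HPL.
tps = 5_000
tx_per_minute = tps * 60                      # 300,000 transactions per minute
txs_per_2mb_call = 10_000                     # ~2 MB worth of transactions (assumed)
calls_per_minute = tx_per_minute / txs_per_2mb_call
print(tx_per_minute, calls_per_minute)        # 300000, 30.0 -> one call every 2 seconds
```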
None of this is an issue with HPL and is stuff for the 221Bravo team to think about… but I’m curious if you’ve got any thoughts on blockchain explorers and how they would integrate with/keep up with HPL. Say there are a number of blockchain explorers all querying hundreds of thousands of transactions on the ledger (split across 2 MB blocks) - would the canister ingress queue be a bottleneck?
I suppose getting 100k tps into a ledger is one thing… but I imagine there could be a requirement to get an equal amount of data back out on the other side - wallets, blockchain explorers, etc. will all likely be spamming calls to HPL. Do these need to be batched as well?
Yes, what goes in must also be able to come out. But that shouldn’t be a problem. If 2 MB per block can come in then 2 MB per block can also come out.
I think that downstream from the ledger there must first come an archive canister. It takes in the stream of all ledger transactions from a fire hose, i.e. in large batches. In fact, it will probably have to be a set of archive canisters because the amount of data is enormous. 10k tps is 0.86 billion transactions per day. That means multiple GB of data every day.
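To make the volume concrete (the per-transaction size is an assumption, taken from the ~2 MB per 10,000 txs estimate earlier in the thread):

```python
# Why a single archive canister won't be enough. bytes_per_tx is an assumption.
tps = 10_000
tx_per_day = tps * 86_400            # 864,000,000 transactions per day (~0.86 billion)
bytes_per_tx = 200                   # assumed average size, from ~2 MB per 10k txs
gb_per_day = tx_per_day * bytes_per_tx / 1e9
print(tx_per_day, gb_per_day)        # ~173 GB/day, so a single canister fills up fast
```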
The archive would be part of the HPL which means it would have exclusive access to the fire hose. From the archive onwards the data is then made public. It is provided to indexers and additional mirrors, etc. From the archive onwards the data has to fan out because no single canister can provide all the services.
With a multi-token ledger you can also build a multi-minter. That’s a single canister that can wrap all ICRC-1 tokens and mint them on HPL. All that is required to bring an existing ICRC-1 token to HPL is a canister call to the multi-minter that takes the canister id of the token’s ledger (the ICRC-1 ledger) as an argument. No need to deploy a new minter every time you want to bridge a new asset to HPL.
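As a hypothetical sketch (the method name `registerToken`, the call shape, and the return value are assumptions, not the actual multi-minter interface), bridging a token would be a single call:

```python
# Hypothetical sketch of bringing an existing ICRC-1 token to HPL via the
# multi-minter. The method name and return value are assumptions.

def bridge_icrc1_token(multi_minter, icrc1_ledger_canister_id: str) -> int:
    """One call: hand the multi-minter the ICRC-1 ledger's canister id and get
    back the HPL asset id of the wrapped token. No new minter is deployed."""
    return multi_minter.call("registerToken", icrc1_ledger_canister_id)

# e.g. asset_id = bridge_icrc1_token(multi_minter, "<icrc1-ledger-canister-id>")
```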
There has been some discussion of latency in the thread above. So we decided to run a test which submits one transaction every 5s to a random aggregator. There is no other load, so we get the pure latency from the consensus and message routing layers of the IC without any effects from congestion.
We can see that the latency, averaged over 10 minutes, starts out somewhere around 7.1 seconds.
Then we activated a feature designed for improving latency. The problem with heartbeats is that they always run at the beginning of an execution round. Hence, from within a heartbeat you can never forward the transactions that come in in the same execution round. You can only forward the ones that came in in the previous execution round. If the heartbeat were to run at the end of an execution round then that would allow us to forward transactions in the same execution round in which they arrived, reducing overall latency by exactly one execution round. The feature that we activated essentially simulates a heartbeat at the end of each execution round. It does so by using self-calls. Of course it only works under small and medium load, when the execution rounds are not full, because otherwise the self-calls could slip into a different execution round.
We can see the effect in the graph. The latency drops to somewhere around 6.1 seconds. So in summary we can say that the HPL has on average 6s latency under small load.
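A toy model of the round accounting (one execution round assumed to be ~1s; this is purely illustrative, not the aggregator's actual code):

```python
# Toy model of why forwarding at the end of an execution round saves exactly
# one round (~1s) of latency under light load.

ROUND_TIME = 1.0   # assumed seconds per execution round

def extra_rounds(forward_at_end_of_round: bool) -> float:
    arrival_round = 0                       # the transfer reaches the aggregator
    if forward_at_end_of_round:
        forward_round = arrival_round       # self-call runs after ingress, same round
    else:
        forward_round = arrival_round + 1   # start-of-round heartbeat only sees it next round
    return (forward_round - arrival_round) * ROUND_TIME

print(extra_rounds(False))  # 1.0 -> part of the ~7.1s average
print(extra_rounds(True))   # 0.0 -> part of the ~6.1s average
```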
It should be noted that all aggregators used in this test are indeed on different subnets than the ledger, as would be the case in a deployment at scale. It is of course possible to deploy one aggregator on the same subnet as the ledger. That does not interfere with incoming batches because they live in separate parts of the block space. In each block there are 2MB reserved for cross-subnet traffic (which contains the batches going to the ledger) and another 2MB are reserved for ingress traffic (which contains the transactions going to the aggregator on the same subnet). So the “local” aggregator would not interfere with the ledger. Since the aggregator is “local” the latency then drops from 6s to about 4s. So there is at least room for some transactions at 4s latency which could be made available to everyone at times when the overall load is low (when one aggregator is enough, i.e. when the whole HPL runs under 400 tps) or which could be made available for a higher fee to users who need it.
This image was taken with an aggregator on the same subnet as the ledger. The average is somewhere around 4.2 seconds:
From the networking team side, we can help with the following.
You were hitting the rate limiter on the boundary nodes. Each boundary node allows only 300 ingress messages per second. There is no reason for such a conservative limit given that MB/s is actually what matters. Looking at your workload, your ingress messages are fairly small. We will increase the limit to 1000 ingress messages per second in the coming days and I will ping the thread when this is done.