You got it fixed?
It looks to be processing calls and errors.
I can do the following:
% dfx canister --network mercury call 23tio-diaaa-aaaal-qjyma-cai get_state
(
  record {
    owner = principal "jpolj-tumqf-4o37l-v5rl7-onvv3-txk2o-vd75c-kprlz-dxwyr-efsgm-jqe";
    max_cycles_per_round = 1_000_000_000 : nat;
    hashes_computed = 505_000_000 : nat;
    solved_challenges = 0 : nat64;
    last_cycles_burned = 1_000_000_000 : nat;
    bob_minter_id = principal "yhz26-biaaa-aaaal-qjtsq-cai";
  },
)
Ingress messages (meaning update calls from e.g. dfx) likely appear to time out because the canister is not scheduled in time. There are so many canisters on the subnet that all want to execute that a canister can easily wait 500+ rounds (~5 minutes) before it gets scheduled again. So if you send an ingress message to the canister, it's likely that the canister is not scheduled before the ingress message expires and is removed from ingress history.
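To put rough numbers on that (the ~0.6 s round time is my assumption, in line with the block rates mentioned further down, and the ~5-minute ingress lifetime is approximate):

# Back-of-the-envelope, not official figures:
seconds_per_round = 0.6      # assumed; roughly matches the ~1.7 blocks/s reported below
rounds_between_slots = 500   # scheduling gap quoted above
wait_minutes = rounds_between_slots * seconds_per_round / 60
print(f"~{wait_minutes:.0f} min between executions")  # ~5 min, about the ingress message lifetime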
By my calculation, 7B cycles/s is 1 trillion cycles every ~143 seconds, which is 1 XDR every ~143 seconds, or around 1 USD per 2 minutes. Is that correct? If so, that makes around 22,000 USD per month. This is still short of deflationary if the maximum output stays static at 7B cycles/s and a 13-node subnet costs 32,500 USD per month in node provider fees, but it is very different from the picture you paint.
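For reference, the same arithmetic spelled out. 1 XDR = 1 trillion cycles is the fixed conversion; the ~1.33 USD/XDR rate is my assumption (it fluctuates, which is why the monthly figure lands a bit above the 22,000 USD estimate):

cycles_per_second = 7e9
seconds_per_xdr = 1e12 / cycles_per_second        # ~143 s to burn 1T cycles = 1 XDR
xdr_per_month = 30 * 24 * 3600 / seconds_per_xdr  # ~18,000 XDR
usd_per_month = xdr_per_month * 1.33              # ~24,000 USD, same ballpark as above
print(round(seconds_per_xdr), round(xdr_per_month), round(usd_per_month))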
@bjoern, what are your thoughts on this? If a subnet can only process 7B cycles per second, it is not even technically possible to cover the operational cost of these nodes… the limit MUST be increased if we really want a sustainable economic model on the IC. I hope I'm understanding this wrong and there's no limit on the cycle burn rate. I know you already said there's no limit, but you also said:
“The ~7B we see right now are probably pretty close to what a single 13-node subnet is currently burning under full computational load”
I would really appreciate your answer on this.
Well we’re at 161B cycles/s on the subnet right now. I wonder how high we can get it!
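Plugging 161B cycles/s into the same back-of-the-envelope conversion as above (same assumed ~1.33 USD/XDR):

usd_per_month = 161e9 / 1e12 * 30 * 24 * 3600 * 1.33
print(round(usd_per_month))  # ~555,000 USD/month, well above the ~32,500 USD node cost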
What do you think? BPS is at 1.76 now, dropped from 2.56 blocks per second.
Well, they've just reduced the notary/block times for subnets, which was a huge leap… and then this computation-hungry app comes out of nowhere to fill up all the available capacity.
I expect it's just a load of random scalability issues that require optimisations, not one thing in particular. It's good, though… nice to see organic load from outside a controlled environment.
Bring on the R&D post mortem!
The cycles burned by execution on a subnet may not be sufficient to cover the full rewards paid to the nodes of that subnet. Let me explain why, in my opinion, there is no need to adapt things right now.
- Execution is only one of the resources that canisters consume and that burn cycles. Others are storage, ingress messages, messages to other canisters, canister creation, and system calls like threshold signing. Some resources, such as query calls, are not accounted for at all at the moment. We need to consider the whole picture, not only one component.
- The protocol has become significantly more efficient, and will continue to do so. The measures I can recall off-hand have increased the execution-related cycle burn of each subnet by at least a factor of 4 since launch. The subnet storage capacity has also more than doubled, and continues to increase.
- The cycles burned and node rewards are currently only a small fraction of the ICP tokenomics; voting rewards are still a more significant part. To make ICP non-inflationary at some point, we thus need quite a bit of growth so that cycle burn can eventually offset voting rewards in addition to node rewards (see the rough sketch after this message).
Summarizing: this is not a major issue right now. Yes, eventually subnets should (more than!) offset the rewards paid to their nodes. But there are still lots of moving parts, such as the efficiency improvements in the protocol, and we think that right now it is more important to focus on improving the protocol as well as on adoption.
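A rough sketch of that break-even condition; every number here is an illustrative placeholder except the two figures quoted earlier in the thread:

# Non-inflationary condition, sketched per subnet. Placeholder numbers only.
monthly_cycle_burn_usd = 24_000       # one subnet at 7B cycles/s (computed earlier)
monthly_node_rewards_usd = 32_500     # 13-node subnet figure quoted earlier
monthly_voting_rewards_usd = 500_000  # placeholder for this subnet's share, NOT an official figure
non_inflationary = monthly_cycle_burn_usd >= monthly_node_rewards_usd + monthly_voting_rewards_usd
print(non_inflationary)  # False today: burn must grow substantially first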