Subnets with heavy compute load: what can you do now & next steps

Hi @Manu,

It was working fine, but now this error has returned. Is yinp6-35cfo-wgcd2-oc4ty-2kqpf-t4dul-rfk33-fsq3r-mfmua-m2ngh-jqe not upgraded yet?

Still experiencing the ingress expiry issue with yinp6… Will the new replica version fix it?

That error message has nothing to do with ingress expiry due to subnet load (which would happen 5 minutes after you submitted it, not immediately). It simply says that you submitted an ingress message with an expiration time in the past. The wall time on the replica that you sent the ingress message to was 15:53:27 UTC, so it was willing to accept ingress messages with an ingress_expiry value anywhere between 15:53:27 UTC and 15:58:57 UTC. The ingress message you submitted, however, had an ingress_expiry of 15:53:00 UTC, i.e. it was already expired by your own declaration. You essentially told the replica “do this for me and give me the response about 30 seconds ago, at the latest”.
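To make that concrete, here is a minimal TypeScript sketch of the acceptance window described above. It is not the actual replica code (that lives in the Rust replica), and the constant names and the 30-second drift allowance are assumptions chosen only to reproduce the 15:53:27 to 15:58:57 window from the example; the timestamps in the demo calls are illustrative.

```ts
// Sketch of the replica-side ingress_expiry check described above.
// Constants are assumptions that reproduce the 5.5-minute window in the example:
// 5 minutes of max ingress TTL plus 30 seconds of permitted clock drift.
const MAX_INGRESS_TTL_MS = 5 * 60 * 1000; // 5 minutes
const PERMITTED_DRIFT_MS = 30 * 1000;     // 30 seconds of allowed clock skew

function isIngressExpiryAcceptable(ingressExpiryMs: number, replicaNowMs: number): boolean {
  const earliest = replicaNowMs;                                    // anything before "now" is already expired
  const latest = replicaNowMs + MAX_INGRESS_TTL_MS + PERMITTED_DRIFT_MS;
  return ingressExpiryMs >= earliest && ingressExpiryMs <= latest;
}

// The failing request from the example: expiry ~27 s in the past -> rejected.
const replicaNow = Date.parse("2024-11-14T15:53:27Z"); // date is illustrative
console.log(isIngressExpiryAcceptable(Date.parse("2024-11-14T15:53:00Z"), replicaNow)); // false
console.log(isIngressExpiryAcceptable(replicaNow + 4 * 60 * 1000, replicaNow));         // true
```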

What would you recommend (using the js agent)? (from the “Ingress expiry issue” thread)

Hi Xalkan,
@free is correct: the expiry set in the request you sent to the replica was too old.
What tools were you using that exhibited this behavior? dfx? A browser with Candid? It may be due to an old agent; we recently fixed some bugs related to this. If you’re using agent-js, check that it’s on 2.1.3.

Hi Yvonne,

It’s a Next.js app using Juno’s core-peer (about to update it to its latest version, which includes agent-js v2.1.3 :crossed_fingers:).

Assuming your machine’s time is synced, the problem should go away once you update to 2.1.3. Let us know if it persists.
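In case the local clock turns out to be the culprit rather than the agent version, here is a minimal TypeScript sketch of letting agent-js correct for clock drift itself. It assumes @dfinity/agent 2.x; syncTime() has been part of HttpAgent for a while, but treat the exact call and the host URL as assumptions to verify against the agent-js docs.

```ts
// Minimal sketch: correct for local clock drift before issuing update calls.
// Assumes @dfinity/agent >= 2.1.3; verify syncTime() against the agent-js docs.
import { HttpAgent } from "@dfinity/agent";

async function createDriftTolerantAgent(): Promise<HttpAgent> {
  const agent = new HttpAgent({ host: "https://icp0.io" }); // host is illustrative
  // Queries the replica's system time and stores the offset, so the
  // ingress_expiry the agent computes is based on the replica's clock
  // rather than the (possibly skewed) local one.
  await agent.syncTime();
  return agent;
}
```

With an up-to-date agent and a synced (or drift-corrected) clock, the expiry the agent attaches should always land inside the acceptance window described earlier in the thread.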

Yes, understood. I meant that “heartbeats and their subsequent update calls” are drowning out the number of ingress messages. Of course, as you say, ingress messages also trigger subsequent update calls, and they all appear together in one metric. Anyway, I would still reiterate my desire to see the number of ingress messages per subnet, meaning only the ingress messages themselves, not their subsequent update calls. That would provide valuable information, and I can calculate everything else from there. I think it’s an absolute must-have in terms of transparency.

That seems like a reasonable request @timo; I’ll pass that feedback on within DFINITY.

Update: this week’s replica versions again improve the situation a little bit, and we are now at a point where all subnets process updates in under 10 seconds, and typically in under 5. With that, I will update the status page to remove the “degraded performance” notice.

DFINITY will keep focusing on ensuring ICP can handle high load well and will proceed with the steps outlined in this forum post.

Can anyone from DFINITY update this page to clarify how much canister operations cost?