Yeah, I also had similar problems when trying to interact with only IPv4 servers.
Another follow-up: https://forum.dfinity.org/t/subnets-with-heavy-compute-load-what-can-you-do-now-next-steps/35762
We also saw this error, when deploying to mainnet using dfx v0.24.0.
Specified ingress_expiry not within expected range: Minimum allowed expiry: 2024-10-05 22:11:19.931428435 UTC, Maximum allowed expiry: 2024-10-05 22:16:49.931428435 UTC, Provided expiry: 2024-10-05 22:11:00 UTC
The error was consistent every time we tried. It immediately went away when downgrading to dfx v0.23.0.
Our canister sits on subnet: e66qm-3cydn-nkf4i-ml4rb-4ro6o-srm5s-x5hwq-hnprz-3meqp-s7vks-5qe
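For anyone debugging this class of error, the arithmetic behind the acceptance window can be sketched roughly as follows. All constants and function names here are my own illustration, not the actual replica or agent-js internals:

```typescript
// Illustrative sketch only: constants and names are assumptions, not the
// actual replica or agent-js internals.
const DELTA_MS = 4 * 60 * 1000;        // expiry the agent adds to "now" (~4 min)
const MAX_EXPIRY_MS = 5 * 60 * 1000;   // assumed maximum expiry a replica accepts
const GRACE_MS = 30 * 1000;            // assumed grace period for clock skew

// Expiry the agent attaches to a request, as nanoseconds since the epoch.
function ingressExpiryNs(nowMs: number): bigint {
  return BigInt(nowMs + DELTA_MS) * 1_000_000n;
}

// The replica only accepts an expiry inside a window around its own clock;
// an expiry below the minimum yields the "not within expected range" error.
function isExpiryAccepted(expiryNs: bigint, replicaNowMs: number): boolean {
  const min = BigInt(replicaNowMs) * 1_000_000n;
  const max = BigInt(replicaNowMs + MAX_EXPIRY_MS + GRACE_MS) * 1_000_000n;
  return expiryNs >= min && expiryNs <= max;
}
```

In the error above, the provided expiry (22:11:00) fell below the minimum (22:11:19), i.e. the expiry the agent sent was already in the replica's past.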
We’re using dfx v0.22.0, but the ingress expiry issue persists — are you having a better experience with v0.23?
For you, please review this:
Can you upgrade your agent and report back on what happens? And can you please remind me which subnet you are on? That would very much help with troubleshooting.
FYI: This is a known issue on the subnets listed here:
That’s not OP’s issue (I had contact via PM). Let’s hope the new replica improves things a bit.
We’re using agent v2, but the ingress expiry issue remains…
And yes, our current subnet is among the failing ones: 3hhby-wmtmw-umt4t-7ieyg-bbiig-xiylg-sblrt-voxgt-bqckd-a75bf-rqe
@jennifertran
The error “Specified ingress_expiry not within expected range” has nothing to do with the subnet load, but with the time the agent uses when submitting requests.
@xalkan
can you give me a call example where you keep seeing this behavior, so I can try to reproduce it?
The agent-js has been patched. As long as version v2.1.1 is not used, there is no issue on that side to my knowledge.
It isn’t easy to reproduce because it does not happen every time. It is kind of random, but I can see in the tyron.io app logs that it keeps happening:
The issue lies in the seconds timeframe; could you use rounded minutes instead?
To try to reproduce this call, you could use the get_account function in the DFINITY Canister Candid UI, e.g. get_account("bc1pce7w7gspc04uvj4ddnfu5k30tpc9h4ekt88tt6ekvt66prdhjt8s90max0", false), or use the UI at tyron.io, connecting your UniSat wallet.
Thanks, Xalkan. I have been able to reproduce this successfully. Right now, I don’t fully understand what is happening there. I’ll try to talk to some more people about it, but not all of them are awake yet, so don’t hold your breath.
News
We found a bug in the agent-js implementation.
Consider a call c with requestid_c and expiry_c. The agent currently keeps using expiry_c when making read_state requests for requestid_c, instead of using a fresh expiry_r representing current time + some DELTA.
We are working on a fix.
On subnets with low load that’s usually no issue, because typically the response for the request has been computed before the difference between those times becomes problematic. On subnets with high load, it can happen that a call c has been received and even started processing (both before the ingress expiry expiry_c), but in the meantime the subnet time exceeds expiry_c, and thus read_state requests must use a higher expiry expiry_r.
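To make the described fix concrete, here is a minimal sketch contrasting the buggy reuse of expiry_c with computing a fresh expiry_r per read_state poll. All names are illustrative, not the actual agent-js code:

```typescript
// Sketch of the bug and the fix described above. Names and constants are
// illustrative assumptions, not the real agent-js implementation.
const DELTA_MS = 4 * 60 * 1000; // assumed expiry delta (~4 minutes)

interface ReadStateRequest {
  requestId: string;
  ingressExpiryNs: bigint; // nanoseconds since the epoch
}

// Buggy behavior: every poll reuses expiry_c captured when the call was made.
function pollWithStaleExpiry(requestId: string, callExpiryNs: bigint): ReadStateRequest {
  return { requestId, ingressExpiryNs: callExpiryNs };
}

// Fixed behavior: each poll computes a fresh expiry_r = current time + DELTA,
// so polling stays valid even when the call was submitted more than DELTA ago.
function pollWithFreshExpiry(requestId: string, nowMs: number): ReadStateRequest {
  return { requestId, ingressExpiryNs: BigInt(nowMs + DELTA_MS) * 1_000_000n };
}
```

On a loaded subnet, five minutes after submission the reused expiry_c already lies in the subnet's past, while the fresh expiry_r does not.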
So my initial reaction that this has nothing to do with load was wrong. This agent behavior would only be seen on highly loaded subnets.
Wait, is that another new additional bug in agent-js, or are you talking about the one that was ‘fixed’ by reverting a change from v2.1.1 and released in v2.1.2 last week?
This is a “new” (it has been present for a long time) bug. It applies when more than 4 minutes (or whatever the expiry was set to) have elapsed since an update was submitted: the read_state polling itself will start to time out, without ever learning whether the update was successful or rejected.
Gotcha. Thanks for the clarification.
Solving it also won’t resolve the general issue of system time being out of sync with subnet time. The specific bug Yvonne just described is scoped to read_state requests and polling, and wouldn’t address initial queries or calls being immediately rejected due to replica drift or system time misconfiguration.
For that, we’ll need a way to get the latest certified subnet time without first agreeing on that time, and then use it to construct the expiry. Preparing to solve this broader problem is where the 2.1.1 bug was introduced.
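A rough sketch of what drift-aware expiry construction could look like, assuming some way to obtain the certified subnet time is available. The helper names and constants here are hypothetical, not an existing agent-js API:

```typescript
// Hypothetical sketch: estimate the offset between the local system clock and
// the subnet's certified time, then fold that offset into the expiry.
// Nothing here is the real agent-js API; it only illustrates the idea.

// Positive result means the local clock is behind the subnet.
function clockOffsetMs(certifiedSubnetTimeNs: bigint, localNowMs: number): number {
  const subnetMs = Number(certifiedSubnetTimeNs / 1_000_000n);
  return subnetMs - localNowMs;
}

// Build the expiry from subnet-adjusted time rather than raw local time.
function driftAwareExpiryNs(localNowMs: number, offsetMs: number): bigint {
  const DELTA_MS = 4 * 60 * 1000; // assumed expiry delta (~4 minutes)
  return BigInt(localNowMs + offsetMs + DELTA_MS) * 1_000_000n;
}
```

With such an offset applied, a machine whose clock is a few seconds behind the subnet would no longer produce an expiry below the replica's minimum.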