Ingress Expiry Issue - Your Input Needed!

Hi everyone :wave:,

Over the past few weeks, there have been several reports on the forum from devs encountering ingress expiry issues, such as the following:

Error: ReplicaTimeError: Specified ingress_expiry not within expected range: Minimum allowed expiry: 2024-09-25 13:57:00.580809774 UTC, Maximum allowed expiry: 2024-09-25 14:02:30.580809774 UTC, Provided expiry: 2024-09-25 13:57:00 UTC
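
For context: a replica only accepts an ingress message whose ingress_expiry falls inside a window anchored at the replica's own clock, which is why a client clock that is slightly off, or an expiry computed too conservatively, gets rejected. A simplified sketch of that check, with the constants inferred from the window in the error above (this is not the actual replica code):

fn ingress_expiry_ok(expiry_ns: u64, replica_time_ns: u64) -> bool {
    // 5-minute TTL plus 30 seconds of permitted drift, matching the
    // 5.5-minute window (maximum minus minimum) in the error message.
    const MAX_INGRESS_TTL_NS: u64 = 5 * 60 * 1_000_000_000;
    const PERMITTED_DRIFT_NS: u64 = 30 * 1_000_000_000;
    let min_allowed = replica_time_ns;
    let max_allowed = replica_time_ns + MAX_INGRESS_TTL_NS + PERMITTED_DRIFT_NS;
    (min_allowed..=max_allowed).contains(&expiry_ns)
}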

While we provided some explanations and planned improvements in various threads yesterday (for example here), we’ve continued to analyze the issue. After further discussions and debugging, we believe we may have found a potential issue introduced in agent-js v2.1.0.

We’re not 100% certain this is the root cause of the problem, but we think there’s a possibility. That’s why we’re requesting your help to test our hypothesis.

If you are not using any of the new features introduced in v2.1.0 or later, could you try downgrading agent-js (and related libraries such as identity, auth-client, etc.) to v2.0.0 and let us know if the issue is resolved?

For example:

npm rm @dfinity/agent @dfinity/principal @dfinity/auth-client
npm i @dfinity/agent@2.0.0 @dfinity/principal@2.0.0 @dfinity/auth-client@2.0.0

Please note that if you are using any ic-js libraries (@dfinity/ledger-icp, etc.), you can force npm to remove and downgrade the agent with the --force flag. While this is not an elegant solution, the ic-js libraries should remain compatible with the downgraded agent.
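For example, mirroring the commands above (a sketch; adjust the package list to what your project actually installs):

npm i @dfinity/agent@2.0.0 @dfinity/principal@2.0.0 @dfinity/auth-client@2.0.0 --force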

Thanks a ton for your help!


Hey, I’m hitting the same issue in Rust with ic-agent = "0.37.1":

"The replica returned an HTTP Error: Http Error: status 400 Bad Request, content type \"text/plain; charset=utf-8\", content: Specified ingress_expiry not within expected range: Minimum allowed expiry: 2024-09-27 13:42:52.887177739 UTC, Maximum allowed expiry: 2024-09-27 13:48:22.887177739 UTC, Provided expiry:        2024-09-27 13:42:50.878267 UTC

I tried to add expire_at, but it did not help…

use time::{Duration, OffsetDateTime};

// Expiry set to only 5 seconds from now.
let now_utc = OffsetDateTime::now_utc();
let five_seconds = Duration::seconds(5);
let time = now_utc + five_seconds;

agent.query(&ledger.id, "get_transactions")
    .expire_at(time)
    .with_arg(...)
    .call_without_verification().await

I am developing a ledger scanner that has to constantly call query_blocks or get_transactions. I noticed that this issue appears roughly every ~20 minutes. My program automatically restarts the broken process, and requests then work fine for another ~20 minutes.
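Roughly, the loop looks like the following sketch (simplified: ledger_id, the empty Candid argument, and the tokio sleep are illustrative placeholders rather than the exact code):

use std::time::Duration as StdDuration;

use candid::Principal;
use ic_agent::Agent;
use time::{Duration, OffsetDateTime};

async fn scan_loop(agent: &Agent, ledger_id: Principal) -> Result<(), Box<dyn std::error::Error>> {
    loop {
        // A fresh expiry is computed for every request (5 seconds, as above).
        let expiry = OffsetDateTime::now_utc() + Duration::seconds(5);
        let _response = agent
            .query(&ledger_id, "get_transactions")
            .expire_at(expiry)
            .with_arg(candid::encode_args(())?) // placeholder argument
            .call_without_verification()
            .await?;
        // ... process the response, then poll again after a short pause.
        tokio::time::sleep(StdDuration::from_secs(1)).await;
    }
}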

Hi @dantol29. Do you mind sharing which subnet you are deployed to while experiencing this issue? Thanks

Hi, I am not calling it from a canister. Is that what you mean?

Using only 5 seconds provides a very tight window for your ingress message to be accepted: in the error above, the provided expiry is about two seconds below the replica’s minimum, so your clock plus the 5-second buffer already lands in the past from the replica’s point of view. I suggest you try 5 minutes instead, like this:

// Give the message a 5-minute window instead of 5 seconds.
let now_utc = OffsetDateTime::now_utc();
let five_minutes = Duration::seconds(5 * 60);
let time = now_utc + five_minutes;

agent.query(&ledger.id, "get_transactions")
    .expire_at(time)
    .with_arg(...)
    .call_without_verification().await

Changed it to 3 minutes. At the moment it has been working for 12 hours without any ingress problems. Gonna wait till it hits 24 hours and let you know :grinning:


Now, after 24 hours, I can confirm that this solution works:

// A 3-minute expiry window proved stable in practice.
let now_utc = OffsetDateTime::now_utc();
let three_minutes = Duration::seconds(3 * 60);
let time = now_utc + three_minutes;

agent.query(&ledger.id, "get_transactions")
    .expire_at(time)
    .with_arg(...)
    .call_without_verification().await

Heads-up: a new version of agent-js (v2.1.2) has been released. This version reverts a feature (see the release notes) and should resolve the potential issue. There’s no need to roll back to v2.0.0 anymore; you can try using the latest version. Let us know if it works for you.
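For example, assuming the companion packages are released in lockstep with the agent (adjust to the packages you actually use):

npm i @dfinity/agent@2.1.2 @dfinity/principal@2.1.2 @dfinity/auth-client@2.1.2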


Follow-up on kind of the same topic: https://forum.dfinity.org/t/subnets-with-heavy-compute-load-what-can-you-do-now-next-steps/35762?u=peterparker

It’s actually two different error messages if I get it right but, given that multiple threads are open, I thought about linking it here.
