Anyone having the same issue? I'm getting "Failed to fetch response" on all canisters.
Same here, NE USA
Really odd; the burn rate has been spiking since last night.
Hello,
Please provide the exact URL you are having problems with and your approximate geographical location.
I tested a few canisters (identity, nns, oc) from the PST time zone, and they are working fine.
-F
SE United States here. I tested about 5 different canisters, all with the same issue. It also looks like London is having issues too.
Same errors in the LATAM region, from Chile.
Thanks for the reports folks, we are looking into it.
@Chloros88 @oss @nicopoggi could you please try now?
Seems some calls go through, but we're also getting some kind of rate limiting or something causing a "too many requests" error.
Related?
We identified and rolled out a fix for an issue that affected access from certain geographies, namely:
- LATAM
- Western Europe
- Southeast US

We checked access from the above locations and confirmed both the issue and its fix.
@nicopoggi @oss @Chloros88
Unlikely. Fleek hosts many other platforms. The issue we found was returning HTTP 500 "Internal Server Error"; Fleek's report is about HTTP 429.
Will keep watch.
EDIT: This might be "another" issue faced by @nicopoggi and Fleek.
Same here. I'm in LATAM right now and having problems.
Some calls go through, but a lot of them respond with a "Too many requests" error.
I'm still affected by the issue, specifically on a payment-processing call when trying to submit a tx through Plug.
Can you post your output for the command below? It tells you which boundary node you are connecting to.
# nslookup ic0.app
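For reference, the output looks roughly like this (the addresses below are RFC 5737 documentation placeholders, not real boundary-node IPs). The Address under "Non-authoritative answer" is the boundary node your DNS is routing you to:

Server:		203.0.113.53
Address:	203.0.113.53#53

Non-authoritative answer:
Name:	ic0.app
Address: 192.0.2.10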
Are plug and fleek related in some way?
Some calls to my canister work perfectly, but on the ones expecting payment of some sort I get the error.
As far as I know, both Plug and Fleek are developed by Psychedelic.
This is the output from running the command.
I also get the error when accessing the icdevs site.
@torates we investigated the "Too many requests" HTTP 429 errors for the nodes you mentioned.
No issue popped up at our end.
Let's wait for Fleek to resolve/update the status at their end. We will coordinate with Fleek to chase this up the stack. Thanks for your help.
The IC itself was actually performing as expected and was not down. However, one of the boundary nodes that act as gateways for the IC (in US-east) experienced intermittent issues that prevented failover. As a result, users in that geographic area who were DNS-routed to the impacted boundary node had their requests dropped by that BN.
Advanced users in the impacted area were technically able to reach the IC by talking to other boundary nodes. The method of targeting specific boundary nodes is a bit opaque, so we will be writing up simple steps on how to do this.
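Until that write-up is ready, here is a rough sketch of the idea, assuming you can obtain the IP address of a healthy boundary node (192.0.2.10 below is an RFC 5737 placeholder, not a real BN address):

# Placeholder IP: substitute the address of a healthy boundary node.
# --resolve pins ic0.app to that IP, bypassing your local DNS answer;
# /api/v2/status returns the IC status (CBOR) if the BN is reachable.
curl --resolve ic0.app:443:192.0.2.10 https://ic0.app/api/v2/status

An /etc/hosts entry mapping ic0.app to that IP achieves the same thing system-wide for tools that don't support --resolve.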
Preventing/mitigating issues like this one is why we are trying to get boundary nodes decentralized as soon as possible.
Thanks a lot for the quick response, to @faraz.shaikh as well.
Would certainly love to learn more about targeting specific BNs.
Cheers!
Why do I need to be an “advanced” user and consciously target/talk to a different boundary node? Why isn’t that kind of regional/node DNS fallback abstracted away and built into the IC’s DNS routing?