We’ve spun up a handful of canisters to do exactly this in the past with other services (CycleOps).
On a different app, icptopup.com, we currently limit each user to 100 canister top-ups per action.
Our canister top-up API caps concurrent top-ups at 400 so that calls don’t fail against the output queue limit (500), and we’re just now starting to hit the 200–300 concurrent call range.
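For context, the cap is essentially just a counter guard around outgoing calls, roughly along these lines (a simplified Rust sketch; the constant, names, and structure are illustrative, not our actual implementation):

```rust
use std::cell::Cell;

// Illustrative concurrency guard; the constant and names are hypothetical.
const MAX_CONCURRENT_TOPUPS: u32 = 400; // stay well under the ~500 output queue limit

thread_local! {
    static IN_FLIGHT: Cell<u32> = Cell::new(0);
}

/// Reserve a slot before issuing an inter-canister top-up call.
fn try_reserve_slot() -> Result<(), String> {
    IN_FLIGHT.with(|n| {
        if n.get() >= MAX_CONCURRENT_TOPUPS {
            Err("too many concurrent top-ups; please retry shortly".to_string())
        } else {
            n.set(n.get() + 1);
            Ok(())
        }
    })
}

/// Release the slot once the call resolves (success or failure).
fn release_slot() {
    IN_FLIGHT.with(|n| n.set(n.get().saturating_sub(1)));
}
```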
We could potentially spin up additional canisters to do this same work, but that would just add latency for the end user and wouldn’t change the load on the canister being targeted on the other side of the equation.
Alternatively, we could implement back pressure inside our canister and hold requests in a queue for longer, periodically checking back before enqueuing them. But this, again, would add latency for the end user.
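A back-pressure version would look something like the following (again a hypothetical Rust sketch, not our actual code): requests go into a queue, and a periodic task drains them only when there is headroom under the concurrency cap.

```rust
use std::cell::{Cell, RefCell};
use std::collections::VecDeque;

// Illustrative back-pressure queue; types and names are hypothetical.
struct TopUpRequest {
    target_canister: String, // canister principal as text, just for the sketch
    cycles: u128,
}

const MAX_CONCURRENT_TOPUPS: u32 = 400;

thread_local! {
    static PENDING: RefCell<VecDeque<TopUpRequest>> = RefCell::new(VecDeque::new());
    static IN_FLIGHT: Cell<u32> = Cell::new(0);
}

/// Instead of rejecting requests at the cap, hold them in a queue.
fn submit(req: TopUpRequest) {
    PENDING.with(|q| q.borrow_mut().push_back(req));
}

/// Run periodically (e.g. from a canister timer): drain as many queued
/// requests as the remaining concurrency budget allows.
fn drain_queue() {
    let budget = IN_FLIGHT.with(|n| MAX_CONCURRENT_TOPUPS.saturating_sub(n.get()));
    for _ in 0..budget {
        let Some(req) = PENDING.with(|q| q.borrow_mut().pop_front()) else {
            break;
        };
        IN_FLIGHT.with(|n| n.set(n.get() + 1));
        // ... issue the inter-canister top-up call for `req` here and
        // decrement IN_FLIGHT when it resolves.
        let _ = req;
    }
}
```

The latency cost is the drain interval plus whatever time requests spend queued behind the cap, which is exactly the extra delay we’d rather not pass on to users.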
However, especially with the recent performance improvements enabling the new scalable messaging model, a rate limit increase would be a much simpler solution. Is there any chance of an incremental bump in the queue size to 750 or 1k?