Currently the IC does not charge canisters for executed queries simply because it was not implemented at Genesis due to technical challenges. We would like to start exploring query charging in order to ensure that the IC charges for all consumed resources fairly.
We are aware that introducing query charging may have a large impact on dapps with heavy query traffic. We would like to proceed carefully to avoid breaking such dapps and the first step is to ask the community for feedback.
1. Status Quo
Currently, the IC executes more query messages than update messages. However, canisters pay only for updates and do not pay for queries. This contradicts the principle of fairly charging for consumed resources and introduces imbalances into the economic model of the IC. Effectively, canisters with heavy update traffic have to carry the cost of canisters with heavy query traffic. It also creates the wrong incentives for canister developers, because they don’t see the cost of slow queries. Since such imbalances are not sustainable in the long run, we would like to start exploring ways of fixing this issue.
The reason why the IC does not have query charging currently is purely technical: it was not implemented at Genesis due to technical challenges. Queries are executed in a non-replicated fashion by a single node machine. This means that in order to charge a canister for executed queries, all nodes would need to deterministically agree on the amount of cycles to charge, which is a difficult technical problem.
2. What we are asking the community
First of all, we are curious to learn what you think about the idea of introducing query charges in general. Do you agree that it is a bug in the economic model that needs to be fixed, or do you think that it is better for the IC to keep the status quo? Note that since the status quo is not economically sustainable, keeping it might make it necessary to increase fees elsewhere, for example the fees for update calls.
Introducing query charging may drain some canisters that didn’t account for queries in their cycle burn rate. One of the goals of this post is to find ways to minimize that impact. One idea would be to introduce query charging gradually over several months, such that the fees for queries increase from 0% to 100% of their target value.
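To make the gradual rollout idea concrete, here is a minimal sketch of how a linearly ramped query fee could be computed. The function names, the ramp length, and the linear schedule are illustrative assumptions, not part of any announced IC design.

```rust
// Illustrative sketch only: a linear fee ramp from 0% to 100% over a
// hypothetical rollout period. Names and the schedule are assumptions.

/// Fraction of the full query fee charged `months_elapsed` months into a
/// `ramp_months`-long rollout, rising linearly from 0.0 to 1.0 and then
/// staying at 1.0.
fn rollout_fraction(months_elapsed: u32, ramp_months: u32) -> f64 {
    if ramp_months == 0 {
        return 1.0;
    }
    (months_elapsed as f64 / ramp_months as f64).min(1.0)
}

/// Effective cycles charged for a query, given the full fee and how far
/// along the rollout is.
fn effective_query_fee(full_fee_cycles: u64, months_elapsed: u32, ramp_months: u32) -> u64 {
    (full_fee_cycles as f64 * rollout_fraction(months_elapsed, ramp_months)) as u64
}
```

With a hypothetical 6-month ramp, a canister would pay nothing at the start, half the full fee after 3 months, and the full fee from month 6 onward, giving developers time to adjust their cycle top-up strategy.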
If you have more such ideas, please let us know. We would also like to hear from you if you have concerns specific to your canister.
I personally am completely fine with query charges instead of other fee increases, since as I see it a new fee is always easier to justify than increasing existing ones people have gotten used to. Traffic is traffic, after all, and node resource usage should be divided equally. This would be good for existing canisters.
It would also be good to do this before May’s Bootcamp, since it will be crucial for new developers to learn about it.
Hi @xpung, we are aware of the issues with cycle drain attacks and are coming up with options to mitigate them.
The key difference between query calls and update calls, of course, is that query calls are stateless. So even basic rate limiting of the number of calls isn’t possible to implement in the canister itself (as no counter value can be incremented). One option might be for the IC itself to offer a mechanism for canister controllers to configure a per-machine, per-canister rate limiter.
@skilesare - that’s also why inspect message is not terribly useful. What logic would be in that inspect message if the query calls we want to protect cannot persist any state?
Hi @rckprtr - such statistics would definitely be very helpful. Unfortunately, making them available externally to canister developers is also not easy: the IC currently doesn’t have a mechanism to make such per-node statistics available to the controller.
One option might be to roll out the feature in “shadow mode”, where accounting is done but no cycles are charged. With that, we could at least aggregate per-node statistics into a single value that is deterministic on each node. That would allow some insight into the actual consumption.
This is also something we are collecting options for and we would like to address.
I would disagree. At the moment nodes are paid by inflation, so essentially all ICP token holders are bearing the cost (of inflation). It is not the (developers of) canisters. In fact, we don’t even know if update calls are paid sufficiently either.
Do we have any candidate solutions to this problem?
I’m interested in hearing from NFT marketplace operators like @bob11. I know they’ve uploaded many GBs worth of assets to the network and I’m not sure if they have the infrastructure in place to keep all of these asset canisters topped up.
In the designs we have envisioned so far, we have paid attention to ease of use for programmers. Previously, for example, we have considered approaches where query charging had to be configured by canister developers, but found them too complicated to use. Instead, we think it is desirable that no extra configuration is necessary from the perspective of developers.
There have already been cases where long-running queries by some canisters affected other canisters on the same subnetwork. The reason is that the number of query execution threads is limited, so we would like to avoid wasteful computation in query calls in order to provide fairness to other developers on the same subnetwork.
Given the long incremental roll-out, even in the best case we expect it to take at least 6 months to roll out this feature, which is why we have started thinking about it now.
It seems very risky to operate a canister without a mechanism to top up its cycle balance regularly. Of course, with query charging, there are more events that consume cycles. But even without it, the cycle balance will decrease, e.g. due to storage costs. Am I missing something?
Oh, no, I don’t think you’re missing anything. I was just curious to hear Bob’s take on the matter. I’m not familiar with how Toniq manages asset canisters. All of our (poked) NFT storage canisters are monitored and topped up by our heartbeat canister.
Inspect message would give you a place to deploy anti-DoS code and reject without cost (or maybe just at a lower cost). Blocking a user abusing a heavy query based on certain criteria would be useful. I don’t think we have access to much besides the principal (which can be regenerated), but perhaps we could give more info to this function? IP, region, etc.?
I also think it may be useful to allow only a select few users to run a specific query. E.g. you could guard something like get_very_expensive_but_detailed_stats with an is_admin guard function so people can’t use it as an amplified DoS vector, even if it is only a query.
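A guard like that is straightforward because it needs no state between calls. Here is a minimal sketch of the idea; the admin list, the return type, and the stats function body are hypothetical, and principals are again modeled as strings rather than the IC principal type.

```rust
// Hypothetical sketch of an is_admin guard on an expensive query.
// The admin principals and the stats payload are made-up placeholders.

const ADMINS: &[&str] = &["admin-principal-1", "admin-principal-2"];

fn is_admin(caller: &str) -> bool {
    ADMINS.contains(&caller)
}

/// The guarded query: admins get the stats, everyone else gets an error,
/// so anonymous callers cannot use it as an amplified DoS vector.
fn get_very_expensive_but_detailed_stats(caller: &str) -> Result<String, String> {
    if !is_admin(caller) {
        return Err("unauthorized".to_string());
    }
    // ... the expensive aggregation would go here ...
    Ok("detailed stats".to_string())
}
```

Note that this is exactly the kind of stateless binary filter a query can support: it decides purely from the caller's identity, with no counters carried between calls.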
Yes, similarly to what @Severin and @skilesare have said, we could perhaps provide a per-node rate limiter where canister controllers could rate limit certain principals (or the anonymous user) to a certain maximum query rate for each function of the canister.
That seems much easier and more plausible than an inspect_message for query calls, where I am not sure what useful logic you could build without carrying state over from one call to another. You could basically only implement binary filters, where you always allow or always reject calls based on user, IP, or geolocation. What you could not do is count how often something happened before, so you can’t even build a rate limiter in canister code yourself.
The last time I looked at it (I think I was chatting with Manu or Jan about it), there appeared to be 2x as many queries as updates for us (I looked at a few subnets where the majority of canisters are Entrepot). And query calls were going to be about half as expensive as updates (rough estimate), which would mean charging for queries would result in roughly a 2x increase in our cycles cost.
We’re also still waiting on a protocol upgrade to further reduce cycles consumption (composite queries, a.k.a. inter-canister query calls within a subnet). And then we are working to reduce cycles consumption on our end with strategic timer setups.
My other concern (as others mentioned) is rate limiting. We have 500+ NFT canisters at this point, with many ICP services hitting these NFT canisters to check for NFT ownership, pull assets, or pull transaction history. Some of these ICP dApps hit our canisters every second (or more).
Happy to continue the discussion of course, and eventually we know this will get turned on.