Evaluating Compute Pricing in Response to Increased Demand on the Internet Computer Protocol

I might not be as technically competent as some of the others here. My question is: what happened to the infinitely scalable claims made by Dom, and why are we facing this issue?

Is it infinitely scalable with terms and conditions applied?

"Infinitely scalable" refers to the IC being able to scale by adding arbitrarily many subnets, as opposed to the average blockchain, which is a single virtual machine. In the same way that the internet is infinitely scalable, as opposed to my laptop.

It does not mean that the current 37 subnets (hosted by a few hundred machines) can handle arbitrarily high load. They are 37 virtual machines, so they can handle however much load 37 virtual machines can.


Thank you for sharing the detailed explanation. This requires some time to read, absorb, and evaluate. Since we're a multi-canister DApp handling large data uploads like rich media, we'll continue utilizing multiple canisters as we scale. I need to assess the impact this fee increase will have on our growth with a large user base, but at first glance it seems like a significant hike in fees.

Good point… but I'm thinking about some items here where the payload may be very small… maybe even just a "yo look over here" type message with a very small instruction.

Again, I agree with the premise. But again, I will point out that, given the current cycle fees, I don't think a subnet can break even on costs.

Of course, we want everything to work out and for the system to be self-sustaining. After sleeping on it, I think some kind of software solution exists here at the base layer, where canister authors can have the system query the current cost graph and operate off of that… this limits adding new categories to the cost graph, but hopefully breaks less.


Since we are trying to solve the problem more permanently:
Screenshot of the docs (I guess a bit outdated - the 260k base fee): Paying for resources in cycles | Internet Computer


What happens to the base fee of ingress messages? Does it move to 5M as well?
If the inter-canister message base fee stays above the ingress base fee, then someone can provide cheaper timers & heartbeats off-chain that wake up canisters, flooding the boundary nodes.


I really like this thought!

Most of these proposed changes make sense, especially considering that these cost increases are targeted at actions that cause the largest latency & scalability hurdles on a per-subnet basis.

Here are my impressions of the side effects (intended or not) of each of the proposed increases, as well as a few additional suggestions/areas targeted at subnet scalability.

Increase the message base fee from 590K to 5M cycles.

Side effects:

  • Makes event-based inter-canister messaging systems more expensive
  • Incentivizes sending fewer, larger payloads
  • Pushes developers to centralize logic into a single canister (fewer microservices/less parent ↔ child communication)
  • Canisters will move to debounce/batch calls more (batch ICRC-1 & 2 endpoints would be nice-to-haves)

Edit: I see now that in inter-canister calls the calling canister pays the fees
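To put rough numbers on the batching incentive, here's a back-of-the-envelope sketch in Rust. The base fees are the ones quoted in this thread; the call volume and batch size are made up, and per-byte and instruction fees are deliberately ignored.

```rust
// Back-of-the-envelope: how the proposed base fee changes the incentive to batch.
// Only the fixed per-message base fee is modelled; payload and instruction fees
// are ignored, and the call volume / batch size are invented for illustration.
fn main() {
    const OLD_BASE_FEE: u64 = 590_000;   // cycles per message execution today
    const NEW_BASE_FEE: u64 = 5_000_000; // proposed cycles per message execution
    let calls_per_day: u64 = 100_000;    // e.g. an event hub fanning out small updates
    let batch_size: u64 = 50;            // events folded into one call

    let old_unbatched = calls_per_day * OLD_BASE_FEE;
    let new_unbatched = calls_per_day * NEW_BASE_FEE;
    let new_batched = (calls_per_day / batch_size) * NEW_BASE_FEE;

    println!("old, unbatched:   {} B cycles/day", old_unbatched / 1_000_000_000); // 59
    println!("new, unbatched:   {} B cycles/day", new_unbatched / 1_000_000_000); // 500
    println!("new, batched x50: {} B cycles/day", new_batched / 1_000_000_000);   // 10
}
```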

Increase the instruction fee from 0.4 to 1 cycle.

Side effects:

  • Makes computationally heavy apps (AI initiatives, etc.) ~2X more expensive to run.
  • Makes heartbeat significantly more expensive & incentivizes timer usage
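For anyone weighing that migration, this is roughly what the heartbeat-to-timer swap looks like in Rust, as a sketch that assumes the ic-cdk and ic-cdk-timers crates: a heartbeat executes (and pays the base fee) every round, while a periodic timer only executes at the interval you ask for.

```rust
use std::time::Duration;

// Before: a heartbeat is scheduled every round, so it pays the (now higher)
// message execution base fee every round, even when there is nothing to do.
#[ic_cdk::heartbeat]
fn heartbeat() {
    do_periodic_work();
}

// After: a periodic timer only fires at the chosen interval, so the base fee is
// paid once per interval instead of once per round.
#[ic_cdk::init]
fn init() {
    let _id = ic_cdk_timers::set_timer_interval(Duration::from_secs(3600), || {
        do_periodic_work(); // same work, once an hour instead of every round
    });
}

fn do_periodic_work() {
    // placeholder for whatever the canister does periodically
}
```

(In practice you would keep only one of the two, and remember that timers are not persisted across upgrades, so they need to be registered again in post_upgrade.)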

Increase the canister installation fee from 0.1 to 0.5 trillion cycles.

Side effects:

  • Developers build more single/few-canister architectures instead of MMC architectures that dynamically spawn tens of thousands of canisters.
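For scale, here is a quick sketch of what that delta means for a "one canister per user" design; the fee numbers come from this thread, the user count is invented, and since 1T cycles is pegged to 1 XDR the USD figure depends on the XDR rate.

```rust
// Canister creation cost for a "one canister per user" design, before and after
// the proposed change. Only the creation fee is counted; install instructions
// and ongoing storage/compute are ignored.
fn main() {
    const OLD_CREATE_FEE: u64 = 100_000_000_000; // 0.1T cycles per canister
    const NEW_CREATE_FEE: u64 = 500_000_000_000; // 0.5T cycles per canister (proposed)
    const T: u64 = 1_000_000_000_000;            // one trillion cycles = 1 XDR
    let users: u64 = 10_000;                     // made-up user count

    println!("old: {} T cycles just to create the canisters", users * OLD_CREATE_FEE / T); // 1_000
    println!("new: {} T cycles just to create the canisters", users * NEW_CREATE_FEE / T); // 5_000
}
```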

Additional ideas to consider:

  • Sandbox usage fee - In last week's performance & scalability session, the team went over the improvements to the scheduler, as well as the associated sandbox memory limits, with plans to support 10-15k canisters in sandboxes at a time. It might make sense to design a fee for operations that incur sandbox overhead (activation and/or eviction). These types of fees would encourage usage of a smaller number of canisters, rather than frequent activation & eviction of tens of thousands of canisters. The message & instruction fee increases then already ensure that a canister pays a fair amount for compute while it is activated and occupies sandbox space.

  • Raising the idle canister fee - as Bjorn mentioned above, subnets with 90k+ canisters have a significantly lower finalization rate than other subnets. However, with performance cliffs at around 100k canisters, even with the canister creation fee bump it takes ~50k USD for an attacker to fill up a new subnet that an important canister (ICPSwap, etc.) is on with fresh canisters that do essentially nothing. Many large project teams have valuations in the millions, whereas these spin-up costs are just a one-time fee. Most subnets have held at least 20k canisters since September, so the one-time payment to get a subnet to 100k canisters drops to ~40k USD. Raising the idle canister fee means the attacker would need to keep paying the protocol to keep all of those canisters in operation.
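To make the arithmetic behind those USD figures explicit, here is the rough calculation, assuming the proposed 0.5T creation fee and approximating 1T cycles ≈ 1 XDR ≈ 1 USD (and ignoring the cycles the canisters would also need to hold):

```rust
// Rough cost for an attacker to push a subnet to ~100k canisters by creating
// empty canisters, using the proposed 0.5T creation fee. USD_PER_T is a crude
// 1T cycles ~ 1 XDR ~ 1 USD approximation; plug in the real XDR rate as needed.
fn main() {
    const CREATE_FEE_T: f64 = 0.5; // trillion cycles per canister creation (proposed)
    const USD_PER_T: f64 = 1.0;    // approximation, see comment above

    let fresh_subnet = 100_000.0 * CREATE_FEE_T * USD_PER_T;                // ~50_000 USD
    let typical_subnet = (100_000.0 - 20_000.0) * CREATE_FEE_T * USD_PER_T; // ~40_000 USD

    println!("fresh subnet (0 -> 100k canisters):     ~{fresh_subnet} USD");
    println!("typical subnet (20k -> 100k canisters): ~{typical_subnet} USD");
}
```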


The current proposal does not suggest changes to the pricing for ingress messages (user-to-canister) at 1.2M cycles or Xnet messages (canister-to-canister) at 0.26M cycles. Please note that the fees for ingress or Xnet messages are cumulative with fees for update message execution. For a detailed example, see here.
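Roughly, combining those unchanged transmission fees with the proposed execution fees, the all-in cost of a single update message looks like the sketch below (the instruction count is made up; per-byte fees and the later reply execution are omitted for brevity):

```rust
// Approximate all-in cost of one update message under the proposed fees,
// combining the (unchanged) transmission fee with the execution fees.
// Per-byte fees and the later reply execution are left out for brevity.
fn main() {
    const INGRESS_FEE: u64 = 1_200_000;       // user -> canister transmission (unchanged)
    const XNET_FEE: u64 = 260_000;            // canister -> canister transmission (unchanged)
    const NEW_BASE_EXEC_FEE: u64 = 5_000_000; // proposed per-message execution base fee
    const NEW_INSTR_FEE: u64 = 1;             // proposed cycles per executed instruction
    let instructions: u64 = 2_000_000;        // made-up workload for the handler

    let ingress_update = INGRESS_FEE + NEW_BASE_EXEC_FEE + instructions * NEW_INSTR_FEE;
    let xnet_update = XNET_FEE + NEW_BASE_EXEC_FEE + instructions * NEW_INSTR_FEE;

    println!("ingress-triggered update: ~{} cycles", ingress_update); // 8_200_000
    println!("xnet-triggered update:    ~{} cycles", xnet_update);    // 7_260_000
}
```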


This would essentially be congestion pricing, i.e. when subnet load increases, costs increase. We've so far avoided this (with the arguable exception of storage reservation fees when close to the subnet size limit) because it makes it harder for canister controllers to predict running costs. And, for as long as there is no reasonable canister migration path, it is also unfair to canisters that happen to find themselves on busy subnets (when a similar canister with similar load on the next subnet over gets charged a fraction of what your canister is charged).

Not saying it's a bad idea (far from it), just that we've stayed away from it and we will likely keep staying away until there's a good argument to be made that (at least to some extent) it's up to you to avoid these congestion fees.

IMHO, this is to a large extent a limitation of the implementation. If we were to only schedule, certify and charge active canisters (with idle ones merely along for the ride as busy storage and a (mostly) immutable part of the certified state), then performance would, to a very large extent, scale directly with active canisters. At the very least, we should make a decent effort in that direction before we start penalizing idle canisters.


In this example, it looks like both canisters, A & B (caller & callee), are charged the base message execution fee (590K), with the reply update execution message fee being equivalent to the base message fee.

Whereas this part of the docs suggests that only the caller pays the fees.

I wasn't aware of a reply update message execution fee. Is that considered a message base fee that would also be increased in this scenario?

In general, I'm against base update message execution fees (or them being raised) for the callee canister. Especially for central hub/service canisters like a price feed oracle, those services are more economically viable if the base message/reply update execution fees are paid for by the caller/calling canister.

Yes, correct: both canisters A & B are charged the base message fee in the example. In general, whenever a canister executes an update message, it gets charged the update message execution fee.

The linked documentation section specifies "In canister-to-canister messages, the sending canister pays the message transmission costs.", which is correct and not a contradiction of the above. That comment only specifies the handling of the message transmission cost from one canister to another, not the message execution.

When the sending canister receives a reply, it must execute the reply, which incurs charges for an update message execution, including the base fee.
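Putting the current numbers on a single A → B call and its reply (a sketch using the figures quoted in this thread; instruction and per-byte charges are omitted, and how the reply's transmission is charged is not modelled):

```rust
// Who pays what for one A -> B call and its reply, using the current fees quoted
// in this thread: 590K execution base fee, 260K xnet transmission fee.
// Instruction and per-byte fees are omitted, and the charging of the reply's
// transmission is not modelled here.
fn main() {
    const BASE_EXEC_FEE: u64 = 590_000;
    const XNET_FEE: u64 = 260_000;

    // A pays the transmission fee for the request it sends, and later executes
    // B's reply, which is an update execution with its own base fee.
    let paid_by_a = XNET_FEE + BASE_EXEC_FEE;
    // B executes the incoming request, which is also an update execution.
    let paid_by_b = BASE_EXEC_FEE;

    println!("A pays ~{} cycles", paid_by_a); // 850_000
    println!("B pays ~{} cycles", paid_by_b); // 590_000
}
```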

There's currently no way for the caller canister in a canister-to-canister flow to reject incoming calls in a way that avoids the message fee.

Have you considered raising the base fee as mentioned for the calling canister, but dropping the reply update message execution fee completely, resulting in a 10X - 1X = 9X net cost increase?

Otherwise this hinders third-party canister service providers/aggregators from spinning up, and canister-to-canister calls within the same app receive 2X the effect of the messaging cost increases.

The message fee is charged because every message (whether it's a request or a response) is a separate transaction and requires a non-trivial amount of setup and teardown (including making a copy of the canister state to restore to in case the message traps). The whole point of the message execution fee is to pay for all this setup and teardown work.

It's unfair to charge the same amount for a request that terminates in the same round without making any downstream calls as for a request that makes hundreds of downstream calls and handles hundreds of responses, each as a separate transaction. And if you instead simply include the response execution fee in the canister call fee, then you haven't really changed anything.

I'm not suggesting that A pays for everything downstream that B does, just the direct base message fee of the callee.

Take the following two scenarios to demonstrate how base fees could be handled.

  1. Canister A calls Canister B, B terminates in the same round and returns.

A gets charged the message base fee of both A and B; B is not charged a base fee.

  2. Canister A calls Canister B; during B's execution it calls Canisters C and D, then returns.

A gets charged the message base fee of A and B; B gets charged the message base fee of C and D. C and D are not charged a message base fee.
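To make the proposed attribution concrete, here is a toy model in Rust (not how the IC charges today): a canister's base fee is paid by its direct caller, so each canister is charged one base fee per direct callee, and only the root of the call tree pays its own.

```rust
// Toy model of the attribution proposed above. This is purely illustrative and
// not how the protocol charges today; the base fee value is the proposed 5M.
const BASE_FEE: u64 = 5_000_000;

struct Call {
    canister: &'static str,
    callees: Vec<Call>,
}

// Walk the call tree and record how many cycles each canister is charged:
// its own base fee only if it is the root, plus one base fee per direct callee.
fn charges(call: &Call, is_root: bool, out: &mut Vec<(&'static str, u64)>) {
    let own = if is_root { BASE_FEE } else { 0 };
    let for_callees = BASE_FEE * call.callees.len() as u64;
    out.push((call.canister, own + for_callees));
    for callee in &call.callees {
        charges(callee, false, out);
    }
}

fn main() {
    // Scenario 2: A calls B; B calls C and D.
    let tree = Call {
        canister: "A",
        callees: vec![Call {
            canister: "B",
            callees: vec![
                Call { canister: "C", callees: vec![] },
                Call { canister: "D", callees: vec![] },
            ],
        }],
    };
    let mut out = Vec::new();
    charges(&tree, true, &mut out);
    for (name, fee) in out {
        println!("{name} pays {fee} cycles"); // A: 10M, B: 10M, C: 0, D: 0
    }
}
```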

Is this possible? Would it be difficult to implement and what might the drawbacks be?

If there are flaws, or this is not the desired pricing approach, then I could definitely see a model moving forward with the new scalable messaging model where more key third-party service canisters accept cycles as a payment mechanism for calling an API.

Are there any additional overheads, from an accounting perspective, if more canisters adopt the approach of attaching cycles to messages, say where hundreds to thousands of messages per second with cycles attached are sent?

OK, I see what you're saying. I don't see a technical reason why the caller couldn't pay the message execution fee of the request they just sent, and have everyone pay the message execution fees for incoming responses, heartbeats and timers. It also makes some sort of sense. Except for a bit of extra complication (instead of just charging for every message execution, we would have to check what kind of message this is first); and canisters would still pay the message execution fee for ingress messages, so there would be a significant cost difference between handling ingress messages and handling canister requests.

As for attaching cycles to calls, there is no overhead associated with it. It's a number (usually zero) that is subtracted on one side and added on the other. We were just discussing cycles and best-effort messages yesterday: if a best-effort message times out "on the way" (i.e. not in the originator's output or input queues), then the cycles will be lost. The conclusion so far is that this is acceptable for small amounts; and for larger amounts we need to think of a protocol (involving either guaranteed messages or some sort of intermediary) to be used for guaranteed delivery of larger amounts of cycles.
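For reference, attaching and accepting cycles from Rust looks roughly like the sketch below, assuming an ic-cdk version that exposes call_with_payment128 and msg_cycles_accept128; the canister names, method, and amounts are made up.

```rust
use candid::Principal;
use ic_cdk::api::call::{call_with_payment128, msg_cycles_accept128};

// Caller canister: attach cycles to a canister-to-canister call. The attached
// amount is deducted here and credited to the callee when it accepts them.
#[ic_cdk::update]
async fn pay_for_quote(oracle: Principal) -> Result<u64, String> {
    let payment: u128 = 10_000_000_000; // made-up price for the service
    let (price,): (u64,) =
        call_with_payment128(oracle, "get_price", ("ICP/USD",), payment)
            .await
            .map_err(|(code, msg)| format!("call rejected: {code:?} {msg}"))?;
    Ok(price)
}

// Callee canister (e.g. the oracle): accept the attached cycles before doing
// the work; anything not accepted is refunded with the reply.
#[ic_cdk::update]
fn get_price(_pair: String) -> u64 {
    let required: u128 = 10_000_000_000;
    if msg_cycles_accept128(required) < required {
        ic_cdk::trap("please attach at least 10B cycles");
    }
    42 // placeholder price
}
```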


DFINITY plans to include the proposed changes to the base fee and instruction fee in the replica version that it will propose on Friday. You can see the precise changes in this commit.

DFINITY plans to include the canister creation fee change a week later.


Does this mean a sudden influx of users would most likely crash the subnet/network entirely? Something like what we saw with Bob.

If an app suddenly goes viral tomorrow and one million people try to log in, the IC won't be able to handle it? Rather, we'd have to scale up gradually?

Canister devs still need to deal with all kinds of rough edges. We're addressing them as they pop up. But once we have enough of the pieces in place and we get significant load pushing significant growth, the protocol allows us to scale up to a point where not even a bunch of huge applications experiencing a sudden influx of users (or bots) will bring down the system. It's just that we're not there quite yet.


I see, so it's a gradual process of discovering new ways to fail and fixing them. Fair approach.

I am opposed to increasing costs as a solution to network congestion. There must be better ways to solve this problem that need to be explored. If anything, we should be finding ways to keep costs even lower while maintaining resources for applications. I personally would like to see this proposal rolled back.
