Calculate an estimate of cycles consumption: a universal example

Note: I previously created another thread on the topic of cycle consumption. That thread focuses on measuring cycle consumption, whereas this one is about calculating an estimate. I encourage you to check out the previous thread as well.

Estimating cycle consumption is complicated, and it is difficult to come up with a one-size-fits-all solution. That being said, I have tried to put together a fairly universal example that covers most types of cycle transactions. If we manage to analyze the cycle consumption for this example, I think it will be of great value to future developers estimating the costs of their canisters.

This table will be used to determine cycles consumption for different operations.

The setup
We have two canisters: main and secondary. A client makes an update call to entry() in the main canister with a payload of 100 kB. entry() invokes another function, priv(), with no payload. priv() does something arbitrary and returns a boolean. entry() makes an inter-canister call to the secondary canister’s store() containing the 100 kB payload. store() then stores the 100 kB, and we assume that the total memory usage to store the payload in the secondary canister becomes 100 kB.

I have visualized the example in a flowchart below.

The costs
Here I’ll add the cycle costs for the example.

[Removed, see post below]

Missing pieces
Please help calculate the missing costs!

  • Compute Percent Allocated Per Second
  • Xnet Call
  • Xnet Byte Transmission
  • Ingress Message Reception
  • Ingress Byte Reception
  • ?

Let me know if we should modify the example to provide a better analysis.


Appreciate the analysis. I was wondering: when the primary canister calls store in the secondary canister, is the primary canister spending cycles, or is it fully on the secondary canister’s end?
Basically, does a canister calling another canister spend cycles for non-query methods?

Good question. My understanding is that the primary canister spends some cycles for the inter-canister call to the secondary canister. In particular, I expect a fixed cost of 260,000 cycles plus 1,000 cycles for every byte sent.
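To make those numbers concrete, here is a minimal Python sketch of that estimate. The constants are simply the figures quoted above (treat them as estimates from this thread rather than an authoritative fee table):

```python
# Rough estimate of what the caller pays in transmission fees for one
# inter-canister call, using the figures quoted above.
XNET_CALL_FEE = 260_000   # fixed fee per inter-canister call (thread figure)
XNET_BYTE_FEE = 1_000     # fee per byte sent (thread figure)

def inter_canister_call_cost(bytes_sent: int) -> int:
    """Estimate the cycles the caller spends on one call, transmission only."""
    return XNET_CALL_FEE + XNET_BYTE_FEE * bytes_sent

# Example: the 100 kB payload from the original post.
print(inter_canister_call_cost(100_000))  # 260,000 + 1,000 * 100,000 = 100,260,000
```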

@chenyan could you chime in on this and the original post in general (or ping someone else appropriate)?

Right, the primary canister will send some cycles to the secondary canister when making inter-canister calls. The secondary canister uses the provided cycles to execute the code and returns the unused cycles back to the primary canister.


Interesting! This is new information for me.

Suppose that there is one primary canister that creates many child canisters, and all interactions are done via the primary canister, which in turn makes inter-canister calls to the children. Does this mean that the child canisters only need to be supplied with cycles for the installation and storage (if the primary canister pays for the computation)?

Child canisters have to pay for themselves. Sending cycles from caller to callee is optional. Also, canisters consume cycles even if they are not called (although not very much if they don’t use a lot of storage).

I recently asked a similar question internally:

For inter-canister calls, who is paying? If we have three canisters in a call chain A → B → C, does A have to pay the whole bill?

and @dsarlis answered like this:
A pays for the sending request-response pair between A and B and B pays for sending the request-response pair between B and C. Each canister also pays for processing that happens locally — so A pays for processing the response from B while B pays for processing the request from A.
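Here is a rough Python sketch of how I read that attribution, just to make explicit who is charged for what in the A → B → C chain. The fee figures are the estimates mentioned earlier in this thread, not authoritative values:

```python
# Who pays what in a call chain A -> B -> C, as I read the answer above.
# Fee figures are the estimates used earlier in this thread, not official values.
XNET_CALL = 260_000   # fixed fee per inter-canister call
PER_BYTE = 1_000      # fee per byte transmitted

def transmission_paid_by_caller(request_bytes: int, response_bytes: int) -> int:
    """Cycles the caller pays for one request/response pair."""
    return XNET_CALL + PER_BYTE * (request_bytes + response_bytes)

# A pays for the A <-> B pair; B pays for the B <-> C pair.
a_pays = transmission_paid_by_caller(request_bytes=100_000, response_bytes=100_000)
b_pays = transmission_paid_by_caller(request_bytes=100_000, response_bytes=100_000)

# On top of that, each canister pays for the instructions it executes locally:
# A for processing B's response, B for processing A's request and C's response,
# and C for processing B's request.
print(a_pays, b_pays)  # 200,260,000 each for 100 kB requests and responses
```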


Great, then it is as I initially thought. I got the wrong impression from chenyan’s answer that the caller would automatically provide cycles to the callee. Thanks for clarifying!

I feel like the documentation for cycle consumption is lacking. As someone who is designing a self-sustaining protocol on the IC, cycle consumption is crucial to me. I don’t see much interest from developers in this yet, but I assume that will change when Open Internet Services are available, the number of users grows, and long-term sustainability becomes more relevant.

Do you know where I can turn to get help with this?


I totally agree with you. However, the documentation is lacking in many places, and I think we need help from the community to write documentation, blog posts, tutorials, and example applications or even frameworks. (Tagging @ais )

It would be great if you could dig deeper and make your learnings available to the community. If you know what you’re looking for or have specific questions, we can support you here in the forum or on Discord.

We can also always help to provide some financial compensation in the form of bounties or grants.

I have been trying to dig into this for a while now using the forum as well as the Discord. While I have learned much, the participation in this thread and in the Discord has been pretty low so far and my other thread has not reached a solution yet. Now I am kind of stuck and don’t know how to proceed.

I already have a grant project (in which this rabbit hole is out of scope) that is eating my time so unfortunately I don’t have time to take on a dedicated project regarding this as of now.

Just want to add that I totally understand that there are infinitely many other things that DFINITY has to work on, but maybe this should be prioritized more? With the IC’s gas model and the direction it is heading, I think it is crucial for us developers to get more insight into this.

@gabe As @domwoe said, our documentation is indeed lacking in this case (well other cases too but let’s focus on cycles consumption that you mentioned). I can also refer you to our community conversation last fall where we covered how canisters are charged cycles on the IC. Probably not 100% complete (e.g. we didn’t cover how cycles transfers work between canisters) but still a good reference point for you and the community in general.


Since there are a lot of eyes on this topic, I’d like to tag along with a question regarding inter-canister transfers. The fact that a caller in a c2c call also pays some cycles (besides paying for the call as a Wasm op) was news to me. Could someone please go into detail on what exactly we can expect in the following scenario?

  1. Main heartbeat canister
  • pays for the heartbeat() functionality
  • pays ? how much? a fixed value? for inter-canister calls to all canisters that need heartbeat functionality, say every minute. Is this intended to simply cover the costs of “returned data”, or does it depend on what the called canister does in remote_heartbeat?
  2. Receiving remote_heartbeat() canister
  • pays for its own computation inside remote_heartbeat()
  • uses some of the cycles from (1)? How much? Can we model this, or is it computation dependent?

Bonus question: does this behavior change when using the new notify_* API? Would that ensure that there’s no additional cycle cost?

I looked a bit through the code. Highly recommended! The code and comments trump the documentation :slight_smile:
So here’s what I found. @dsarlis please correct me if I get a wrong impression.

I don’t think you pay for the heartbeat functionality itself, but you pay for the instructions inside your heartbeat function just as you would if it were called from another canister. (I think) you can think of it as being called from the management canister. But it adds up, because you need to pay every round (edit: invocation).
See Heartbeat for details.
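To illustrate the “it adds up” part, here is a tiny back-of-the-envelope sketch in Python. It assumes, purely for illustration, one heartbeat invocation per second and the instruction fee used later in this thread (4 cycles per 10 update instructions); there may also be a fixed per-execution fee on top, which I leave out here:

```python
# Back-of-the-envelope: instruction cost of a heartbeat body over a day,
# assuming ~1 invocation per second (an assumption, scheduling may differ)
# and 4 cycles per 10 update instructions (figure used later in this thread).
def heartbeat_instruction_cost_per_day(instructions_per_invocation: int,
                                        invocations_per_day: int = 24 * 3600) -> float:
    cycles_per_instruction = 4 / 10
    return cycles_per_instruction * instructions_per_invocation * invocations_per_day

# Example: a heartbeat body of 1,000,000 instructions burns ~34.6B cycles per day.
print(f"{heartbeat_instruction_cost_per_day(1_000_000):,.0f}")
```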

Concerning inter-canister calls: Looking at the fee handling of requests in the Cycle Account Manager, the caller has to pay the following:

  • The network cost of sending the request payload, which depends on the size (bytes) of the request.
  • The maximum number of cycles (based on max_num_instructions) that could be required to process the response.
  • The max network cost of receiving the response, since we don’t yet know the exact size of the response.

So you pay a lot upfront. But at some point you should get a response from the callee, and then the actual accounting is done inside execute_canister_response (roughly sketched below).
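Here is a rough mental model of that upfront reservation in Python. The function and constant names are mine, not the replica’s, and the figures are the ones used elsewhere in this thread; as I understand it, the unused part of the reservation is refunded once the response has actually been executed:

```python
# Mental model (not the replica's actual code) of what the caller reserves
# up front when making an inter-canister call. Figures are thread estimates.
XNET_CALL_FEE = 260_000          # fixed fee per call
XNET_BYTE_FEE = 1_000            # fee per byte transmitted
CYCLES_PER_INSTRUCTION = 4 / 10  # "4 cycles per 10 update instructions"

def upfront_reservation(request_bytes: int,
                        max_response_bytes: int,
                        max_response_instructions: int) -> int:
    """Cycles reserved at send time; response size and instruction count are
    unknown at this point, so maxima are used (unused cycles refunded later)."""
    send_request = XNET_CALL_FEE + XNET_BYTE_FEE * request_bytes
    receive_response_max = XNET_BYTE_FEE * max_response_bytes
    execute_response_max = int(CYCLES_PER_INSTRUCTION * max_response_instructions)
    return send_request + receive_response_max + execute_response_max
```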

I’m not exactly sure how this works with inter-canister calls inside heartbeat. But I guess, although you can’t “await them” in the heartbeat (edit: This is wrong, you can use await inside a heartbeat), the call context is created nevertheless and at some point the response is received and executed. I don’t think it would be a good idea to do inter-canister calls in every heartbeat :slight_smile:

However, in any case, I don’t think the fees of your heartbeat canister depend on the internals of the remote_heartbeat function, unless it never returns.
Except if you intentionally send some cycles for the callee to use.

Yep, I’d say so.

I’m not sure how this works exactly; it seems to be handled somewhere else in the code, at the application level rather than the system level, so the canister developer can decide.

Another thing I wondered: The documentation mentions Xnet Calls and Xnet Byte transmission. Shouldn’t this be inter-canister calls or really only cross-subnet calls? Are there additional costs for Xnet calls in comparison to inter-canister calls on the same subnet?


I don’t think anything changes when using the notify API (at least for now). There were no changes to the underlying system. There should still be a response by the callee, but the response will trap immediately. I assume the cycle accounting will be done although the execution traps, but it would be good if someone could validate.

I talked to @gabe a bit more over Discord. There we also dived into the “idle” costs, i.e. what a canister pays if it doesn’t get called.

If we have a look into the scheduler code, we see that charge_canisters_for_resource_allocation_and_usage is called in every round of execution. This function calls charge_canister_for_resource_allocation_and_usage for every canister. Here we can see two parts:

  1. Memory allocation and usage costs
  2. Compute allocation costs

First, if you create a canister and don’t set memory_allocation and compute_allocation to a specific value, these values will be zero by default. We’ll come back shortly to discuss what they mean.

So, if there’s no resource allocation, then charge_for_compute_allocation will charge nothing, and we’ll charge only for the actual used memory of the canister.

But what’s compute and memory allocation?

Compute allocation
If there are more tasks to be done than a single round of execution can perform, we need to decide which canister may execute and which has to wait. This is one of the main jobs of the scheduler. It will sort the canisters by priority according to some scheduling strategy. The priority is based on the compute_allocation setting and on how long the canister has already been waiting to be scheduled. Hence, a non-zero compute_allocation gives you a higher priority but still does not guarantee execution in a given round. However, you will have to pay a fee in every round for this higher priority.

Memory Allocation
You can reserve storage on a subnet by setting memory_allocation; you’ll then be charged as if you were using all of this storage.
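To tie the two parts together, here is a small Python sketch of how I would estimate these idle charges, using the 127,000 cycles per GB per second storage fee referenced in this thread. With no memory_allocation set you pay for actual usage; with one set you pay for the reservation (compute allocation is left out since it is zero by default):

```python
# Rough model of the "idle" storage charge described above.
GB = 10**9
GB_STORAGE_PER_SECOND = 127_000   # cycles per GB per second (thread figure)

def idle_storage_cost(used_bytes: int, seconds: float,
                      memory_allocation_bytes: int = 0) -> float:
    """Estimated cycles charged for storage over `seconds`. If a memory
    allocation is set, you pay as if all of it were used."""
    charged_bytes = memory_allocation_bytes or used_bytes
    return GB_STORAGE_PER_SECOND * (charged_bytes / GB) * seconds

# Example: a canister holding 100 kB for one day, no reserved allocation.
print(idle_storage_cost(used_bytes=100_000, seconds=24 * 3600))  # ~1.1M cycles
```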


@domwoe has been doing a great job here from what I’m reading. I’ll try to give some more colour on certain aspects that were maybe still unclear.

(I think) you can think of it as being called from the management canister

Not sure that’s the best analogy (mostly want to avoid any confusion that there’s some extra inter-canister call involved here). I’d say you can think of it as being triggered by the IC periodically.

But it adds up, because you need to pay every round.

It usually means you pay every round (because your heartbeat is usually called on every round) but it actually depends on the scheduling (which you have explained in a later comment).

I’m not exactly sure how this works with inter-canister calls inside heartbeat. But I guess, although you can’t “await them” in the heartbeat, the call context is created nevertheless and at some point the response is received and executed. I don’t think it would be a good idea to do inter-canister calls in every heartbeat :slight_smile:

I’m not sure I follow what you mean by “you can’t await in the heartbeat”. You can definitely perform inter-canister calls and await them in your heartbeat. Current execution will be stopped at the await point and will be resumed when you get the response invoking the appropriate callback. It does mean that you should be careful to not send too many requests from your heartbeat if you’re still waiting on previous ones (i.e. paraphrasing your comment that it’s not a good idea to do inter-canister calls on every heartbeat) but apart from that there’s nothing special on the system level if you make calls from your heartbeat. As far as cycles consumption is concerned, again, no difference. Caller has to pay just like you described thoroughly above.

Re the remote_heartbeat: Indeed, the canister will pay for its own computation inside the remote_heartbeat(). And it doesn’t affect your main heartbeat canister.

That said, you can, if you want, include cycles in the call you send to remote_heartbeat. Then, the remote_heartbeat canister has a choice to accept some or all of those cycles (accepting means they get added to its balance) and use those cycles however it likes (any cycles not explicitly accepted would be returned to the original caller). E.g. to send more requests to other canisters, or pay for the current or future computations and so on. But this is not necessary to happen. It’s up to the application to decide if it wants to require cycles on the requests to process them.

Another thing I wondered: The documentation mentions Xnet Calls and Xnet Byte transmission. Shouldn’t this be inter-canister calls or really only cross-subnet calls? Are there additional costs for Xnet calls in comparison to inter-canister calls on the same subnet?

The fee is the same for inter-canister calls on the same or cross subnet. I think we probably picked Xnet as the more “general” term but the idea is that the fee is the same and it should be amortised to capture both same subnet and cross-subnet communication to simplify things. Also, in the fee description in the documentation you linked we do use inter-canister calls (but I guess it was still not clear enough, maybe we should fix it).

I don’t think anything changes when using the notify API (at least for now). There were no changes to the underlying system. There should still be a response by the callee, but the response will trap immediately. I assume the cycle accounting will be done although the execution traps, but it would be good if someone could validate.

Correct.

Re the “idle” costs, a small correction for posterity: The charge is not happening every round, we actually do it periodically on some interval. Here’s the relevant config param.


Thanks a lot @dsarlis for taking the time and clarifying my post!

This was a misunderstanding on my side, and I can’t find what triggered it :slight_smile:
Ok - we can do async calls inside the heartbeat function just as everywhere else.

I’d understand inter-canister call as the more general term and Xnet call as a special inter-canister call across subnets.


Huge thank you to @domwoe for digging into this, and also @dsarlis for the clarifications.

So I’ve used the information gathered to fill in the missing pieces and demonstrate how cycles are consumed in an example scenario. I try to break down each cycle consumption and explain what it is for and where it is deducted from.

Setup
Same as the original post, but I’ve decided to ignore the priv() function since it does not add anything to the example.

Assumptions
We assume that the Wasm code of both canisters is 10 kB in size and that the Wasm heap and global variables are 0 in size, for simplicity. We also assume that stable memory is 0 in size initially. The number of Wasm instructions for each function is assumed to be 10. Each function is assumed to return a payload of the same size as in the request. We also ignore the size of any data structures, i.e. it is assumed that the memory allocation to store, e.g., 10 kB of data is 10 kB.

Furthermore, we assume that the reserved compute allocation and the reserved memory allocation are set to zero (which is the default). If you want to check this in your canister, you may do so with dfx canister status.

Initial cycle consumption
Create canisters

  • 100,000,000,000 cycles deducted from the deployer’s wallet for Canister Created, i.e. creating the main canister
  • 100,000,000,000 cycles deducted from the deployer’s wallet for Canister Created, i.e. creating the secondary canister

Memory & compute allocation

  • 127,000 * 10*10^-6 cycles per second deducted from main for GB Storage Per Second, i.e. the main canister’s code storage (10 kB)
  • 127,000 * 10*10^-6 cycles per second deducted from secondary for GB Storage Per Second, i.e. the secondary canister’s code storage (10 kB)
  • 0 cycles per second for Compute Percent Allocated Per Second, since we have not reserved this (see assumption above)

Dynamic cycle consumption
Update call

  • 1,200,000 cycles deducted from main for Ingress Message Reception, i.e. the user’s update call to entry()
  • 1,200 * 100,000 cycles deducted from main for Ingress Byte Reception, i.e. the 100 kB payload in the user’s update call to entry()

Execution

  • 590,000 cycles deducted from main for Update Message Execution, i.e. a fixed cost for the execution of the update call to entry()
  • 4 cycles deducted from main for Ten Update Instructions Execution, i.e. executing 10 Wasm instructions in entry()

Inter-canister call (request)

  • 260,000 cycles deducted from main for Xnet Call, i.e. the inter-canister call from main to secondary
  • 1,000 * 100,000 cycles deducted from main for Xnet Byte Transmission, i.e. the 100 kB payload in the request of the inter-canister call
  • 1,000 * 100,000 cycles deducted from main for Xnet Byte Transmission, i.e. the 100 kB payload in the response from the inter-canister call (remember that we assumed response size = request size)

Execution

  • 590,000 cycles deducted from secondary for Update Message Execution, i.e. a fixed cost for the execution of the update call to store()
  • 4 cycles deducted from secondary for Ten Update Instructions Execution, i.e. executing 10 Wasm instructions in store()

Memory allocation

  • + 127,000 * 100*10^-6 cycles per second deducted from secondary for GB Storage Per Second, i.e. the stable memory in the secondary canister becomes 100 kB (note that this is in addition to the previous memory allocation)

Inter-canister call (response)

  • 1,000 * 100,000 cycles deducted from main for Xnet Byte Transmission, i.e. the 100 kB payload in the response from the inter-canister call
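To make this breakdown easier to reuse, here is a small Python sketch that totals the dynamic costs above for the assumed sizes. The fee constants are copied straight from the list (treat them as this thread’s estimates; actual fees may change), and the 100 kB response byte charge is counted once even though it appears under both the request and the response sections:

```python
# Totals the dynamic (per-call) costs of the example, using the fee figures
# from the breakdown above. Treat the result as an estimate, not an official
# number; fees can change over time.

# Fee constants (cycles), as used in the breakdown
INGRESS_MESSAGE_RECEPTION = 1_200_000
INGRESS_BYTE_RECEPTION = 1_200          # per byte
UPDATE_MESSAGE_EXECUTION = 590_000
TEN_UPDATE_INSTRUCTIONS = 4             # per 10 Wasm instructions
XNET_CALL = 260_000
XNET_BYTE_TRANSMISSION = 1_000          # per byte
GB_STORAGE_PER_SECOND = 127_000         # per GB per second

# Assumptions from this post
PAYLOAD_BYTES = 100_000                 # 100 kB request; response assumed equal in size
INSTRUCTIONS_PER_FUNCTION = 10

def instructions_cost(n_instructions: int) -> float:
    return TEN_UPDATE_INSTRUCTIONS * n_instructions / 10

# Deducted from `main`
main_costs = (
    INGRESS_MESSAGE_RECEPTION                       # user's update call to entry()
    + INGRESS_BYTE_RECEPTION * PAYLOAD_BYTES        # 100 kB ingress payload
    + UPDATE_MESSAGE_EXECUTION                      # executing entry()
    + instructions_cost(INSTRUCTIONS_PER_FUNCTION)  # 10 Wasm instructions in entry()
    + XNET_CALL                                     # inter-canister call to secondary
    + XNET_BYTE_TRANSMISSION * PAYLOAD_BYTES        # request bytes
    + XNET_BYTE_TRANSMISSION * PAYLOAD_BYTES        # response bytes (counted once)
)

# Deducted from `secondary`
secondary_costs = (
    UPDATE_MESSAGE_EXECUTION                        # executing store()
    + instructions_cost(INSTRUCTIONS_PER_FUNCTION)  # 10 Wasm instructions in store()
)

# Ongoing storage cost of the newly stored 100 kB in `secondary`
secondary_storage_per_second = GB_STORAGE_PER_SECOND * PAYLOAD_BYTES / 10**9

print(f"main:      {main_costs:,.0f} cycles")
print(f"secondary: {secondary_costs:,.0f} cycles"
      f" (+ {secondary_storage_per_second:.1f} cycles/s for storage)")
```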

Hope this comes in handy for other fellow devs!


This should be required reading for all IC developers. :open_book:

BTW, I think you listed this twice:

1,000 * 100,000 deducted from main for Xnet Byte Transmission, i.e the 100 kB payload in the response from the inter-canister call

One thing that’s interesting is that the callee canister pays for an ingress message, whereas the caller canister pays for an inter-canister call. (I assume that’s because user principals don’t own cycles, so there’s nobody else you could charge for an ingress message. That’s also why ic0.accept_message isn’t necessary for inter-canister calls, I think.)

I’m a little surprised that “Ingress Message Reception” costs 1.2 M cycles. That’s more than twice the number of cycles charged for “Update Message Execution”. At least ingress messages aren’t charged the latter as well, I suppose…

Indeed! Wait till you see the bill for the BTC integration. I don’t mean that in a negative sense, but I think invoking the BTC integration API will be, necessarily, relatively expensive. Therefore, more ICP burn.


Glad you say so! Happy to help fill in a missing piece in the IC docs.

This was intentional. For inter-canister calls, cycles are consumed both for bytes sent in the request and the response. In the assumptions, we assumed that a response is always the same size as the request, and thus we get two costs of 100 kB each (I decided to break it up into two costs to indicate this). I added a comment for extra clarity :slight_smile:

It’s great that you fact-check this, and I hope more people do!
