Cycle burn rate heartbeat

Honestly, I think a way to create custom heartbeats should be provided by the IC by default. It's a feature needed for many use cases, and in my opinion we shouldn't rely on a third-party service that has to be trusted and somehow funded to do something so basic.

4 Likes

That is exactly the spirit of open services. Why shy away from it?

1 Like

Because in my opinion having to pay to use a basic feature like custom heartbeats is just stupid, and the fact that it isn't offered by the "framework" by default, and that we as a community have to gather and discuss how and by whom it will be implemented, is even worse. It looks really bad from an outside perspective. Imagine introducing a friend of yours to ICP:
-“Hey how can I define a custom heartbeat?”
-"Well, you can't. The community has been discussing it, but there's still no ETA, and you'll have to pay for it. If you don't want to wait, you can do it yourself and spend a lot of cycles every day."

Most would laugh. I welcome the nature of web3, but we should be building new stuff, not a web3 version of setTimeout.

2 Likes

What’s the difference between that and any other feature of the IC? You pay to compute. It costs X cycles to store a value in stable64 memory, it costs Y cycles to call the management canister, it costs Z cycles to call the (community provided) cron canister. Computation as a service is the entire model.

I’m not saying that a custom-length heartbeat would be bad - there is a line between stuff the IC should provide and stuff it shouldn’t, and I’m not sure which side I think this falls on - but scheduled execution and consensus every second for potentially every canister on a subnet does have a computational cost, and the cycle cost is meant to represent that.

The difference is that you pay cycles for what you actually use, be it computation or storage. If I want to run a function once every 24 hours and have to pay for useless calls every second, that is stupid. If it were a very niche use case I'd agree with you, but this is something lots of dApps need. I want to use the IC to build new stuff, not reinvent the wheel.

1 Like

So who’s gonna pay for keeping track of the schedules?

Ideally nobody; it should be part of the protocol. Wouldn't it be better performance-wise to have a system-level canister (or more than one, I'm not sure about the scalability requirements) take care of scheduling, instead of having hundreds of user-level ones wasting CPU cycles every second for no reason?

1 Like

I think a lot of people agree that a better solution needs to be available for working with heartbeat functionality. The difference in opinion seems to be about who gets to design, build, and maintain that canister. Is it dfinity, by developing a system-level canister as you say? Or is it the community, with open-sourced, blackholed services that can be audited?

There are pros and cons to both approaches, IMO. The system one would be easier for devs, but it would take resources and time, and it would probably be a single-approach system. On the other hand, if people come up with many variations, publish them on GitHub, license them permissively, and blackhole the canisters, it will probably be faster to test, reasonably "safe", and leave room for many possible standards. Eventually one standard could evolve out of the many, and dfinity could "adopt" it, either directly or through a perpetual grant or whatever.

1 Like

I agree, but I'd like progress to be a bit quicker, because while we discuss which approach is better, who's going to do it, and how it's going to be funded, there are devs who need the feature and either have to pay for more than they actually use or wait for a solution to be released, whenever that happens.
I just want to avoid another "token standard" scenario: the community had a year to discuss it, and we all know what happened with that. Dfinity had to step in to somehow sort out the mess it had become, and as a result the whole ecosystem suffered.

1 Like

If your goal is quick progress at the expense of the best solution, that is a goal built for community-made canisters. Those can be iterated upon and refined. But to embed something in the protocol is to make a serious support commitment; it has to be the version developers can use for everything and implementors can support forever. ‘Move fast and break things’ doesn’t work so well in that context. The ‘mess’ of the token standard as you put it is primarily due to several mistakes in existing standards - mistakes dfinity is very capable of making itself, and if we had officially centralized on a standard with those mistakes a year ago we’d never have heard the end of it.

1 Like

When I talk about quick progress, I don't mean I want the feature delivered tomorrow. What I'd like is an active conversation and a general idea of how the issue is going to be solved. If the ETA to do it right is 6 months, so be it; at least I'd know that in 6 months I'll have a solution.

What I see instead is a very common use case that has been ignored for over a year and hasn't made much progress even at the concept stage, let alone implementation. This thread is almost 2 months old and had been inactive for 12 days until I posted, yet we still have no idea what's going to happen.

3 Likes

I agree with @Zane. It's not even just building, auditing, and blackholing a public heartbeat canister that's a problem. It's also convincing enough people to start using it so that the costs get spread out enough to make it worth it.

I'd be in favor of charging the canister cycles, the way storage is charged, for the replica having to maintain the canister's desired cron schedule. That would still be better than the status quo, as the current costs are ridiculously high and don't make sense, IMO.

1 Like

FWIW my heartbeat canister (which doesn’t do inter-canister calls) burns around 90 B cycles every 12 hours, so roughly 0.18 T cycles every 24 hours.

This is lower than @PaulLiu’s tip jar canister, which apparently burns ~0.5 T cycles every 24 hours.

I guess it also depends on how much work is being done in your heartbeat function.

FYI, I just found out where the main cost in having a heartbeat handler comes from: it’s the 590K cycles charged per message execution (since a heartbeat is considered to be a message/transaction).

Considering a block rate of about 1.1 blocks/s (I was looking at a couple of subnets that execute pretty much only heartbeat messages), this comes out to about 55B cycles per day, or 20T cycles per year. 20T cycles is the cost of 5 GB of storage for a year (which seems like a lot), or just 7 hours of 100% CPU usage (which seems quite cheap).

Personal opinion: I would see the use of a system-provided mechanism to execute heartbeats at a lower frequency (in blocks or seconds) mostly as a way of spreading out the execution of those heartbeats over time. I.e. instead of (very likely) all heartbeats trying to run at once at midnight or on the hour (because a canister developer is most likely to code it along the lines of time() % interval == 0), they would run every N rounds or every M seconds, but with a random offset (e.g. computed from the canister ID). Something like that may offset the cost of implementing and maintaining the feature.
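
To illustrate the idea, here is a rough canister-side sketch in Rust using ic-cdk (the offset scheme and names are illustrative only, not an actual proposal or implementation): instead of every canister checking time() % interval == 0, each one derives a stable offset from its own canister ID so the periodic work is spread across the hour.

```rust
use std::cell::Cell;

use ic_cdk::api::{id, time};
use ic_cdk_macros::heartbeat;

const INTERVAL_SECS: u64 = 3_600; // run the periodic job once per hour

thread_local! {
    // Timestamp (in seconds) of the last slot we actually ran for.
    static LAST_RUN: Cell<u64> = Cell::new(0);
}

// Stable per-canister offset in [0, INTERVAL_SECS), derived from the canister ID,
// so different canisters fire at different points within the hour instead of
// all firing on the hour.
fn offset_secs() -> u64 {
    id().as_slice()
        .iter()
        .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(*b as u64))
        % INTERVAL_SECS
}

#[heartbeat]
fn heartbeat() {
    let now = time() / 1_000_000_000; // ic0.time() is nanoseconds since epoch
    // Most recent scheduled slot: the hour boundary shifted by this canister's offset.
    let due = now - (now - offset_secs()) % INTERVAL_SECS;
    LAST_RUN.with(|last| {
        if due > last.get() {
            last.set(due);
            // ... the actual periodic work goes here ...
        }
    });
}
```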

6 Likes

Personal opinion: I would see the use of a system-provided mechanism to execute heartbeats at a lower frequency (in blocks or seconds) mostly as a way of spreading out the execution of those heartbeats over time. I.e. instead of (very likely) all heartbeats trying to run at once at midnight or on the hour

Exactly! It frees up more CPU cycles on the nodes to do other stuff instead of wasting resources running stuff that’s not needed.

Yes and no. It saves CPU cycles for canisters, but there still needs to be logic somewhere to decide for each heartbeat handler whether it should run that round or not. So we would be merely moving said logic out of canisters and not charging them for it.

As said, for me the main benefit would be that (as opposed to developers likely all choosing to execute periodic logic at the same time, causing latency spikes) the system could randomly spread out periodic heartbeats and the load that comes with them. I.e. it would only guarantee that the heartbeat is called once an hour (e.g. at 23 past) rather than once an hour on the hour.

I would like to chime in on this. At Entrepot we run a few update calls every heartbeat for each NFT canister, and so our cycles burn rate is about 0.5T cycles per day per NFT canister. We are maintaining about 130 canisters at this point, so we are burning through around 65T cycles per day. If we had a nice configurable heartbeat cron (and could run every 10 seconds instead of every second), we could immediately cut our costs by 10x.

We are currently thinking we might just use an external (centralized) cron service to call functions regularly, rather than rely on heartbeat, because of the current costs (a rough sketch of that approach follows after the notes below). Our cycle burn per canister before heartbeat was closer to 0.05T cycles/day.

A few other notes:

  • yes, we should just push this cost on NFT creators, but we don’t have good tooling for that yet
  • yes, we could optimize our heartbeat update calls, but not by THAT much I don’t think
  • generally, we just need a nice cron service on the IC, and I don’t care where it is. Could be system level (my strong preference), but even a community cron service would work. We might just build one for ourselves and then open it up to the community, but if anyone else is working on it that would be awesome
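
For illustration, here is a minimal sketch of the external-cron alternative in Rust with ic-cdk (the run_cron method and its guard are hypothetical, not Entrepot's actual code): the cron service calls an update method, so the canister only pays for calls that actually arrive instead of a heartbeat fee every round.

```rust
use std::cell::Cell;

use ic_cdk::api::time;
use ic_cdk_macros::update;

// Minimum spacing between runs; the external cron service would call this
// roughly every 10 seconds.
const MIN_INTERVAL_SECS: u64 = 10;

thread_local! {
    static LAST_RUN: Cell<u64> = Cell::new(0);
}

// Hypothetical entry point for an external cron service. A real version
// would also check the caller, since anyone can invoke a public update method.
#[update]
async fn run_cron() {
    let now = time() / 1_000_000_000; // nanoseconds -> seconds
    let should_run = LAST_RUN.with(|last| {
        if now >= last.get() + MIN_INTERVAL_SECS {
            last.set(now);
            true
        } else {
            false
        }
    });
    if should_run {
        // ... kick off the periodic inter-canister update calls here ...
    }
}
```
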
5 Likes

How about making the heartbeat wake up every 10 blocks instead of every block across the whole network, as a start for now?

FYI: Heartbeat improvements / Timers [Community Consideration]

4 Likes

0.5 TC a day works out to almost $0.70 per day, which is roughly $21 per month or $252 per year.

I tried to reproduce those costs for a simple heartbeat canister.

I created a simple benchmark to measure the heartbeat cost for empty canisters written in both Rust and Motoko; see GitHub - maksymar/heartbeat-cost
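
For reference, an "empty" heartbeat canister in Rust looks roughly like the sketch below (a minimal illustration, not necessarily the exact code in the linked benchmark); with an empty body, the measured cost is essentially just the fixed per-message fee.

```rust
use ic_cdk_macros::heartbeat;

// Exported as canister_heartbeat and invoked by the system every round.
// With an empty body, the only significant cost is the fixed
// update_message_execution_fee charged per invocation.
#[heartbeat]
fn heartbeat() {}
```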

Taking into account a median finalization rate of 1.09 blocks/s (or 917 ms per block) I got the following results after ~15 minutes of measurements:

  • Rust – ~21 TC/year
  • Motoko – ~71 TC/year

Calculation for Rust

  • execution_cost = update_message_execution_fee + instructions_to_cycles(0) = update_message_execution_fee per single heartbeat call
  • with update_message_execution_fee = 590_000, this comes out to 590_000 * (1_000/917) * (60*60*24*365) / 10^12 = 20.29 TC/year, very close to the measured ~21 TC/year
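
As a sanity check, here is the same arithmetic in a few lines of Rust, using the constants assumed above:

```rust
fn main() {
    let fee_per_heartbeat = 590_000.0_f64;        // update_message_execution_fee, in cycles
    let blocks_per_second = 1_000.0 / 917.0;      // ~1.09 blocks/s, i.e. 917 ms per block
    let seconds_per_year = 60.0 * 60.0 * 24.0 * 365.0;
    let cycles_per_year = fee_per_heartbeat * blocks_per_second * seconds_per_year;
    println!("{:.2} TC/year", cycles_per_year / 1e12); // prints ~20.29 TC/year
}
```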

The Motoko implementation has some extra code for sending a message, which results in a higher cost.

Without seeing the original code it's difficult to explain the 182 TC/year cost. Maybe it executes a heavy heartbeat payload on every call. In that case I'd suggest not executing the heartbeat payload on every call, which may reduce the cost by ~2.5x for Motoko or ~8.5x for Rust.

4 Likes