Cycle burn rate heartbeat

It sounds like what you really want is a hook for when a canister reaches the freezing threshold, so that you can provide a method that gets called and top up the cycles there.
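
No such hook exists today, but you can approximate it yourself. A minimal sketch in Rust with ic-cdk (the funder canister and its request_top_up method are hypothetical placeholders, not a real API):

```rust
use candid::Principal;

// Self-chosen low watermark (1T cycles here), comfortably above the
// canister's actual freezing threshold.
const LOW_WATERMARK: u64 = 1_000_000_000_000;

#[ic_cdk_macros::heartbeat]
fn heartbeat() {
    if ic_cdk::api::canister_balance() < LOW_WATERMARK {
        ic_cdk::spawn(async {
            // Hypothetical funding canister and method, for illustration only.
            let funder = Principal::from_text("aaaaa-aa").unwrap();
            let _res: ic_cdk::api::call::CallResult<()> =
                ic_cdk::call(funder, "request_top_up", (ic_cdk::id(),)).await;
        });
    }
}
```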

Do you have any numbers, e.g. the average cost per call? At a 2s interval, it could be half the cost, since the heartbeat runs roughly every 1s.

Or help us implement ICDevs.org - Bounty #17 - A DAO for Cycles - $10,000 - ht: cycle_dao

Yeah, if there were a way to trigger a function once the canister doesn’t have enough cycles to process the call, that would be a great addition.

That is exactly what https://tipjar.rocks does. It maintains a canister’s cycle balance at the average of the last 10 days, and refills as needed.
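
For illustration only (this is not tipjar’s actual code), the refill logic could look roughly like this: sample the balance daily, keep the last 10 samples, and top up back to their average.

```rust
use std::cell::RefCell;
use std::collections::VecDeque;

thread_local! {
    // Last 10 daily balance samples for one monitored canister.
    static SAMPLES: RefCell<VecDeque<u64>> = RefCell::new(VecDeque::new());
}

// Call once a day with the monitored canister's current balance.
fn record_sample(balance: u64) {
    SAMPLES.with(|s| {
        let mut s = s.borrow_mut();
        s.push_back(balance);
        if s.len() > 10 {
            s.pop_front();
        }
    });
}

// Cycles needed to bring `balance` back up to the 10-day average (0 if none).
fn top_up_amount(balance: u64) -> u64 {
    SAMPLES.with(|s| {
        let s = s.borrow();
        if s.is_empty() {
            return 0;
        }
        let avg = s.iter().sum::<u64>() / s.len() as u64;
        avg.saturating_sub(balance)
    })
}
```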

I wish there were an inspect_message analogue for heartbeats, where a canister could choose to accept or reject a heartbeat call and not pay any cycles if it rejects it… I don’t think this exists, though.
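
For comparison, this is what inspect_message already gives you for ingress calls with ic-cdk (“top_up” is just an example method name); nothing similar is exposed for heartbeats:

```rust
#[ic_cdk_macros::inspect_message]
fn inspect_message() {
    // Accept only selected ingress methods; anything else is dropped
    // before the canister pays for executing it.
    if ic_cdk::api::call::method_name() == "top_up" {
        ic_cdk::api::call::accept_message();
    }
}
```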

Honestly, I think a way to create custom heartbeats should be provided by the ICP by default. It’s a feature needed for many use cases, and in my opinion we shouldn’t rely on a third-party service that has to be trusted and somehow funded to do something so basic.

That is exactly the spirit of open services. Why shy away from it?

Because in my opinion having to pay to use a basic feature like custom heartbeats is just stupid, and the fact that it’s not offered by the “framework” by default, with the community having to gather and discuss how and by whom it will be implemented, is even worse. It looks really bad from an outside perspective. Imagine if you were to introduce a friend of yours to ICP:
-“Hey, how can I define a custom heartbeat?”
-“Well, you can’t. The community has been discussing it but there’s still no ETA, and you’ll have to pay for it; if you don’t want to wait you can do it yourself and spend a lot of cycles every day.”

Most would laugh. I welcome the nature of web3, but we should be building new stuff, not a web3 version of setTimeout.

What’s the difference between that and any other feature of the IC? You pay to compute. It costs X cycles to store a value in stable64 memory, it costs Y cycles to call the management canister, it costs Z cycles to call the (community provided) cron canister. Computation as a service is the entire model.

I’m not saying that a custom-length heartbeat would be bad - there is a line between stuff the IC should provide and stuff it shouldn’t, and I’m not sure which side I think this falls on - but scheduled execution and consensus every second for potentially every canister on a subnet does have a computational cost, and the cycle cost is meant to represent that.

The difference is that you pay cycles for what you actually use, be it computation or storage. If I want to run a function once every 24 hours and I have to pay for useless calls every second, that is stupid. If it were a very niche use case I’d agree with you, but this is something lots of dApps need. I want to use the IC to build new stuff, not reinvent the wheel.
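
To make that concrete, here is the workaround in question as a minimal ic-cdk sketch: the heartbeat is still invoked (and charged) every round, and all the code can do is return early until 24 hours have passed.

```rust
use std::cell::Cell;

const DAY_NS: u64 = 24 * 60 * 60 * 1_000_000_000;

thread_local! {
    static LAST_RUN: Cell<u64> = Cell::new(0);
}

#[ic_cdk_macros::heartbeat]
fn heartbeat() {
    let now = ic_cdk::api::time(); // nanoseconds since the epoch
    let due = LAST_RUN.with(|last| {
        if now.saturating_sub(last.get()) >= DAY_NS {
            last.set(now);
            true
        } else {
            false
        }
    });
    if due {
        daily_task(); // the actual once-a-day work
    }
}

fn daily_task() {}
```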

So who’s gonna pay for keeping track of the schedules?

Ideally nobody; it should be part of the protocol. Wouldn’t it be better, performance-wise, to have a system-level canister (or more than one, I’m not sure about the scalability requirements) take care of scheduling, instead of having hundreds of user-level ones wasting CPU cycles every second for no reason?

I think a lot of people agree that a better solution needs to be available for working with heartbeat functionality. The difference of opinion seems to be about who gets to design, build, and maintain that canister. Is it dfinity, through developing a system-level canister as you say? Or is it the community, with open-sourced, blackholed services that can be audited?

There are pros and cons to both approaches, IMO. The system one would be easier for devs, but it would take resources, it would take time, and it would probably be a single-approach system. On the other hand, if people come up with many variations, publish them on GitHub, license them permissively, and blackhole the canisters, it will probably be faster to test and reasonably “safe”, with many possible standards, etc. Eventually one standard could evolve out of the many, and dfinity could “adopt” it, either directly or through a perpetual grant or whatever.

I agree, but I’d like progress to be a bit quicker, because while we discuss which approach is best, who’s going to do it, and how it’s going to be funded, there are devs who need the feature and either have to pay for more than they actually use or wait for a solution to be released, whenever that happens.
I just want to avoid another “token standard” scenario: the community had a year to discuss it and we all know what happened. Dfinity had to step in to somehow sort out the mess it had become, and as a result the whole ecosystem suffered.

If your goal is quick progress at the expense of the best solution, that is a goal built for community-made canisters: those can be iterated upon and refined. But to embed something in the protocol is to make a serious support commitment; it has to be the version developers can use for everything and implementors can support forever. ‘Move fast and break things’ doesn’t work so well in that context. The ‘mess’ of the token standard, as you put it, is primarily due to several mistakes in existing standards, mistakes dfinity is very capable of making itself, and if we had officially centralized on a standard with those mistakes a year ago, we’d never have heard the end of it.

When I talk about quick progress, I don’t mean I want the feature delivered tomorrow. What I’d like is an active conversation and a general idea of how the issue is going to be solved. If the ETA to do it right is six months, so be it; at least I’d know that in six months I’ll have a solution.

What I see instead is a very common use case that has been ignored for over a year and hasn’t made much progress even at the concept stage, let alone implementation. This thread is almost two months old and was inactive for 12 days until I posted, yet so far we still have no idea what’s going to happen.

I agree with @Zane. It’s not even just building, auditing, and blackholing a public heartbeat canister that’s a problem; it’s also convincing enough people to start using it that the costs get spread out enough to make it worth it.

I’d be in favor of charging the canister storage cycles for the replica having to keep track of that canister’s desired cron schedule. That would still be better than the status quo, as the current costs are ridiculously high and don’t make sense IMO.

FWIW, my heartbeat canister (which doesn’t make inter-canister calls) burns around 90 B cycles every 12 hours, so roughly 0.18 T cycles every 24 hours.

This is lower than @PaulLiu’s tip jar canister, which apparently burns ~0.5 T cycles every 24 hours.

I guess it also depends on how much work is being done in your heartbeat function.

FYI, I just found out where the main cost of having a heartbeat handler comes from: it’s the 590K cycles charged per message execution (since a heartbeat is considered a message/transaction).

Considering a block rate of about 1.1/s (I was looking at a couple of subnets that pretty much execute only heartbeat messages), this comes out to about 55 B cycles per day, or 20 T cycles per year. 20 T cycles is the cost of 5 GB of storage for a year (which seems like a lot), or of just 7 hours of 100% CPU usage (which seems quite cheap).
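
A quick back-of-the-envelope check of those figures, taking the 590K per-execution charge and the ~1.1 blocks/s rate above as given:

```rust
fn main() {
    let per_message: u64 = 590_000; // cycles charged per message execution
    let messages_per_day = (1.1 * 86_400.0) as u64; // ~1.1 heartbeats/s
    let per_day = per_message * messages_per_day;
    let per_year = per_day * 365;
    println!("per day:  {:.1} B cycles", per_day as f64 / 1e9); // ~56 B, in line with the ~55 B above
    println!("per year: {:.1} T cycles", per_year as f64 / 1e12); // ~20 T
}
```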

Personal opinion: I would see a system-provided mechanism for executing heartbeats at a lower frequency (in blocks or seconds) mostly as a way of spreading out the execution of those heartbeats over time. That is, instead of (very likely) all heartbeats trying to run at once at midnight or on the hour (because a canister developer is most likely to code it along the lines of time() % interval == 0), they would run every N rounds or every M seconds, but with a random offset (e.g. computed from the canister ID). Something like that might offset the cost of implementing and maintaining the feature.
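
A sketch of that spreading idea (the hashing scheme here is illustrative, not any proposed standard): derive a stable per-canister offset from the canister ID and shift the interval check by it.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const INTERVAL_NS: u64 = 3_600 * 1_000_000_000; // e.g. run hourly

// Stable, roughly uniform offset in [0, INTERVAL_NS), derived from the canister ID.
fn canister_offset_ns() -> u64 {
    let mut h = DefaultHasher::new();
    ic_cdk::id().as_slice().hash(&mut h);
    h.finish() % INTERVAL_NS
}

// True once per interval, shifted by the per-canister offset, instead of
// the naive `time() % interval == 0` that fires for everyone at once.
fn is_due(now_ns: u64, last_run_ns: u64) -> bool {
    let off = canister_offset_ns();
    now_ns.saturating_sub(off) / INTERVAL_NS > last_run_ns.saturating_sub(off) / INTERVAL_NS
}
```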
