Cycle burn rate heartbeat

Yes, or even self-calls, although that is less safe - IC Cron - let's schedule some tasks bois - #3 by nomeata

It’s way cheaper to do this instead of using heartbeat. Because of costs, I moved to running a script on a server that calls an update method on my canister every 2 seconds. It would be nice if heartbeat costs could be reduced to make it as cheap as calling the canister from the “outside”.

I have this hacky system in my ICDevs bounty roadmap. A DAO that owns this “scheduler” public utility would be a fun project.

Maybe I’ll code it up for supernova.

Interesting problem, and data we’d need:

How many fire-and-forget inter-canister calls can fit into one round of consensus?

Do we need guaranteed delivery? Rogue canisters would probably break in-line processing of the returns… we’d need to use @nomeata’s upgrade pattern.

How do the economics play out? How much for 10,000 notifications, etc.?

It sounds like what you really want is a hook into when a canister reaches the freezing threshold, so that you can provide a method that will get called, where you could top up the cycles.

Do you have any numbers, e.g. average cost per call? At a 2s interval, it could be 1/2 of the cost, since heartbeat runs roughly every 1s.
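The arithmetic behind the “1/2 of the cost” estimate can be sketched directly; the rough once-per-second heartbeat rate is the only input, and exact cycle prices (which vary by subnet) are deliberately left out:

```python
SECONDS_PER_DAY = 24 * 60 * 60

# Heartbeat fires roughly once per second, so every second is a paid execution.
heartbeat_execs = SECONDS_PER_DAY // 1

# The external script calls an update method once every 2 seconds instead.
external_execs = SECONDS_PER_DAY // 2

ratio = external_execs / heartbeat_execs
print(heartbeat_execs, external_execs, ratio)  # 86400 43200 0.5
```

So at a 2 s interval you pay for half as many executions per day; whether that halves the bill exactly depends on per-call cycle prices, which this sketch does not model.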

Or help us implement ICDevs.org - Bounty #17 - A DAO for Cycles - $10,000 - ht: cycle_dao

Yeah, if there were a way to trigger a function once the canister doesn’t have enough cycles to process the call, that would be a great addition.

That is exactly what https://tipjar.rocks does. It will maintain a canister’s cycle balance at the average of the last 10 days, and refill as needed.
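The refill rule described above (top up toward the trailing 10-day average) can be sketched in a few lines; the function name and call shape here are made up for illustration and are not tipjar’s actual API:

```python
def refill_amount(daily_balances, current_balance):
    """Cycles to add so the balance reaches the average of the
    last 10 recorded daily balances (0 if already at or above it)."""
    window = daily_balances[-10:]          # trailing 10-day window
    target = sum(window) / len(window)     # average balance over that window
    return max(0, target - current_balance)

# If the canister held 100T cycles each of the last 10 days but is now at 40T,
# the service would top it up by 60T; if it is above the average, it adds nothing.
print(refill_amount([100] * 10, 40))   # 60.0
print(refill_amount([100] * 10, 120))  # 0
```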

I wish there were an analogue of inspect_message for heartbeats, where a canister could choose to accept or reject a heartbeat call and not pay any cycles if it chooses to reject it… I don’t think this exists, though.

Honestly, I think a way to create custom heartbeats should be provided by default by the ICP; it’s a feature needed for many use cases, and in my opinion we shouldn’t rely on a 3rd-party service that has to be trusted and somehow funded to do something so basic.

That is exactly the spirit of open services. Why shy away from it?

Because in my opinion having to pay to use a basic feature like custom heartbeats is just stupid. The fact that it’s not offered by the “framework” by default, and that we as a community have to gather and discuss how, and by whom, it will be implemented, makes it even worse; it looks really bad from an outside perspective. Imagine if you were to introduce a friend of yours to ICP:
-“Hey, how can I define a custom heartbeat?”
-“Well, you can’t. The community has been discussing it, but there’s still no ETA, and you’ll have to pay for it. If you don’t want to wait, you can do it yourself and spend a lot of cycles every day.”

Most would laugh. I welcome the nature of web3, but we should be building new stuff, not a web3 version of setTimeout.

What’s the difference between that and any other feature of the IC? You pay to compute. It costs X cycles to store a value in stable64 memory, it costs Y cycles to call the management canister, it costs Z cycles to call the (community provided) cron canister. Computation as a service is the entire model.

I’m not saying that a custom-length heartbeat would be bad - there is a line between stuff the IC should provide and stuff it shouldn’t, and I’m not sure which side I think this falls on - but scheduled execution and consensus every second for potentially every canister on a subnet does have a computational cost, and the cycle cost is meant to represent that.

The difference is that you pay cycles for what you actually use, be it computation or storage. If I want to run a function once every 24 hours and I have to pay for useless calls every second, that is stupid. If it were a very niche use case I’d agree with you, but this is something lots of dApps need. I want to use the IC to build new stuff, not reinvent the wheel.
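The waste being complained about here is easy to quantify. With only a raw heartbeat, the standard workaround for a daily job is a timestamp check inside the heartbeat handler, so the canister still pays for 86,399 no-op invocations per day. A small simulation (assumed pattern, not an IC API):

```python
SECONDS_PER_DAY = 86_400
JOB_INTERVAL = SECONDS_PER_DAY   # the job should run once every 24 hours

paid_invocations = 0
work_done = 0
last_run = -JOB_INTERVAL         # so the job fires on the first heartbeat

for now in range(SECONDS_PER_DAY):    # one simulated heartbeat per second
    paid_invocations += 1             # every heartbeat costs cycles...
    if now - last_run >= JOB_INTERVAL:
        work_done += 1                # ...but the real work rarely fires
        last_run = now

print(paid_invocations, work_done)  # 86400 1
```

One unit of useful work, 86,400 paid invocations; a protocol-level custom interval would charge for the one invocation that matters.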

So who’s gonna pay for keeping track of the schedules?

Ideally nobody; it should be part of the protocol. Wouldn’t it be better, performance-wise, to have a system-level canister (or more than one; I’m not sure about the scalability requirements) take care of scheduling, instead of having hundreds of user-level ones wasting CPU cycles every second for no reason?
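One way a single scheduler can serve many canisters without per-second polling is a min-heap of due timestamps: each tick only touches the entries that are actually due. Everything below (class name, `register`/`tick` shape, integer timestamps) is a hypothetical sketch, not a proposed IC interface:

```python
import heapq

class Scheduler:
    """Minimal interval scheduler: a min-heap of (due_time, canister_id, interval)."""

    def __init__(self):
        self._heap = []

    def register(self, canister_id, interval, now=0):
        # Arm a repeating timer; first firing is `interval` after `now`.
        heapq.heappush(self._heap, (now + interval, canister_id, interval))

    def tick(self, now):
        # Pop every entry whose due time has passed and re-arm it.
        due = []
        while self._heap and self._heap[0][0] <= now:
            when, cid, interval = heapq.heappop(self._heap)
            due.append(cid)
            heapq.heappush(self._heap, (when + interval, cid, interval))
        return due

s = Scheduler()
s.register("a", 5)
s.register("b", 10)
print(s.tick(4))   # []        — nothing due yet
print(s.tick(5))   # ['a']     — "a" fires and is re-armed for t=10
print(s.tick(10))  # ['a', 'b']
```

The point of the design is that cost scales with the number of *due* jobs per tick, not with the number of registered canisters.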

I think a lot of people agree that a better solution needs to be available for working with heartbeat functionality. The difference in opinion seems to be about who gets to design, build, and maintain that canister. Is it dfinity, through developing a system-level canister as you say? Or is it the community, with open-sourced, blackholed services that can be audited?

There are pros and cons to both approaches, IMO. The system one would be easier for devs, but it would take resources, it would take time, and it would probably be a single-approach system. On the other hand, if people come up with many variations, publish them on GitHub, license them permissively, and blackhole the canisters, it will probably be faster to test, reasonably “safe”, with many possible standards, etc. Eventually one standard could evolve from the many, and dfinity could “adopt” it, either directly or through a perpetual grant or whatever.

I agree, but I’d like progress to be a bit quicker, because while we discuss what the better approach is, who’s going to do it, and how it’s going to be funded, there are devs who need the feature and either have to pay more than they actually use or wait for a solution to be released, whenever that happens.
I just want to avoid another “token standard” scenario: the community had a year to discuss it, and we all know what happened with that. Dfinity had to step in to somehow figure out the mess it had become, and as a result the whole ecosystem suffered.

If your goal is quick progress at the expense of the best solution, that is a goal built for community-made canisters: those can be iterated upon and refined. But to embed something in the protocol is to make a serious support commitment; it has to be the version developers can use for everything and implementors can support forever. ‘Move fast and break things’ doesn’t work so well in that context. The ‘mess’ of the token standard, as you put it, is primarily due to several mistakes in existing standards - mistakes dfinity is very capable of making itself - and if we had officially centralized on a standard with those mistakes a year ago, we’d never have heard the end of it.

When I talk about quick progress I don’t mean I want the feature to be delivered tomorrow. What I’d like is active conversation and a general idea of how the issue is going to be solved; if the ETA to do it right is 6 months, so be it - at least I’d know that in 6 months I’ll have a solution.

What I see instead is a very common use case that has been ignored for over a year and hasn’t made much progress even at the concept stage, let alone implementation. This thread is almost 2 months old, and it was inactive for 12 days until I posted, yet so far we still have no idea what’s going to happen.
