Two days ago, a canister hosted on this subnet had 41 T cycles. Today, the canister was frozen due to an insufficient cycles balance. I checked the Internet Computer dashboard and found that the subnet's cycles burn rate spiked significantly for about two hours, and over that window the cycles balance of the canister running on this subnet was drained. The canister that was drained completely has a timer method that fires once every 15 minutes.
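For context, the timer in question is set up roughly like this (a simplified sketch, not the actual code; the crates and the `do_periodic_work` placeholder are just illustrative):

```rust
use std::time::Duration;

// Simplified sketch of the periodic timer described above (illustrative only;
// the real canister's logic differs). Assumes the ic-cdk and ic-cdk-timers crates.

#[ic_cdk::init]
fn init() {
    start_timer();
}

#[ic_cdk::post_upgrade]
fn post_upgrade() {
    // Timers do not survive upgrades, so they are re-registered here.
    start_timer();
}

fn start_timer() {
    // Fire every 15 minutes; each tick runs as an update call and burns cycles.
    ic_cdk_timers::set_timer_interval(Duration::from_secs(15 * 60), || {
        do_periodic_work(); // hypothetical placeholder for the periodic work
    });
}

fn do_periodic_work() {
    // ... the actual workload goes here ...
}
```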
From the dashboard data, I surmise that the cost of timer executions was increased for roughly two hours yesterday. Is this the case? If so, why? If not, what caused this burn rate to spike so significantly without warning? And is this going to be a regular occurrence?
Sporadic fluctuations in computation costs, such as the one that occurred yesterday, make it difficult to build sustainable solutions on this subnet.
Perhaps this will be followed by another huge spike in memory usage (which previously surpassed subnet limits and required numerous proposals to compensate). That happened on this subnet (nl6hn, among a few others) a month or so ago; more details are in this proposal review thread.
We have:
16_336 used canisters,
1_841 empty hot canisters awaiting new user signups, and
6_348 backup blank canisters with no WASM module installed
on that subnet.
Where is the code, specifically? Where can I verify the canister WASM hash? Please just tell me which repo is used for the canisters on that subnet and I’ll work it out myself.
Also, how many canisters do you have across all subnets, and what percentage of those are tied to actual users? Bear in mind you have an audience with a short attention span and an acquisition funnel where 90% will drop off within seconds (you told me that yourselves).
Thank you for the feedback.
We’re evaluating other mechanisms to make this work. It’s all part of our effort to build a backend that can scale effortlessly to millions of users.
There was no change to fees from our side. When we make such changes, we announce them in advance, and they certainly would not be applied for a short period and then rolled back again.
“If not, what caused this burn rate to spike so significantly without warning?”
The burn rate you point out is the total burn rate of the subnet. Other canisters on the subnet burning lots of cycles could cause such a spike (e.g., if DOLR canisters were being upgraded; our metrics indicate a large number of install_code requests around that time).
I would recommend you double-check what your canister is doing in its timer, or whether it received a lot of ingress messages, which would also burn through cycles. I don’t believe your canister running out of cycles was related to the spike; it seems coincidental to me.
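For example, something like this (a rough sketch assuming a Rust canister built with ic-cdk; adapt it to your setup) would show how heavy each tick is and how fast the balance drops between ticks:

```rust
// Diagnostic sketch: log instruction usage and the cycle balance at every timer tick.
// Comparing successive ticks shows whether the timer itself accounts for the drain.
fn periodic_task() {
    // ... the canister's actual periodic work ...

    // performance_counter(0) = Wasm instructions executed so far in this call.
    let instructions = ic_cdk::api::performance_counter(0);
    let balance = ic_cdk::api::canister_balance128();

    // debug_print output ends up in the canister/replica logs.
    ic_cdk::println!(
        "timer tick: ~{} instructions this tick, cycle balance now {}",
        instructions,
        balance
    );
}
```

If the tick-to-tick instruction counts stay flat while the balance drops sharply only during the spike window, the drain came from somewhere other than the timer (e.g., ingress traffic).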
You are intentionally filling subnets with canisters under the guise of “one canister per user” for an app that has no users. This is a lie. Your goal is to make it unpredictable and frustrating for other developers to build on the IC.
THIS is why you’re working with George Bassadone.
THIS is why you’ve set up multiple SNSs to drain the ecosystem.
You’re part of a larger group trying to destroy the IC and you’ve just failed.
Now why don’t you make like a tree and get out of here.
@dsarlis I’m going to double-check today, but I currently have the same canister running on multiple subnets. The subnet linked in the OP is the only one where the canister is draining overnight.
Is it the case that this subnet is more expensive than others?
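If I understand correctly, cycle prices scale with the subnet's replication factor: the fee table is defined relative to 13-node subnets, and a subnet with more nodes charges proportionally more for the same work, so the same timer can cost noticeably more on a larger subnet. Roughly (the fee value below is a hypothetical placeholder, not the real fee table):

```rust
// Rough sketch of per-subnet cost scaling on the IC: baseline fees are defined
// for 13-node subnets and scale linearly with the subnet's node count.
const BASELINE_NODES: u128 = 13;

fn scaled_fee(base_fee_on_13_node_subnet: u128, subnet_nodes: u128) -> u128 {
    base_fee_on_13_node_subnet * subnet_nodes / BASELINE_NODES
}

fn main() {
    let base_fee: u128 = 1_000_000; // hypothetical per-execution fee on a 13-node subnet
    println!("13 nodes: {}", scaled_fee(base_fee, 13)); // 1_000_000
    println!("34 nodes: {}", scaled_fee(base_fee, 34)); // 2_615_384, ~2.6x the baseline
}
```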