Just checked. I’m on a different subnet from the one you sent me.
Ah, got it. Thanks for sharing. Might be worth bringing this up with the relevant people. I'm not familiar with the details of the playground, but it sounds like it's a limitation for relatively common use cases (dynamically creating canisters).
@chenyan Any insight into what the blocker would be to allowing such use cases in the Motoko playground?
To avoid cycle stealing, we disabled the call_cycles_add system API. As a result, a canister running in the playground cannot dynamically create other canisters, because it has no way to transfer cycles to them. To support this, the playground backend would need to somehow keep track of the created canisters and claim the cycles back once a canister's TTL expires.
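For anyone wondering what pattern this restriction blocks, here is a minimal Motoko sketch. The Child actor class and the cycle amount are made up for illustration, and on newer compiler versions Cycles.add needs the system capability, i.e. Cycles.add<system>(...):

```motoko
import Cycles "mo:base/ExperimentalCycles";
import Principal "mo:base/Principal";
import Child "Child"; // hypothetical Child.mo defining: actor class Child() { ... }

actor Factory {
  // Attach cycles to the next inter-canister call, then instantiate the
  // actor class, which installs it on a freshly created canister.
  // Cycles.add relies on the call_cycles_add system API, so inside the
  // playground this pattern would fail.
  public func spawn() : async Principal {
    Cycles.add(1_000_000_000_000); // roughly 1T cycles to fund the new canister
    let child = await Child.Child();
    Principal.fromActor(child)
  };
}
```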
@chenyan Do you have any insight regarding a solution to getting the canister to stop?
@Jesse Another option for you might be to reinstall your canister (so essentially use mode=reinstall with dfx). It should allow you to reinstall a running canister even with outstanding callbacks, although keep in mind that I'm not 100% sure it'll work; I thought I'd mention the option anyway. Of course, this means you'll lose any stable memory data of your canister, so it's still not perfect, but I'm afraid we don't have much better options to offer.
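For reference, and assuming a reasonably recent dfx, the reinstall would look something like dfx canister install <canister_name> --mode reinstall --network ic, where the canister name and network flag are placeholders for your setup. As noted above, this wipes the canister's code and state.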
I think I found the issue. When I deployed my upgrade, I ran dfx deploy --network ic on the backend canister, since it was the backend canister that I had upgraded. That, however, was the wrong canister to deploy. To resolve the issue, I had to revert my backend code to what it was before I made any changes and deploy it. Then I made the changes to my backend code again and ran dfx deploy --network ic on the frontend canister. That performed the update on both the backend canister and the frontend canister. I'm not sure why upgrading the backend canister alone caused such an issue, but it's apparently a no-no.
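One possible explanation, just a guess based on how dfx deploy behaves: if the frontend canister lists the backend as a dependency in dfx.json, then dfx deploy <frontend_canister> --network ic deploys the backend as well, which would explain why deploying the frontend updated both, while deploying the backend alone left the frontend out of sync.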