It should be back online @skilesare. We’ve been dealing with deployment issues for a few weeks, apologies for that.
The first problem started with a replica issue (now resolved), which caused some failed deploys and drained cycles in the process. On top of that, the documentation site has grown to over 29k assets. That's not a problem for the Satellite itself (it's hosted with Juno), since assets are stored and served directly from stable memory, but it becomes one when everything needs to be certified, as the process hits execution limits.
I recently resolved this by exposing a utility that computes the certification tree iteratively. I still needed a script to drive it, which I wrote and ran just now.
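To illustrate the idea of iterative certification, here is a minimal sketch in Rust. All names (`IterativeCertifier`, `ingest_batch`) are hypothetical and do not reflect Juno's actual API; the point is only that leaf hashes can be accumulated in bounded batches across several calls instead of hashing all ~29k assets in one go, which is what hits the execution limit.

```rust
// Hypothetical sketch: split certification work into batches so each
// call stays under the instruction limit. Uses std's DefaultHasher as a
// stand-in for a real cryptographic hash; a canister would use SHA-256.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn leaf_hash(key: &str, content: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    content.hash(&mut h);
    h.finish()
}

/// Accumulates leaf hashes batch by batch; `root()` folds them into a
/// single digest once every batch has been ingested.
struct IterativeCertifier {
    leaves: Vec<u64>,
}

impl IterativeCertifier {
    fn new() -> Self {
        Self { leaves: Vec::new() }
    }

    /// One step of the iterative computation: certify a batch of assets.
    /// In a canister this would be one update call.
    fn ingest_batch(&mut self, batch: &[(&str, &[u8])]) {
        for (key, content) in batch {
            self.leaves.push(leaf_hash(key, content));
        }
    }

    fn root(&self) -> u64 {
        let mut h = DefaultHasher::new();
        for leaf in &self.leaves {
            leaf.hash(&mut h);
        }
        h.finish()
    }
}

fn main() {
    // Simulate a large doc site certified in batches of 1_000 assets.
    let assets: Vec<(String, Vec<u8>)> = (0..29_000)
        .map(|i| (format!("/docs/page-{i}.html"), vec![i as u8]))
        .collect();

    let mut cert = IterativeCertifier::new();
    for chunk in assets.chunks(1_000) {
        let batch: Vec<(&str, &[u8])> = chunk
            .iter()
            .map(|(k, c)| (k.as_str(), c.as_slice()))
            .collect();
        cert.ingest_batch(&batch);
    }
    println!("root = {:x}", cert.root());
}
```

The useful property is that the batching is transparent: ingesting the same assets in one batch or in thirty yields the same root, so the work can be spread across calls freely.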
I’m not aware of any other project holding this many assets in a canister, so part of me wonders whether a single canister is really the right fit for a dataset this size, and the other part of me wonders whether there isn’t something sub-optimal in my implementation for such a scale. I wish I had more time and help to dig deeper into it.
Anyways, the team is aware and this will be addressed properly.
As for CycleOps @quint, I think everyone just assumed it was already monitoring that canister. It turns out it wasn't, and @marc0olo only noticed it today. That's on the TODO list now.
That is a lot of certification to do at one time. I'd imagine if you have some dynamic field in every page, you have to do the whole thing every time. But if most of it doesn't change that much, can you stick the trie in stable memory and only update the branches that change? I've had to battle something similar in Motoko and found some nice solutions, but I'm not sure about Rust.
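The incremental-update idea can be sketched in Rust. This is a toy two-level tree (root → directory → file hashes), not the IC's actual hash-tree format, and the names are made up: the point is that each branch caches its hash, so changing one file recomputes only that branch and the root rather than rehashing every leaf.

```rust
// Toy incremental certification tree: updating one file only touches
// its directory's cached branch hash and the root. DefaultHasher is a
// stand-in for a real cryptographic hash.
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(value: &T) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

struct CertTree {
    /// dir -> (file -> leaf hash)
    dirs: BTreeMap<String, BTreeMap<String, u64>>,
    /// Cached branch hash per directory; untouched dirs keep theirs.
    dir_hashes: BTreeMap<String, u64>,
}

impl CertTree {
    fn new() -> Self {
        Self { dirs: BTreeMap::new(), dir_hashes: BTreeMap::new() }
    }

    fn upsert(&mut self, dir: &str, file: &str, content: &[u8]) {
        let files = self.dirs.entry(dir.to_string()).or_default();
        files.insert(file.to_string(), hash_of(&(file, content)));
        // Recompute only the changed branch, not the whole tree.
        let branch = hash_of(&files.iter().collect::<Vec<_>>());
        self.dir_hashes.insert(dir.to_string(), branch);
    }

    fn root(&self) -> u64 {
        hash_of(&self.dir_hashes.iter().collect::<Vec<_>>())
    }
}
```

With deeper nesting the same caching applies along the whole path from the changed leaf to the root, which is exactly the "only update the branches that change" behavior described above; in a canister the cached hashes would live in stable memory.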
Indeed, that’s the behavior. However each build leads to lots of changes currently even though it is reproducible. It definitely needs further investigation.