The reason for my assumption was that Dom and others have suggested people move from web2 clouds to the IC, so competing on cost becomes a criterion for discussion. But I understand building on the IC might not be for everyone, especially those competing on price.
The IC has different use cases, and imo it isn’t going to be as widely adopted as web2 clouds.
It depends on how you measure. This table certainly forgets about storage cost, where AWS would be cheaper than ICP. I also wonder how they compute OUT cost. If you do it via queries, it’s free right now. If you do it via update calls, you’ll incur some cost, but not much.
Awesome, thank you! I’m sure a bunch of DeAI projects (and many others of course) will be super thrilled about this news
Actually, if you find the time, could you speak to any other benefits (besides increased heap memory) that support for Wasm64 will bring? That question came up in the DeAI group the other day, and we thought it would be interesting to know more
I guess some compilers might be able to generate faster code for Wasm64 if they optimize for 64-bit.
On the Wasm engine side, I should mention that we expect memory accesses to be slower compared to 32-bit. That’s because on Wasm32, the Wasm engine can use page protection to elide bounds checks on accesses. On 64-bit this optimization doesn’t work, so there will be explicit bounds checks.
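To illustrate (this is my own conceptual sketch, not actual engine code): on Wasm32 the engine can reserve the full 4 GiB address range plus a guard region, so any 32-bit offset either lands in valid memory or faults safely on a guard page, whereas on Wasm64 each access conceptually needs a check like this:

```rust
// Conceptual model of the explicit bounds check a Wasm engine must emit
// for 64-bit linear-memory loads (names and shape are illustrative only).
fn wasm64_load_u8(memory: &[u8], addr: u64) -> u8 {
    // On Wasm32 this comparison can be elided: the engine reserves the
    // full 4 GiB range plus a guard region, so an out-of-bounds 32-bit
    // offset hits an unmapped guard page and traps via the page fault.
    if addr as usize >= memory.len() {
        panic!("wasm trap: out-of-bounds memory access");
    }
    memory[addr as usize]
}
```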
Do you have an example code that compiles to Wasm64 that we could use for benchmarking?
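Not an official benchmark, but as a starting point, something like the following memory-access-heavy kernel should work (plain Rust; the function name and setup are my own):

```rust
// Memory-access-heavy kernel for comparing Wasm32 vs Wasm64 builds:
// each iteration performs a linear-memory load that the engine must
// bounds-check on Wasm64.
#[no_mangle]
pub unsafe extern "C" fn byte_sum(ptr: *const u8, len: usize) -> u64 {
    let data = std::slice::from_raw_parts(ptr, len);
    data.iter().map(|&b| b as u64).sum()
}
```

For the Wasm64 build, my understanding is that `wasm64-unknown-unknown` is a tier-3 Rust target, so it would need a nightly toolchain with `-Zbuild-std`; I haven’t verified the exact flags, so treat that as an assumption.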
@ulan, I’m curious: since these differences in the engine are not directly observable to programs running on the IC, do you intend this to translate to different cycle costs for 64-bit memory accesses?
If the performance difference is large enough, then we will have to reflect that in cycle costs to follow the general principle that the cycle costs should be close to the real costs.
Pushing this back up, as AI on the IC is getting a lot of excitement. This would need to be done even before, or while, making GPU subnets.
“I wonder what the plan is to increase heap memory. With DeAI, which I’ve been eyeing, the LLM upload seems to hang due to lack of heap memory.
To increase heap memory beyond 4GB would require a change to 64-bit Wasm, which would be a daunting task, but a plan should be in place.”
Related question to the new 400GB stable memory limit: what is the memory limit of a subnet? How many canisters can live on the same subnet that each utilise the max stable memory?
So… what happens when you are the second canister that tries to use the limit? Unfortunately, I might have to start explaining this very differently: in practice, it doesn’t seem like each canister can have access to 400 GiB of storage. It’s more like 36 canisters could do this right now, assuming they’re all deployed to different subnets.
Controllers can opt out by setting reserved_cycles_limit to zero. Such opted-out canisters would not be able to allocate from the newly added 250 GiB, which means that these canisters will trap if they try to allocate storage when the subnet usage grows above 450 GiB.
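To make the arithmetic concrete, here’s a toy model of that rule (my own illustration, not IC code), assuming a total subnet capacity of 700 GiB, i.e. the pre-existing 450 GiB plus the newly added 250 GiB:

```rust
const GIB: u64 = 1 << 30;
// Storage usable without cycle reservation (the pre-existing limit).
const OPT_OUT_CAP: u64 = 450 * GIB;
// Assumed total capacity: 450 GiB + the newly added 250 GiB.
const SUBNET_CAP: u64 = 700 * GIB;

/// Toy model: does an allocation of `bytes` succeed, given current subnet
/// usage and whether the canister opted out (reserved_cycles_limit = 0)?
/// Opted-out canisters trap once usage would exceed 450 GiB; canisters
/// that reserve cycles can draw on the full capacity.
fn can_allocate(subnet_usage: u64, bytes: u64, opted_out: bool) -> bool {
    let cap = if opted_out { OPT_OUT_CAP } else { SUBNET_CAP };
    subnet_usage + bytes <= cap
}
```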