As part of my tests, I forked the example repo provided and deployed the frontend and backend to subnet mpubz-g52jc-grhjo-5oze5-qcj74-sex34-omprz-ivnsm-qvvhr-rfzpv-vae, an application subnet that is not the European subnet.
I was able to call the update method within a few seconds.
I confirmed while testing that the same update method stalls on the frontend hengx-riaaa-aaaas-ajw5a-cai with the backend hnonl-haaaa-aaaas-ajw4q-cai, both of which are on the European subnet.
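For anyone who wants to reproduce the comparison, here is a minimal sketch of the timing check using agent-js. The method name `save_text` and its signature are placeholders, since the example repo's actual interface isn't shown in this thread; substitute your own canister ID and method.

```typescript
import { Actor, HttpAgent } from "@dfinity/agent";

// Candid interface for the test canister; `save_text` is a placeholder name,
// since the example repo's actual update method isn't named in this thread.
const idlFactory = ({ IDL }: any) =>
  IDL.Service({ save_text: IDL.Func([IDL.Text], [], []) });

async function timeUpdateCall(canisterId: string) {
  const agent = new HttpAgent({ host: "https://icp-api.io" });
  const actor: any = Actor.createActor(idlFactory, { agent, canisterId });

  const start = Date.now();
  await actor.save_text("latency probe"); // update call: goes through consensus
  console.log(`${canisterId}: update call took ${(Date.now() - start) / 1000}s`);
}

// Compare the European-subnet backend against a backend on another subnet.
timeUpdateCall("hnonl-haaaa-aaaas-ajw4q-cai").catch(console.error);
```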
I’ll circle back with the team and let you know what solution we can find here.
Thank you for your effort, but I don’t think that’s good news. It would mean that any subnet handling more than, say, 100–200 transactions per second is no longer usable for update calls without paying for compute allocation. And compute allocation could be very expensive for the developer; it is also a bit unfair, because the cost depends only on the subnet.
In general, switching a project to another subnet is also not easy once it holds data. Is there a straightforward way to switch subnets while keeping the data stored in the canister?
I don’t want to imagine what it would mean if the IC gets even more usage than it does now; hopping between subnets would become a common task.
Have we hit a dangerous limitation of the IC here?
In my case it is all about GDPR-compliant applications. I would like to link the following post and articles about GDPR and ask how they align with the current situation:
Don’t get me wrong, but as of now you can’t practically call the IC GDPR compliant if the costs force you to rely on additional compute allocation. A 1% allocation costs approximately $35 per month.
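For transparency, here is the back-of-the-envelope calculation behind that figure. It assumes the published 13-node subnet fee of 10M cycles per 1% of compute allocation per second and 1T cycles ≈ 1 XDR ≈ $1.33; please verify these constants against the current fee table.

```typescript
// Back-of-the-envelope check of the ~$35/month figure for a 1% allocation.
// Assumed constants, taken from the public fee table for a 13-node subnet;
// verify them against the current docs before relying on the result.
const CYCLES_PER_PERCENT_PER_SECOND = 10_000_000; // compute allocation fee
const USD_PER_TRILLION_CYCLES = 1.33;             // 1T cycles ≈ 1 XDR ≈ $1.33

const secondsPerMonth = 60 * 60 * 24 * 30;                              // 2,592,000
const cyclesPerMonth = CYCLES_PER_PERCENT_PER_SECOND * secondsPerMonth; // 2.592e13
const usdPerMonth = (cyclesPerMonth / 1e12) * USD_PER_TRILLION_CYCLES;

console.log(`1% compute allocation ≈ $${usdPerMonth.toFixed(2)}/month`); // ≈ $34.47
```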
My hope is that there will be a better solution than the ones you have mentioned.
You are correct that re-deploying to a new subnet would require a new canister. Therefore, the state would be lost. I will check if there is a better way to migrate state over.
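To make the manual path concrete, a hand-rolled migration could look roughly like the sketch below. The `export_chunk` / `import_chunk` methods are hypothetical endpoints that would have to be added to both canisters, and writes would need to be paused during the copy.

```typescript
// Sketch of a manual state migration between canisters on different subnets.
// Hypothetical: assumes the old canister exposes `export_chunk(index)` and the
// new one `import_chunk(index, data)`; neither exists unless you add them.
import { Actor, HttpAgent } from "@dfinity/agent";

const idlFactory = ({ IDL }: any) =>
  IDL.Service({
    export_chunk: IDL.Func([IDL.Nat64], [IDL.Opt(IDL.Vec(IDL.Nat8))], ["query"]),
    import_chunk: IDL.Func([IDL.Nat64, IDL.Vec(IDL.Nat8)], [], []),
  });

async function migrate(oldId: string, newId: string) {
  const agent = new HttpAgent({ host: "https://icp-api.io" });
  const oldActor: any = Actor.createActor(idlFactory, { agent, canisterId: oldId });
  const newActor: any = Actor.createActor(idlFactory, { agent, canisterId: newId });

  for (let i = 0n; ; i++) {
    const chunk = (await oldActor.export_chunk(i)) as [] | [Uint8Array];
    if (chunk.length === 0) break;            // no more chunks to copy
    await newActor.import_chunk(i, chunk[0]); // write chunk into the new canister
  }
}

// usage: migrate("<old-canister-id>", "<new-canister-id>").catch(console.error);
```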
Thank you for the feedback and for outlining the pain points. I understand how frustrating this can be. We are reviewing all of the feedback and looking for next steps to improve.
Hey @jennifertran, I wanted to keep this thread active and ask: is there any new information on this topic? Transactions per second are now below 100 TX/s, and even storing a text isn’t possible anymore.
Interested in this as well. It would be nice to see some kind of follow-up here, either putting the concerns about systemic limitations to rest or acknowledging that the current solution is not acceptable, together with a plan to address it.
Hey folks,
We’ve aligned on short-, mid-, and long-term plans. I’ll provide more details soon. The short-term mitigation is currently being implemented and will be rolled out next week.
Hey @jennifertran, thank you very much for keeping me updated. That’s great news, and I hope the short-term changes will restore the previous stability.