Subnets with heavy compute load: what can you do now & next steps

No, those are successful Web2 projects, hosted on AWS, Azure, GCP, … - and those cloud providers don’t let them go down even for a day. And they don’t kick them out when there’s a problem :saluting_face:

I agree that we should classify this as “degraded performance”, thanks for the suggestion.

2 Likes

I think you misunderstand the problem; they aren’t comparable. If GitHub goes down, it causes problems for other projects that rely on it. But GitHub doesn’t cause an issue for another project hosted by the same cloud provider when GitHub has a problem, because they’re hosted on separate VPS or compute instances. The two are completely different scenarios.

It’s more comparable to talk about what happens on a shared VPS.

This is about principles, not technicalities. It’s not a kids’ playground, and it’s not ‘everyone has a right’ - it’s a business. You don’t kick a successful client away, and you don’t cripple them. They pay for the services, so you support them as much as you can.

1 Like

This is about technicalities, though. If all you want is broad statements, why are we bothering to talk about scaling issues at all?

I agree, it’s not a kids’ playground. You’re missing the point I’m trying to make: NOBODY IS KICKING YRAL AWAY. The idea is to MOVE THEM to a DIFFERENT SUBNET. It’s really no different from your hosting provider saying “Hey, we’re going to discontinue some of our old servers by (some date), can you please migrate to our new servers”, albeit with a different underlying reason.

Perhaps I could have worded it a little differently, but it’s an idea in progress and it’s open to suggestions. And if you read two sentences below, it clearly states that Yral has the option of deploying to their own subnet. You’re also ignoring the posts further down where I suggest that Yral be offered compensation to move. Also, here is a link to a proposal topic on the same idea: Proposal idea : short term fix to the scaling issue - #3 by frederico02

I think that concludes our discussion.

Sorry for this useless comment after the discussion has concluded; please feel free to report it, but this is :sweat_smile: :joy: :rofl:

The point remains: you care more about the language and the precise wording I used than about the idea.

There is no point in blaming each other. I believe our goal is common: fix scaling and prevent DDoS on ICP. Yral’s success just exposed the problems; moving Yral anywhere won’t fix the core issues.

1 Like

Well said, we both agree we just want what’s best :+1:

1 Like

Yes, support. I’ve been feeling frustrated recently with failed transactions.

We talk about “100% on-chain” and “Chain Fusion” every day, but without keeping the system stable it’s such a joke.

Quick note on DFINITY’s vote on motion proposal 133388: DFINITY voted to adopt this proposal. While the proposal is not super precise and in certain places a bit misleading, DFINITY does agree that certain types of workloads are currently not fairly charged based on the load they incur on the system. DFINITY is in favor of revisiting the price of such workloads.

9 Likes

I also voted yes.

Costs drive behavior, and the incentive must be to design dApps with the best architecture possible.

I do hope that DFINITY will provide guidance on the best possible architecture that minimizes subnet impact and costs.

For example, the on-chain AI app I am building could become a subnet clogger too, and I want to make the best possible choices. Should each user get their own AI inference canister, or is it better to have shared AI inference canisters?

3 Likes

Have you considered requiring a fee for spinning up a canister, e.g., a 5 ICP deposit per canister?

I guess this would effectively stop the creation of tons of canisters that block the subnet for everyone.

1 Like

There is a canister creation fee; it’s currently configured to 0.1T cycles. But I agree that we should perhaps explore increasing it.
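For illustration, a minimal sketch of creating a canister from Rust with cycles attached. This assumes a recent ic-cdk (roughly 0.11+), where `create_canister` takes an explicit cycles argument; the fee constant matches the 0.1T figure above, and the starting balance is just an example value:

```rust
use ic_cdk::api::management_canister::main::{
    create_canister, CanisterIdRecord, CreateCanisterArgument,
};
use ic_cdk::update;

// 0.1T cycles: the creation fee mentioned above. It is deducted from the
// cycles attached to the call; the remainder becomes the new canister's balance.
const CREATION_FEE: u128 = 100_000_000_000;
// Example starting balance for the new canister (1T cycles); pick your own.
const STARTING_BALANCE: u128 = 1_000_000_000_000;

#[update]
async fn spawn_canister() -> Result<CanisterIdRecord, String> {
    // Attach enough cycles to cover the creation fee plus the initial balance.
    create_canister(
        CreateCanisterArgument { settings: None },
        CREATION_FEE + STARTING_BALANCE,
    )
    .await
    .map(|(record,)| record)
    .map_err(|(code, msg)| format!("create_canister failed: {:?} {}", code, msg))
}
```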

4 Likes
  1. How do I set up compute allocation for SNS canisters that are under the control of the SNS root canister?
  2. Is there any way to pre-compute the additional cycles required to set a canister’s compute allocation to 1 (or any other value)? (A rough estimate sketch follows below.)
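For the second question, here is a back-of-the-envelope sketch I’ve been using. The 10M cycles per allocation-percent per second rate is the figure listed in the IC pricing docs for a 13-node subnet, and the linear scaling with node count is an assumption; verify both against the current fee table before relying on them:

```rust
// Assumed rate from the IC pricing docs: 10M cycles per compute-allocation
// percent per second, on a 13-node subnet. Fees are assumed to scale
// linearly with subnet size -- check the current fee table.
const RATE_PER_PERCENT_PER_SEC_13_NODES: u128 = 10_000_000;

/// Rough cycles needed to hold `percent` compute allocation for `seconds`
/// on a subnet with `subnet_nodes` nodes.
fn allocation_cost(percent: u128, seconds: u128, subnet_nodes: u128) -> u128 {
    percent * seconds * RATE_PER_PERCENT_PER_SEC_13_NODES * subnet_nodes / 13
}

fn main() {
    // e.g. compute_allocation = 1 for 30 days on a 13-node subnet:
    let cost = allocation_cost(1, 30 * 24 * 60 * 60, 13);
    println!("~{} cycles (~{:.2}T)", cost, cost as f64 / 1e12);
}
```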

@Manu, sorry, I didn’t want to disturb you, but you are active at the moment, so I thought I could get these questions answered quickly.

Hi @h1teshtr1path1,

The first part of your question:

  1. How do I set up compute allocation for SNS canisters that are under the control of the SNS root canister?

should be covered by this part of the documentation: SNS proposals | Internet Computer

In brief, one can submit a ManageDappCanisterSettings proposal to specify the new value for compute_allocation. The semantics of this field, from the interface spec, are as follows:

compute_allocation (nat)

Must be a number between 0 and 100, inclusively. It indicates how much compute power should be guaranteed to this canister, expressed as a percentage of the maximum compute power that a single canister can allocate. If the IC cannot provide the requested allocation, for example because it is oversubscribed, the call will be rejected.
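For a canister you control directly (rather than via an SNS, where the ManageDappCanisterSettings proposal described above is the route), the same field can be set through the management canister’s update_settings. A minimal sketch, assuming a recent ic-cdk where these types live under api::management_canister::main and CanisterSettings derives Default:

```rust
use candid::{Nat, Principal};
use ic_cdk::api::management_canister::main::{
    update_settings, CanisterSettings, UpdateSettingsArgument,
};

/// Request a guaranteed 1% compute allocation for `canister_id`.
/// The caller must be a controller of the target canister, and the call
/// is rejected if the subnet cannot provide the allocation (oversubscribed),
/// matching the spec text quoted above.
async fn guarantee_one_percent(canister_id: Principal) -> Result<(), String> {
    let arg = UpdateSettingsArgument {
        canister_id,
        settings: CanisterSettings {
            // Valid range is 0..=100; 1 means 1% of the maximum compute
            // power a single canister can allocate.
            compute_allocation: Some(Nat::from(1u8)),
            // Leave all other settings unchanged.
            ..Default::default()
        },
    };
    update_settings(arg)
        .await
        .map_err(|(code, msg)| format!("update_settings rejected: {:?} {}", code, msg))
}
```

Note that holding a nonzero allocation incurs the ongoing per-second fee sketched earlier in the thread, so the canister needs enough cycles to sustain it.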

2 Likes