Questions on Technical Restraints

:megaphone: An Honest Ask: Dfinity, Please Address These Roadblocks with Real Transparency

Hello team,

This post is not meant to provoke, troll, or spread FUD. I’m posting this in the spirit of Dfinity’s own stated values: transparency, decentralization, and technical clarity.

I’m a mid-level developer who wants to build on the Internet Computer. I respect the ambition behind Motoko and the IC — but I’m also seeing a growing pattern: real developers hit a wall, go quiet, or silently migrate off-chain.

This post outlines real, observable roadblocks that make it extremely difficult to ship full-scale, user-facing apps on the Internet Computer. I’m asking Dfinity staff to factually and publicly address these concerns — not with vague roadmaps or marketing boilerplate, but with real technical clarity.


:red_exclamation_mark: The Ask:

Please directly acknowledge and explain how Dfinity is addressing (or not addressing) the following core development and deployment roadblocks.

Each of these has been encountered repeatedly by active developers — publicly and behind the scenes.


:construction: Core Technical Constraints


:brick: Canister Size Limitations

  • Wasm binaries are capped at 4MB for installation, with expanded limits (~300MB) using chunking — still insufficient for large applications.
  • Devs must manually split logic across multiple canisters and orchestrate them.

Why this matters: Complex apps (AI, rich media, composable logic) don’t fit into one canister. Splitting logic requires non-trivial architecture and increases dev overhead.
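To illustrate the overhead, here is a minimal sketch of what that orchestration looks like — the `Shard` interface and its `store` method are made up for illustration:

```motoko
import Principal "mo:base/Principal";

actor Orchestrator {
  // Hypothetical shard interface; in practice each shard is its own canister.
  type Shard = actor { store : (Text) -> async () };

  // Route a document to whichever shard canister should hold it.
  public func save(shardId : Principal, doc : Text) : async () {
    let shard : Shard = actor (Principal.toText(shardId));
    await shard.store(doc); // every hop is a separate update call with its own latency
  };
}
```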


:cross_mark: No Native HTTP Fetch in Motoko

  • Motoko canisters cannot make HTTP requests directly.
  • The System API’s http_request pattern is limited to certified subnets, rate-limited, non-deterministic, and paid.

Why this matters: Fetching external data or APIs (auth, third-party services, analytics) is essential for modern apps. Without this, you’re locked in a silo.
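For reference, the pattern in question is the management canister's `http_request`; a rough Motoko sketch of what calling it involves (types abbreviated, cycle amount illustrative):

```motoko
import Cycles "mo:base/ExperimentalCycles";

actor {
  type HttpResponse = { status : Nat; headers : [{ name : Text; value : Text }]; body : Blob };
  type IC = actor {
    http_request : {
      url : Text;
      max_response_bytes : ?Nat64;
      headers : [{ name : Text; value : Text }];
      body : ?Blob;
      method : { #get; #post; #head };
      transform : ?{ function : shared query ({ response : HttpResponse; context : Blob }) -> async HttpResponse; context : Blob };
    } -> async HttpResponse;
  };
  let ic : IC = actor ("aaaaa-aa"); // the management canister

  public func fetchStatus(url : Text) : async Nat {
    // Outcalls are paid in cycles; the amount depends on subnet size.
    Cycles.add<system>(300_000_000_000); // plain Cycles.add(n) on older moc
    let res = await ic.http_request({
      url = url;
      max_response_bytes = ?1_048_576;
      headers = [];
      body = null;
      method = #get;
      transform = null; // without a transform, replicas must agree byte-for-byte
    });
    res.status
  };
}
```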


:floppy_disk: Stable Memory is Manual

  • No automatic persistence between upgrades.
  • Developers must explicitly handle preupgrade/postupgrade logic and define Stable types.
  • Schema evolution and rollback are left entirely to the dev.

Why this matters: Entry-level devs lose their data. Experienced devs waste hours writing backup logic for every schema update.
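Concretely, the boilerplate in question looks something like this — a minimal sketch of the classic pattern, serializing a heap structure into a stable array around every upgrade:

```motoko
import HashMap "mo:base/HashMap";
import Iter "mo:base/Iter";
import Text "mo:base/Text";

actor {
  // Heap structure: fast, but wiped on upgrade.
  var users = HashMap.HashMap<Text, Nat>(16, Text.equal, Text.hash);

  // Stable mirror: survives upgrades, but must be kept in sync by hand.
  stable var userEntries : [(Text, Nat)] = [];

  system func preupgrade() {
    userEntries := Iter.toArray(users.entries());
  };

  system func postupgrade() {
    users := HashMap.fromIter<Text, Nat>(userEntries.vals(), 16, Text.equal, Text.hash);
    userEntries := [];
  };
}
```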


:turtle: Update Calls Are Slow

  • Query calls are fast but read-only.
  • All writes go through update calls, which take 2–5+ seconds.

Why this matters: Users perceive lag. It breaks the illusion of responsiveness. Real-time UX is nearly impossible without heavy caching or optimistic UI hacks.


:package: No Built-In Query Layer

  • No SQL or key-value index support.
  • All filters, sorts, and pagination must be implemented manually in memory.

Why this matters: Apps with searchable content, feeds, or filters become bottlenecked by poor performance and bloated logic.
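For example, even basic pagination has to be hand-rolled over an in-memory array; a sketch:

```motoko
import Array "mo:base/Array";
import Nat "mo:base/Nat";

actor {
  type Post = { id : Nat; title : Text };
  stable var posts : [Post] = [];

  // Hand-rolled pagination: copy out one page of the full array per query.
  public query func page(offset : Nat, limit : Nat) : async [Post] {
    if (offset >= posts.size()) return [];
    let end = Nat.min(offset + limit, posts.size());
    Array.tabulate<Post>(end - offset, func (i : Nat) : Post { posts[offset + i] })
  };
}
```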


:warning: Cycles Management

  • When a canister runs out of cycles, it silently dies.
  • No native metering or alerting system exists.
  • Developers must build their own gas management tools.

Why this matters: Abandoned apps might just be out of fuel. It’s a bad look and a worse experience.
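About the only built-in primitive is reading your own balance; a minimal sketch of exposing it so an external monitor can poll it (the monitor itself is up to you):

```motoko
import Cycles "mo:base/ExperimentalCycles";

actor {
  // Expose the cycles balance so an off-chain monitor (or another canister)
  // can poll it and alert before the freezing threshold is reached.
  public query func cyclesBalance() : async Nat {
    Cycles.balance()
  };
}
```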


:broken_heart: UX Breakdown

  1. Wallet UX is poor (Internet Identity lacks mobile support and has confusing recovery flows).
  2. No persistent sessions (refresh = logout for many dapps).
  3. Canister-hosted frontends are slow, with no CDN acceleration.
  4. No local caching or state management beyond what you custom-build.

:magnifying_glass_tilted_left: Real-World Examples

  • OpenChat: Strong vision, but suffers from sluggish updates, file upload limits, and slow iteration.
  • DSCVR: Began fully on-chain, now uses off-chain infra for indexing and media storage.
  • IC Drive, InfinitySwap: Shifted to hybrid models for performance and usability reasons.
  • Taggr, ICNS: Stalled due to architectural and performance issues.

This is not speculative. These are observable shifts in architecture, repo commits, and user feedback.


:white_check_mark: What We’d Like from Dfinity

  • A clear, public status report addressing these challenges.
  • Direct answers: Are these limitations intentional, temporary, or avoidable?
  • Transparent tradeoffs: What is realistic on-chain and what isn’t?
  • Honest timelines or reasons why they aren’t on the roadmap.

:folded_hands: Final Note

This is not a complaint thread. I’m not here to argue with power-users, mods, or defenders. I’m here to ask for clarity from the team building the protocol.

Please don’t derail this post with:

  • “You’re not using it right.”
  • “You just don’t understand the paradigm.”
  • “Go build it yourself.”

These are the real limitations hundreds of developers have encountered. They are holding the platform back from delivering on its promise.

If the Internet Computer is truly revolutionary, then let’s be honest about where it is right now — and how it’s evolving.

Thank you for reading,
LostInTheCanister
AKA Clippy von Bytebite

1 Like

Most of those concerns can be addressed in the future, although you’re right, these technical constraints make it difficult to integrate with Web2 or other chains.

Canister size limitations are being fixed with 64-bit Wasm.

As far as I know, orthogonal persistence should fix stable memory and persistence between upgrades. This is very much needed, as it’s a pain in the butt to do manually. Still, there’s just no easy way to do it today — or is there?

Update calls are not that slow; it depends on the subnet and network traffic. But when you do inter-canister calls, the waiting times become a UX nightmare.

CycleOps is great for cycles management, and you can even get email notifications. But I feel you… there’s just no built-in solution for this; you have to hand control of the canister either to CycleOps or to the SNS/NNS.

Most of my work is UX, and I agree with you that Internet Identity is flawed by design: not only do you have to wait to retrieve your keys on-chain, and you don’t own your keys (the subnet owns them), but there’s also the mess of having a different principal/account ID for each app.

Asset canisters are slow because throughput is capped, which makes them awful for apps that require CDN acceleration. All projects working on data streaming have either failed and been buried in the graveyard, or been called out for using Web2 solutions.

This is not complaining; this is a real conversation we should be having to figure this out.

1 Like

What are you actually asking for? The IC has made absolutely huge strides in all of those areas and continues to do so. Why don’t you take a look at the IC repo and start tracking what’s being worked on?

Please directly acknowledge and explain how Dfinity is addressing (or not addressing) the following core development and deployment roadblocks.

I thought my ask was pretty straightforward, but just in case, I’ve quoted it again above. I’d also like the response to come from Dfinity or a core protocol developer.

Please do not start poking at things that may make this post turn into…

2 Likes

Thank you for the reply. And yes, I won’t be building — nor do I expect — any huge commercial-use applications on the ICP blockchain unless these are all addressed and fixed.

This isn’t something devs can idly sit back and track by sifting through repos to “keep an eye on” progress, nor is it something that’s communicated effectively. So far, these limitations are not just little roadblocks we have to wait out. These are huge protocol design flaws that IMO may never actually reach a resolution… How is anyone okay starting to build with a “trust me bro” mentality? I’m not poking at any one person directly, or trying to spread and start FUD. However, it feels like that’s how they want us to develop: purely on “trust me bro,” when most projects haven’t even shipped because of these constraints.

Then add up how much money and time we spend on these projects, only to be met with radio silence and told to trust that they’re making it happen. That’s not going to happen. I do not trust you, bros.

Man, I don’t know what you’ve been developing on the IC recently, but…

CycleOps fixes it.

I’m not sure what type of key-value index support you mean, but I think StableBTree serves as a very solid NoSQL database right now. But yes, it is still slow for certain tasks.

For me, Internet Identity is not a wallet; Oisy, however, is one.

I think this is a dapp issue, not an Internet Identity issue.

2 Likes

An email is not a fix. That’s not meant to poke fun at you or try to shut you down; I truly appreciate the information. However, that’s not at all a viable long-term solution lol. Here, let me check my email every second of the day and never miss a beat so my cycles don’t drain without me reading the email…

I can’t be waiting around for them to speed up to web speed as was promised.

Internet Identity itself is a beast of an implementation. Mobile support at the very least…

I’m not sure what mobile support you need. I use it on my phone weekly, and it still seems normal for logging into other dapps.

1 Like

That’s fantastic for you. I remain firm in my OP’s questions and will choose to listen to crickets chirp or wait for the protocol dev team to engage.

Thanks for trying!

These questions are not for the community to respond to and “defend the ICP blockchain”; they’re questions I expect straightforward answers to from staff.

Again, thank you for your time and input.

1 Like

Do you mind me asking how long you’ve been around this ecosystem for? It’s hard to tell as you’ve made your profile private. This is your only ever post. You may get better luck in terms of responses if you ask specific questions, requesting help with regards to the best way to tackle a specific thing. As it stands your post looks a lot more like a FUD piece than it needs to.

Almost everything you wrote about has seen significant improvements over time, and continues to see significant improvements. The IC is already miles ahead of other networks in terms of many of the features you’ve mentioned, and it just keeps getting better.

Have you come across the Stable Structures library? There’s lots of other stuff taking place in terms of storage improvements, usability and developer experience, etc.

You can follow all these developments and keep track of some of the things that are coming by tuning into the monthly Global R&D Updates.

Rome wasn’t built in a day.

1 Like

Do you mind me asking how long you’ve been around this ecosystem for?

I don’t mind. However, I’m reserving my right to privacy, and will simply say that based on the content of my post, it should be clear I’m not new to the ecosystem. I’m not comfortable sharing identifying details — not because I have something to hide, but because I believe credibility should rest on the merit of ideas, not personal exposure.

If the forum moderators eventually review my post (which was hidden without any clear justification), you’ll notice I directly addressed many of the FUD-related responses I anticipated would appear. I genuinely admire and respect your thoughtful commitment to the IC. That said, as mentioned earlier, I’m not seeking commentary on whether my post “feels like” FUD — or any other label often used to deflect from valid concerns.

I’ve been forthright about what I’m asking. While I appreciate links to outside resources — and acknowledge that some may be helpful — redirecting the discussion to third-party content doesn’t speak to the protocol-level concerns being raised here.


On tone:

I want to take a moment to reflect on the tone of your reply — not to stir conflict, but to highlight how language shapes dialogue. The tone of your response feels defensive, mildly condescending, and subtly gatekeeping, though couched in polite phrasing. It’s not hostile — but it’s worth examining:

  • The question about my time in the ecosystem comes across more as an attempt to verify or challenge credibility than as an invitation to mutual understanding.
  • Describing my post as “FUD” rather than engaging with the substance subtly shifts the conversation away from valid concerns and toward discrediting the tone.
  • Stating that “the IC is already miles ahead” presents a subjective perspective as an unquestionable fact, leaving little room for critical or alternative viewpoints.

This kind of tone — even when unintentional — can make it difficult for new developers or critical thinkers to raise concerns in good faith. It signals that unless you align with the dominant optimism, you risk being dismissed or sidelined.

And that brings up an important point:

If newcomers, critics, and skeptics are expected to tiptoe around forum rules and risk content being hidden for asking hard questions, then those defending the protocol should be equally accountable to the same moderation standards.

Gatekeeping dressed as helpfulness is still gatekeeping.

Dismissing concern as tone is still dismissal.

And subtle rhetorical undermining deserves as much scrutiny as overt “FUD.”

As it stands your post looks a lot more like a FUD piece than it needs to.

That’s a fair observation, and I’d genuinely ask: what’s the intent behind framing it that way? If the goal is to focus on tone over substance, I’d suggest that approach limits dialogue more than it protects it. This is a pattern that shows up a lot — and I think the community would benefit from moving beyond it.

I’m sincerely curious why the types of changes being requested here can’t be made at the protocol level directly.


Almost everything you wrote about has seen significant improvements over time, and continues to see significant improvements. The IC is already miles ahead of other networks in terms of many of the features you’ve mentioned, and it just keeps getting better.

I hear you — and I don’t deny that progress has been made. Still, I’d gently note that this framing is a subjective interpretation presented as if it were objective truth. It’s not that the IC hasn’t evolved — it has — but that doesn’t automatically mean it’s currently prepared to meet enterprise deployment standards at scale without substantial friction.

I’m not interested in bandaid solutions, duct tape maneuvers, or eleven different theoretical workarounds.

What I’m asking for is clear, foundational evidence — and direct, on-record communication from those in a position to speak on behalf of the protocol. Right now, we don’t see full-scale commercial or enterprise-grade apps on the IC. However powerful the vision may be, if the protocol can’t support what real-world enterprises require today — at scale, with predictability — then I respectfully disagree with any implication that the IC is “ready.”

And I’d encourage defenders of the protocol to step into the shoes of developers tasked with convincing a CTO, a DevSecOps lead, or a Fortune 500 procurement team to commit resources to this platform. Not eventually. Not after another SDK refactor. Not “theoretically.” Now.

It has to work.

It has to scale.

And it has to be worth the investment.

Otherwise — and this is just the honest truth — what remains is, by definition, Fear, Uncertainty, and Doubt. Not manufactured by critics. Not posted maliciously. Just the natural and reasonable conclusion an outside team would draw from the current state of things.

If that makes people uncomfortable, that’s not an attack. It’s just a call for clarity. And if this kind of feedback keeps being brushed off, I may begin to question whether some of the most enthusiastic defenders — however well-versed in IC’s technical landscape — fully grasp the non-technical pressures and real-world metrics enterprise stakeholders operate under.

1 Like

Ready for what? It’s only a few years old, and it has a long path of further improvements and optimisations ahead of it. Nobody is saying that isn’t the case - but it’s way ahead of the competition. In my opinion, it’s still not clear what you’re actually asking.

It’s extremely easy to spot LLM generated content. I’d encourage you to label your posts as such in the future, for the following reasons →

1 Like

Hey @LostInTheCanister :waving_hand:

I’ve been developing since genesis on the Internet Computer, and have built several apps using the Motoko programming language.

I don’t work for DFINITY, but one of my core goals is to bring the ICP developer experience up to par with what web2/cloud developers expect today. The ICP developer experience isn’t there yet, but it’s come a long way from where it was 4 years ago. To that note, hopefully I can help answer some of your questions, or point you in the right direction.

Wasm binaries are actually currently capped at 100MiB, which is a pretty decent size for most large applications. This is just the size of the Wasm uploaded, not any state. I’m curious where you got the 4MB/300MB numbers from, and what made you think that you needed to split logic across multiple canisters due to size limits. That may have been necessary previously, but it is no longer the case today.

From 2021-2022, canisters were limited to 4GiB of heap memory, and 8GiB in total. In fact, due to this limitation I built a tool called CanDB that aimed to scale applications with large data through sharding and auto-scaling.

Then from 2022-present DFINITY engineers did excellent work to expand the size of canister stable memory to 500 GiB, and now with the introduction of Wasm64 there is a lot of effort from the foundation to increase the canister heap memory size in the near future. In fact, they’ve already increased the heap size for canisters using Wasm64 to 6GiB.

Have you tried using stable data structures? They handle the data serialization upgrade logic for you during upgrades, so you don’t need to worry about that. I’ve written a few stable data structures myself, such as a heap based btree. I’d highly recommend trying out the new Motoko core library, developed by the Motoko team, where all of the data structures that ship with it are stable :clinking_beer_mugs:

And there’s a new migration syntax since moc 0.14.0 that’s typesafe and pretty convenient - here’s an example of how to use that.
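Roughly, the shape of it is below — a hedged sketch with made-up field names; see the linked example for the exact form. The migration function maps the old stable fields onto the new ones once, at upgrade time:

```motoko
// Migration.mo (hypothetical field names)
module {
  public func run(old : { var score : Nat }) : { var points : Nat } {
    { var points = old.score }
  };
}
```

```motoko
// Main.mo
import Migration "Migration";

(with migration = Migration.run)
persistent actor {
  var points : Nat = 0; // renamed from `score`; filled in by the migration
}
```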

I believe several developers have gotten SQLite running on the Internet Computer. Regardless, you’ll still run into performance issues if you don’t design your tables and indices well. Since the Internet Computer gives developers code and state together and does have some application resource limits, it’s much more suitable for a NoSQL key/value store paradigm.

To that note, several developers have been building out BTree-backed data stores, such as the Mimic library for Rust canisters, which is probably the most fully featured of them all so far and includes filtering/indexes. On the Motoko side, the BTree library that I mentioned previously has scan support for pagination purposes. Since a BTree — a sorted, balanced data structure — already exists, anyone can implement a library on top of it with indices and filtering. It isn’t too big a lift, and I’m sure the grants program would fund it.

I’ll preface this next response – I’m one of the co-founders and developers of CycleOps :sweat_smile:

After building several apps and tools in the ecosystem, like you I found that while the reverse gas model provides an awesome UX, it shifts the burden of cost management to developers, which can be worrisome. It’s exactly why we built CycleOps for developers.

You’re 100% right that every app needs to worry about cycles, just like every significant application deployed to AWS, Vercel, or the centralized cloud needs to pay for its compute and hosting.

Firstly, the Internet Computer provides a freezing threshold that protects canisters from completely running out of cycles and being deleted from the network. Once a canister hits its freezing threshold, it becomes unresponsive to incoming requests. You can think of this like AWS shutting down your APIs but preserving your data until you pay your bill. The default freezing threshold is 2_592_000 seconds, or 30 days. I wouldn’t recommend setting it lower than that, but if you want a larger buffer, just bump it higher.
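If you want a longer runway, the threshold is just a canister setting. For example, 60 days expressed in seconds (the canister name is a placeholder):

```
dfx canister update-settings my_canister --freezing-threshold 5184000
```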

CycleOps aims to protect developer teams from downtime and from ever hitting a freezing threshold. We provide automated cycle top-ups, proactive email alerting (you don’t need to worry or check in until you get an email), and monitoring for a complete set of metrics (memory, reserved cycles, etc.) as well as historical analytics. Set a top-up threshold above your freezing threshold and a low-ICP-account-balance alert (reminding you when you need to send more ICP to fund compute), and you’re set.

It’s a trustless, no-code, no-deployment integration that takes less than 10 minutes. I might be biased, but it’s quite literally the easiest monitoring integration I’ve ever set up — web2 or otherwise — and at the end of the day you get a detailed view of all of your canisters and their metrics in one place.

Imagine having 800 canisters and trying to remember their ids, what they’re for, keeping track of their cycles and what’s happening with their metrics :see_no_evil_monkey: – CycleOps does all of that for you :tada:

And if email notifications are inconvenient or a pain point for you, let us know what a better place to be notified is, such as SMS, Discord, Slack, or OpenChat. We’re always looking for feedback on how we can improve the product. Give it a try and let me know what you think!



As an aside, it would be great to learn more about what brought you to ICP and the pain points you’ve faced on your developer journey so far. Feel free to reach into my DMs – happy to help anytime :slightly_smiling_face:

6 Likes

Lol — it wasn’t “generated” by A1 sauce. It was re-crafted for clarity, and yes, a good chunk of my nuanced critique was softened for the sake of productive dialogue. Funny how even that didn’t help.

Also, let’s get something straight: I’m not new here. But if you think I’m going to pour hours into forum posts only to be met with condescending remarks and baseless assumptions, you’ve got the wrong person. The IC space has grown hostile and defensive — a mess of ego, gatekeeping, and tone-policing. That’s the real toxicity here.

So yeah, I’m letting “A1 sauce” do the heavy lifting now. I’ll save my eye strain for real work — like maintaining codebases, not battling forum politics.

And since we’re talking about labeling, I’d love it if your posts came with a disclosure too: “Posted by Resident Forum Fixture — prone to unsolicited corrections, frequent derailments, and a side of smug.” At least then the rest of us could pane you off to the side, just like you want with AI-generated content.

Lastly — and I shouldn’t have to say this — if someone explicitly says they don’t want your commentary on their post, don’t reply anyway. That’s not “community building.” That’s entitlement. You’re not contributing to the solution — you’re reinforcing the problem.

1 Like

Hey, so I want to reply to this. However, I’m not going to spend the time that @Lorimer requires in order to get past gate number one.

My posts will continue to be flagged… more than likely until a censor demigod deems them worthy enough for the forum.

I’d love to engage with your reply; however, I’m just not going to spend hours carefully crafting the ever-so-perfect response that fits the needs of those who require pristine forum posts in order to express their opinions lol

It should be noted right off the bat that, again, I was not seeking a response from teams with this much skin in the game. Your opinion is naturally biased, and my critique was aimed at some of your developments, no matter how nice they may be. It’s all “pillow talk” and no “get up in the morning and do it” talk.

Here, let me have you personally go to the CEO of any company and say: hey, here’s this amazing beta, or the start of a codebase; as soon as ICP grows up and the community finds a way to fix the underlying protocol-level issues, I can ship your product!

Tell me the reaction… I won’t be waiting here to find out lol

It has grown, yes, but it’s taken me 4 years to get to the point where I realize this whole “fully on-chain” facade is just that. It’s a small group of people pushing a narrative that simply isn’t true.

It doesn’t matter if you combine the best of all the A1 sauces into one and call it Caffeine AI… If these fundamental issues persist and aren’t fixed at the protocol level, say goodbye to 85% of devs beyond simple commits. And if that’s what the IC wants, then market it as that. Don’t run false advertising and poor marketing campaigns that come dangerously close to “Pepsi, where’s my jet?” style promises.

You’re telling people you sell Teslas when really it’s the prototype Google smart car with all the test wiring still intact, and you tell the customer: just move all these wires and scrunch in, it’s fine, I promise…

If this is the case, then we truly need an option to force-dissolve neurons with a slashing penalty fee. That way the people who staked for 8 years and have been here since before launch can gtfoh once they realize they can’t build enterprise-level applications… yet, maybe never… We will see…

Like what?..

I’m an individual contributor and can’t speak for DFINITY as a whole, but let me try to address some of your concerns. Please also look at the Roadmap, since a lot of future plans are laid out there.

Code size limits have grown a fair bit, and stable memory gives enough storage for many applications. Further improvements are certainly desired and necessary over the long term, and also planned, but I have not heard of many urgent size-limit complaints recently.

Not sure I fully understood your concern here. Motoko canisters can make HTTP outcalls just like any other canister. Obviously, since a subnet has limited capacity, those have to be rate-limited, and every piece of computation costs cycles, so I don’t think there is much that can be done besides optimizing parameters. I do not understand what ‘certified subnets’ refers to, and outcalls need to produce deterministic responses, otherwise there is no way to reach consensus over the result. That is a foundational limit of blockchains in general.

Have a look at Motoko’s enhanced orthogonal persistence, which went live recently; it addresses a lot of that. Of course, schema migrations need to be handled by business logic in many cases, but AI tooling should make that a lot easier. Also, I expect the Rust story to keep improving, both from the foundation and from other ecosystem projects.
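For anyone who hasn’t seen it, a minimal sketch: with enhanced orthogonal persistence, a `persistent` actor keeps its (stable-typed) variables across upgrades with no pre/postupgrade hooks at all.

```motoko
persistent actor Counter {
  // No `stable` keyword, no preupgrade/postupgrade: the variable simply
  // survives upgrades under enhanced orthogonal persistence.
  var count : Nat = 0;

  public func bump() : async Nat {
    count += 1;
    count
  };
}
```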

Consensus has some mathematical limits, otherwise you can’t guarantee finality anymore. IIRC consensus is unachievable in fewer than 3 round trips, and doing that across the globe gives a theoretical lower limit of roughly 0.3(?) seconds per block. The block rate was recently roughly doubled to 2 blocks per second at low subnet load, and it is still expected to improve a little. What can improve here is mostly the stability of block rates under heavier load. It’s also not considered super urgent right now, but there’s always work going on in that direction.

Both already covered nicely by @icme, nothing to add here

Wallet UX is improving, both with Oisy and with standards that are still getting created left and right. I’ve lost the overview there since I’m not involved enough, but a lot of people are on this. Persistence of sessions is mostly a matter of configuration and up to the devs. Canister-hosted frontends will keep getting faster with better boundary node caching and slowly improving request routing. Boundary nodes also recently started allowing cache headers, so local caching is quite accessible by now. Not popular yet, but finally doable.

I agree there is still plenty of room for improvement, but IMO that already puts us far ahead of most other chains. Not much I can comment here since this mostly relies on the ecosystem and I don’t have too much visibility into that

The roadmap covers a lot of that. It is not as detailed as you (and I) would like it to be, but that is because there are too many things going on in parallel, and therefore only larger, concrete-ish things are listed. We’ve tried to keep it more detailed in the past and always ended up with stale data.
I think I pointed out the places where there is not much room for improvement. People at DFINITY are pretty aware of most of these limitations, but there are a lot of other things to do besides addressing them.

I’d love to go into a lot more detail on most issues, but you touched on many topics across the whole stack, spanning the specialties of 50+ people. If you would like more detailed info on a few more focused pieces, I’m happy to go deeper or call in an expert or two.

7 Likes

To be fully transparent: 1000% of my projected frustration, skepticism, and (admittedly) occasional blind rage stems from having to engage with the kind of replies above.

This response from you was by far the most illuminating and appreciated. While I may not be “happy” with the answers, the honesty and internal transparency are precisely what I came here for. That was the objective. Thank you.

What I’ve been developing over the past few years is not something I’m going to deploy anywhere without a deep vetting process: one that includes my own stress-testing, long-term planning, and sustainability analysis. One might argue it’s the kind of vetting the Internet Computer should have done itself, and because they didn’t, look at it now (too soon? lol). I’m not just looking at the Internet Computer. This is one of several platforms under review.

I’m evaluating all options based on data, ecosystem maturity, and developer experience. That’s the only sensible workflow when building anything at scale — and it’s what any enterprise-grade application team would do.

That being said, I now see why the Internet Computer is struggling to keep pace with other chains that adopted different marketing strategies. One could speculate that those strategies were more effective — not necessarily more truthful, but certainly more compelling to developers under pressure to ship.

However, I digress…

In either case, with this clearly being a reality of overpromised and underdelivered product development, should there not be a formal way to “leave early” — even with a penalty fee — simply because the platform was too young to be making such bold claims in the first place? Again, I digress…

At this point, I’m left with two reasonably honest options:

• Pack up and leave the ICP ecosystem entirely, or

• Relegate my time here to small novelty or hobby projects for fun, not for scale.

Either way, I’ll be honest — I’m significantly disappointed.

That said, if I do come back to build toy projects for fun, I’ll more than likely be on the lookout for an expert or two to collaborate with on technical issues. Just know that if I return, I’ll be treating this place like a sandbox — not an infrastructure bedrock.

This is a huge reason why I don’t keep any tokens anywhere outside of the NNS. The issue is that at any time, the canister can just be frozen and then eventually deleted, taking tokens with it. I think we need recovery mechanisms in place to save or redirect tokens on dapps that are garbage collected. This alone adds so much risk that it prevents big money from coming in, IMO, because ICP-based banks and DEXs can just get taken offline, and then a month later they and all customer money are gone.

Sure, CycleOps. However, what if there are no more top-up tokens? Why should I lose my tokens? I think a logic-based event should trigger when cycles run low. Specifically, the cycles needed to run the logic IN the event should be calculated, and when the canister’s balance reaches that calculated total, it spends the last of its cycles executing that event. Such a thing would include sending customer tokens to a safer wallet of their choice. A rough sketch of the idea follows below.
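Here is that sketch in Motoko — the threshold, polling period, and evacuation routine are all hypothetical:

```motoko
import Cycles "mo:base/ExperimentalCycles";
import Timer "mo:base/Timer";

actor {
  // Keep enough cycles in reserve to run the evacuation itself.
  let EVACUATION_RESERVE : Nat = 500_000_000_000;

  func evacuate() : async () {
    // Hypothetical: send each user's tokens to their pre-registered
    // fallback wallet via the ledger before the canister freezes.
  };

  func check() : async () {
    if (Cycles.balance() < EVACUATION_RESERVE) {
      await evacuate();
    };
  };

  // Poll the balance hourly (recurringTimer needs <system> on recent moc).
  ignore Timer.recurringTimer<system>(#seconds 3600, check);
}
```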

That’s my main technical limitation. OP brought up others and I don’t have much to add for those conversations.

1 Like

You tell me. I have an entire NFT collection that was frozen and then uninstalled. I was supposed to be checking the cycles each month, and I’m not even a controller, so I can’t add it on CycleOps.

I thought I had a year before the canister would be deleted, but no. Since we have tons of garbage/disposable canisters across all subnets, my best guess is that it’s not feasible to let them sit out of cycles for too long.

Maybe @Severin can help us understand better this situation.

ICP is closer to typical cloud providers than to e.g. ETH, where everything is persisted forever. IMO you should frame the expectation similarly to saving your credit card on e.g. AWS. They may send you an email that the expiry date is coming up, and once the card cannot be charged, at some point your services will be turned off, with data retention according to the contract (probably all gone).

With CycleOps you get custom email notifications, and with the freezing threshold you can define yourself how much brownout time you want before data is deleted.

Now, with other people’s projects, you are pretty much in the same state as with any other web2 service (with the exception that, before it is deleted, anyone can pay for its services). If you have some web2-like NFTs (say, CS:GO skins) and Valve decides CS:GO is over and stops paying for the server, then your NFT is gone. And any project that is supposed to be ‘eternal’ must find a way to generate enough revenue that it can stay live.

I don’t mean this all as “you’re holding it wrong” but more like “this may not be the offer you were thinking of, the fine print matters a lot.”

1 Like