I thought it was impressive - let’s hope that it will broaden the appeal of the IC to a wider user base as I think this is still a significant issue.
It’s aimed at both non-devs and devs. Even if you end up hand-coding a lot in serious business and enterprise applications, I can still imagine something like this being useful for more sophisticated and relatively rapid prototyping.
From what I could see, the demo apps were using React on the front end, but I imagine all this stuff can be expanded over time.
There’s too much of a tendency to want instant perfection. That’s impossible!
Per Dom himself, Caffeine will go through an alpha, then a beta, and finally it will be released into the wild. So the final version could be different from what was presented.
I think the most important thing here for creating big and complex dApps with Caffeine is, as someone already mentioned, whether you can work together with it.
For example, can you suggest an architecture to use, like a canister-per-user approach, or say “use that canister I wrote and don’t change it” or “store data in this format because I already know it scales better in my case”, and things like that?
If that’s possible, I’ll definitely create a couple of big projects I already have in mind but don’t have time for. Some of them have quite challenging parts, but if Caffeine writes, say, 95% of the code and I write (or suggest how to write) the rest, I’ll be cooking up projects one after another.
That way you would be writing only the hard and unique parts, leaving the rest to Caffeine.
One of the ideas is a decentralized git repository for code. We really need that, because trusting big players to keep your code away from AI and the like is a bit too much of a stretch.
Having a decentralized git repository, maybe even protected with vetKeys, would work like a charm.
I actually wonder where Caffeine stores the code it generates. Maybe with Caffeine we won’t need git repositories at all.
The level of conversation here is not for this forum, imho.
In my understanding, Caffeine is more about AI being able to deploy the backend than about AI being able to code the backend. You are right that there are many models, and services using those models, that can code. But you still have to deploy manually.
It is about the integration with deployment, so that someone with only a phone and no terminal can build and deploy.
Now there might be some web2 AI services, or there will be in the future, with an integration that allows deploying on AWS/Replit/GitHub/Cloudflare/etc. or other infrastructure. That will be good for deploying test versions. But it will be very hard or impossible to let AI alone upgrade a running backend with user data in it.
The distinguishing factor of Caffeine and the ICP/Motoko combination is that it allows just that: upgrading a running backend with user data in it. Motoko’s typed stable data signature prevents data loss during upgrades, so even if your AI makes a mistake in an upgrade, it can still recover the data in the next one. You won’t get any such guarantees from a generic web2 system.
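As a minimal Motoko sketch of what that guarantee looks like (illustrative only, not Caffeine’s actual generated code): a variable declared `stable` is preserved across canister upgrades, and an upgrade whose new declaration is incompatible with the stored type is rejected rather than silently losing the data.

```motoko
// Illustrative sketch, not Caffeine output.
// `stable var count` survives canister upgrades; an upgrade that changes
// its type incompatibly is rejected, so the data cannot be silently lost.
actor Counter {
  stable var count : Nat = 0;

  public func inc() : async Nat {
    count += 1;
    count
  };
}
```

Upgrading this canister with new code keeps `count` intact as long as the new version’s stable declaration is compatible with `Nat`.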
Again, imagine someone with only a phone starting a deployed service and continuously iterating on it over its lifetime.
Sidenote: of course Caffeine can code and deploy the frontend, too. But there the difference from existing or future web2 equivalents is smaller, because the data-loss problem isn’t there.
Also note that Caffeine can swap out the model under the hood, so Caffeine’s model should always be at least as good as the open-source models.
Caffeine will definitely allow users to create some of the websites and apps they have been wanting to create. But I imagine one barrier will be ideas that require integrating APIs from other sites, like Google Maps, or APIs set up by other projects that could allow them to monetize. I’m sure this process could be made possible, or at least a lot easier than it otherwise would be, for someone with no coding experience. For instance, if a user gives Caffeine a prompt that requires a Google Maps API or another known API, it could set up placeholders in the code and give the user instructions to implement it. It could even offer some sort of plugin store that lets people include custom plugins handling most of the API connections for them. It’s not really an issue, just an idea that could improve the user experience and the range of apps that could be created with Caffeine.
This plugin idea is fantastic, especially for creating pre-designed frontends. It’s a common limitation with most AI tools: they can generate frontends, but the design tends to be very basic. Imagine if we could query another canister that offers predefined designs, allowing users to preview all options in advance and simply select by name.
Ugh, that’s so Web2-like, but sure. App store of Caffeine apps. But tbh why not just tell it to make my own app instead of buying someone else’s?
If Caffeine takes ICP, USDT, BTC, ETH, and other tokens to fund canisters, then this looks good to me, as the apps made would have secure funding not coming out of DFINITY’s pockets. The problem is that this could harm adoption as people realize that even in the reverse gas model, someone has to pay.
How many cycles does it cost for the AI to run an LLM and then build and manage an application? Is it affordable compared to Web2 AI tools plus cryptocurrency fees from CEXs? Because that’s the bottom line, plus hype to offset it. I love that we can build sovereign apps, but since the marketing is so bad (centralized companies see this as a threat to their models), it’s unlikely to catch on in the short term as they try to maintain relevancy, keeping in mind that centralized platforms are likely faster due to owning more compute resources.
Like it or not, DFINITY is in the same playing field as AWS, Google, and Microsoft. As long as they keep improving, this becomes a battle between decentralized new models, and legacy centralized ones. I’m excited for Caffeine because on a personal level, I can make a website with just a prompt.
Look… NNS… neurons… AI… smart contracts / canisters… You understand the AI vision outside ICP, correct? But there’s no real autonomy when you rely on big tech cloud, because “cloud” plus “serverless” is an oxymoron. It does not exist. It’s just someone’s computer at scale… You understand Bitcoin vs bank ledgers, right? Same thing for ICP, except it’s focused on decentralized cloud computing.
Just take what you know about Bitcoin and apply it to cloud computing. Now extrapolate: take BTC’s value prop and apply it to computation.
By your argument, why use Bitcoin when you’ve got bank ledgers?
Same thing: why use ICP when you’ve got big tech infra?
Sure, there are “vibe coding” platforms like Caffeine, but Caffeine streamlines building for the decentralized cloud.
You are more than welcome to keep building on big tech infra… just like people are more than welcome to keep using tradfi.
How do “major AI companies” “just do it” when there’s no platform to “just do it”? OpenAI is “just doing it” using EVM, and how long did it take for EVM? How long did it take for Big Tech to reach the “serverless cloud” oxymoron?
OpenAI is working with what they’ve got with EVM and ID. Maybe you can wait for the cool kids, like people waited for the cool kids on EVM… meanwhile, while old tech is being adopted for basic ledgers, people are pushing forward on the next milestone, which is decentralized computation.
EVM/AO do decentralized compute via cached results on a ledger; ICP does realtime compute. Solve realtime decentralized compute and you can already do cached results as well… just like Tesla focusing on vision vs. other self-driving efforts cutting corners by mixing AI reasoning into radar (solve vision and you’ve already solved radar, because video-game self-driving already has “radar” via coordinates in memory).
That might be more of a presentation issue (what they chose to show) and not a Caffeine issue (what it does, how well, etc).
This is true. And they need time to reach a certain level before they say more. I understand that.
It’s easy for me to comment and complain from the sidelines… they will deliver eventually, like they always have done.
But the question remains: is it going to compete? I can’t tell at this point.
The demo was impressive, even if it was mostly frontend and simple apps. That is part of the deal after all: I don’t want something that can do cool backend stuff only to have the generated apps look very low quality.
Other than that, I’m more convinced by the value proposition after the demo than before. I assume it can do a lot of backend stuff, but I’m mainly looking forward to seeing how easy it will be to update the apps, and to all the positive benefits of owning my data.
The big advantage is Motoko and the IC as a backend for Caffeine, which ‘collapse code and data’ and ensure lossless upgrades. That makes it easier for LLMs to code backends.
A little pushback: what would it take to solve the deployment and lossless-upgradability problem in web2?
Well, deployment isn’t that big of a challenge; in fact, it might even be quicker on web2 than on the IC.
And lossless upgradability requires some logic to sit between the LLM-generated code and storage. The big AI companies could even train their models at scale for this kind of thing, delivering a better user experience, cheaper resources, etc.
I really hope DFINITY wins, of course, but the IC’s advantage is not unachievable in other ways…
Well, if there were something out there like “make a game like Tetris but two players on one board” (because Dom didn’t do anything with games…), then I think it can compete. The AI would need the ability to work with a gaming mindset in addition to the small-business apps he mentioned.
Another thing: is it really 5B new builders?
Refining a valuable app requires many iterations, design, a clearly defined problem, a product mindset with the user in mind, etc.
Are average folks going to be able to chat their way to quality apps? I really can’t tell; I hope so.
Or will professionally crafted apps win out in the end, designed with careful consideration of resources, computational complexity, costs, tradeoffs, etc.?