It’s a PHP server with a database and a file system, so yes it will be able to run Drupal.
Any rough timeline, potential blockers?
2026?
Should be good for Q1 2026. Blockers are the 5B and 40B instruction limits on query/update execution and the 6 GB heap memory limit on wasm64. I’ll have a good idea of those limitations in the next few weeks as I move forward on Laravel.
Interesting product
First thoughts - wouldn’t this be significantly more expensive than traditional hosting?
Over the past week, things have progressed very quickly. We managed to run a very limited version of WordPress; however, given the current constraints of the Internet Computer (ICP)—notably the 5B and 40B instruction execution limits—we are no longer able to push the platform further for this use case.
It’s important to understand that WordPress typically loads between 100 and 500 files per request, which is not sustainable in the ICP environment. We quickly exceed the execution limits. While we were able to work around these limits for “normal” requests and for SQL query execution, the file loading, compilation, and execution phases remain a blocker. We are effectively looking at close to 200B instructions just to bootstrap WordPress.
We explored several approaches, including execution chunking, precompilation into opcodes, and running WordPress in the background with an ICP-based front end (which proved far too slow, with load times around 90 seconds). Unfortunately, none of these options led to a viable solution. Even Laravel fails to start, which highlights how little headroom we have for running this class of application on ICP.
At this point, we have to be honest: for WordPress (and similar frameworks) on ICP under the current limits, this is a failure. Hopefully, ICP will offer a cloud-style execution environment capable of supporting these workloads in the near future.
On the positive side, we successfully ran PHP in its entirety. It was a truly interesting project, and we learned a lot along the way.
We have PHP 8.5 fully running on ICP, and we have WASQL, a full SQL server running in the same canister that is multi-user, transactional, and can run past the 40B instruction limit. As I said in the previous post, the problem is loading and compiling 100 to 500 includes for a single request!
It was brave of you to attempt that.
Sounds like you need to move where the compilation happens, if that’s possible. Best done on the dev machine. Compilation on every request (or on install) sounds like something that will never be feasible.
This is a really interesting project. Trying to solve some of the roadblocks you hit could be a great forcing function to improve ICP generally.
If you don’t mind, we’ll be in touch soon, to get the full download, share some plans and discuss. Congrats and thanks for your efforts!
Of course, sounds great! Thanks for the kind words.
This is the way Dom!!
Too bad this didn’t work out!
I was thinking the other week that you could definitely build a powerful CMS on ICP though. You could reuse a lot of WordPress’s client-side code, i.e. their Gutenberg editor and block system. But instead of running PHP, mostly use a serverless approach where the theme has a TypeScript API for fetching the pages, blocks, menus, etc. from the canister. In other words, render pages client side rather than server side. This would also be a snappier and more modern experience than WordPress. But of course you lose the benefits of just being WordPress.
I might take a stab at that later this year, but think there are some infrastructure pieces needed first
My hat goes off to you for the work you’ve been doing. I’ve been very excited about this initiative. It has huge potential.
My understanding is that this is still within the realms of possibility with deterministic time slicing, although only in special circumstances (a constraint that could perhaps be revisited and lifted?)
some special tasks, like code installation, can even go up to 200 billion instructions. This is achieved using Deterministic Time Slicing (DTS)
Maybe running WordPress on the IC should be considered a special task (maybe even on a special subnet). That would be cool
You could definitely modify the PHP interpreter to occasionally check the instruction counter. If it’s close to the limit, store the state and schedule a timer that continues the execution in a new ICP message. I did a prototype of that with wasmi, which has built in instruction counting and continuation. Downside is you can’t get a response in a single HTTP request anymore, but you could probably piece things together in a Service Worker.
At that point latency is so high, though, that it’s definitely more of a fun proof of concept than a usable product, even if you only use this for update requests (since query calls only use a single node anyway, you might as well serve reads from AWS, I guess?). Also, you would need a way to know ahead of time which WordPress requests are allowed to modify the database, so you can use update calls for these. This is impossible to know with all the WordPress plugins you might install, but I guess a simple allowlist of URIs would do the job in practice.
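The checkpoint-and-continue idea above can be sketched in plain Rust. To be clear, this is not real canister code: `read_counter` is a stand-in for the system API (`ic_cdk::api::performance_counter(0)`), the limit values are illustrative, and in a real canister the checkpoint would be persisted to stable memory and resumed via a timer or self-call rather than returned to the caller.

```rust
/// Saved interpreter state: here just the next loop index and an accumulator.
/// A real PHP interpreter would need to serialize far more than this.
struct Checkpoint {
    next_step: u64,
    acc: u64,
}

const INSTRUCTION_LIMIT: u64 = 5_000_000_000; // illustrative: the 5B query limit
const SAFETY_MARGIN: u64 = 500_000_000; // yield well before the hard limit

/// Run interpreter steps until done, or until the instruction counter nears
/// the per-message limit. Returns Ok(result) if finished, or Err(checkpoint)
/// if execution must continue in a later message.
fn run_slice(
    mut state: Checkpoint,
    total_steps: u64,
    read_counter: &mut dyn FnMut() -> u64,
) -> Result<u64, Checkpoint> {
    while state.next_step < total_steps {
        // One "interpreter step" of work (placeholder arithmetic).
        state.acc = state.acc.wrapping_add(state.next_step);
        state.next_step += 1;

        // Poll the instruction counter only occasionally; polling is not free.
        if state.next_step % 1024 == 0
            && read_counter() > INSTRUCTION_LIMIT - SAFETY_MARGIN
        {
            // In a real canister: persist `state` and schedule a timer or
            // self-call that resumes execution in a fresh message.
            return Err(state);
        }
    }
    Ok(state.acc)
}
```

Driving this in a loop (resetting the mock counter each "message") shows the work spread across several slices, which is exactly why a single HTTP request can no longer carry the whole response.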
The great thing about deterministic time slicing is
DTS is automatic and transparent to smart contracts, so developers don’t need to write any special code to use it
Query calls retrieve data from a single node as an optional (opt out) optimisation because the certified data feature can be used to confirm the data has not been tampered with (faithful to the consensus that already occurred when the update took place). A world computer away from serving reads from AWS
I think the fact that deterministic time slicing can transparently handle 200 billion instructions is promising (currently only during canister upgrades). Maybe latency would still be too much of an issue though due to the 1 or 2 second block rate.
It may be that operating from a generic subnet is a dead end. However subnet configuration is extremely flexible. I’m willing to bet the NNS would be happy to adopt reconfiguring a subnet or two to specialise in these sorts of workloads.
Parameters worth noting:
I think it’s this sort of situation that subnet configuration parameters exist for. @miadey, have you considered tweaking these locally? I’m not aware of an out-of-the-box way of doing this in a dev environment, and/or whether or not the findings would be indicative of performance on mainnet (assuming such a subnet were configured), but this seems like a potential avenue forward that could be explored with DFINITY’s assistance.
Query calls retrieve data from a single node as an optional (opt out) optimisation because the certified data feature can be used to confirm the data has not been tampered with (faithful to the consensus that already occurred when the update took place). A world computer away from serving reads from AWS
My understanding is a query call is not certified by default. You need to first have an update call that certifies that data, see here. This is only feasible for static content or in this case maybe caching pages, but that’s very different from running a PHP interpreter that generates the page.
Intuitively, it makes a lot of sense that if you have a WordPress plugin that outputs the current time, that could never be done in a certified manner by a single node. Arguably, if you already have certified data you might as well serve that from a low-latency CDN like Cloudflare; you’ll still be able to verify the integrity on the client side.
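To make the certify-on-update / verify-on-read split concrete, here’s a toy sketch. It deliberately uses std’s `DefaultHasher` as a stand-in for the SHA-256 hash tree and threshold signature the IC actually uses, and all names (`Canister`, `set_page`, `verify`) are illustrative, not the real `ic_cdk` API (`set_certified_data` / `data_certificate`).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy digest; the IC really certifies a SHA-256 hash tree root.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

struct Canister {
    page: Vec<u8>,
    certified_digest: u64,
}

impl Canister {
    /// "Update call": goes through consensus, records the page's digest
    /// as certified data (stand-in for set_certified_data).
    fn set_page(&mut self, page: Vec<u8>) {
        self.certified_digest = digest(&page);
        self.page = page;
    }

    /// "Query call" answered by a single node: returns the page plus the
    /// certificate (stand-in for data_certificate).
    fn get_page(&self) -> (Vec<u8>, u64) {
        (self.page.clone(), self.certified_digest)
    }
}

/// Client-side check: a misbehaving node can alter the page, but cannot
/// forge the consensus-backed certificate over its digest.
fn verify(page: &[u8], certificate: u64) -> bool {
    digest(page) == certificate
}
```

The point of the sketch is the limitation discussed above: the certificate only covers content that an earlier update call committed to, so dynamically generated output (like the current time) can never be certified this way.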
I’m having similar problems in the asset canister and am playing around with a similar approach in this PR. Maybe you can get some inspiration from it? Essentially when the canister is close to the instruction limit of a single message it will make a bogus async self-call (and ignore the result). Since the canister made an inter canister call the response processing is in a new message and the instruction limit is reset.
In this PR I’m writing it in a bit of a convoluted way since we like to have a non-async core for easier testing, but if you don’t mind async everywhere you could sprinkle maybe_reset_instruction_limit().await in a bunch of places which would start a new message if you are close to the limit
Edit: Nevermind, I tested this and @Vivienne is absolutely right that awaiting a simple self-call is enough to reset the instruction limit. I thought the limit was across multiple messages for the same canister method for some reason. Great news in my book
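For anyone following along, a minimal sketch of what such a `maybe_reset_instruction_limit` helper could look like. The threshold check below is runnable; the actual self-call is shown only in comments because it can only execute inside a canister, and the helper/method names and limit values are illustrative assumptions.

```rust
const MESSAGE_INSTRUCTION_LIMIT: u64 = 40_000_000_000; // illustrative: 40B update limit
const MARGIN: u64 = 1_000_000_000; // yield a comfortable distance before the limit

/// Decide whether the current message is close enough to the limit that we
/// should yield. `instructions_used` would come from
/// ic_cdk::api::performance_counter(0) in a real canister.
fn close_to_limit(instructions_used: u64) -> bool {
    instructions_used + MARGIN >= MESSAGE_INSTRUCTION_LIMIT
}

// In a canister this would be used roughly like:
//
//   async fn maybe_reset_instruction_limit() {
//       if close_to_limit(ic_cdk::api::performance_counter(0)) {
//           // Awaiting any inter-canister call (even a no-op call to
//           // ourselves) means the response is processed as a new message,
//           // which resets the per-message instruction counter.
//           let _: Result<(), _> =
//               ic_cdk::call(ic_cdk::api::id(), "noop", ()).await;
//       }
//   }
```

Sprinkling calls to such a helper at interpreter loop boundaries is the "async everywhere" approach described above; the cost is that every await point can now split execution across messages.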
Problem is the PHP interpreter still needs some async execution support, because at the low level it needs to finish executing the current WASM method and continue only after a callback.
Ah, I just saw your edit @marceljuenemann. Feel free to ignore this message, but I’ll still post it since there’s some useful info in here…
This is correct, and the documentation for PerformanceCounterType is most explicit about the difference IMO.
This is where you misunderstood something. The limit is on message execution, not on the call context (read: performance_counter(0), not 1). If you look at the file with the limits you cannot find any constants that mention call contexts.
Call contexts are allowed to exist infinitely long, which can be a big issue when it prevents canister upgrades. See e.g. the security recommendation here.
No, this is correct. The call context exists until the canister produces a response
Call contexts are relatively cheap and easy to keep around, which is why it is OK for them to exist for a long time. A message execution is a lot more complex (I don’t know the details, but if you’re interested I can ping someone to explain it properly). The big thing about a message is that while it’s in progress it can be paused just enough to do DTS (execute a message over multiple rounds), but it cannot persist across a checkpoint (which happens every 500? rounds) because some state is just too hard to serialize. If a checkpoint comes while a DTS message is running, the message gets aborted and restarted after the checkpoint. Therefore the message-with-DTS limit is chosen such that it is not too likely that a message needs to be restarted.
I’m not a WordPress user, but I’d expect it to have some support for client-side rendering. Headless WordPress seems to be a thing. If parts of a page are rendered separately, then the stuff that can be served statically could be served separately from the stuff that can only be served from update calls. Just thinking out loud.
This is my understanding too, but it’s there if it’s needed, or you can opt in to replicated queries (which obviously takes a performance hit).
Would anyone from DFINITY be able to comment on the feasibility of a specialised subnet for workloads that demand larger blocks and more instructions per round?
In other words, are subnets already operating at the limits of what is possible with regards to
Could performance be improved by establishing a smaller subnet comprised of only highly trusted node providers (represented by notable companies as opposed to individuals, e.g. DFINITY, Sygnum Bank, Exaion, etc.)?
Again, just thinking out loud. I’d really like to see this initiative unblocked.