Realtime Cloud Computing on the Internet Computer?

When I look at the future of computing, I think the way hardware and software interact will change drastically over the next decades. My assumption is that the actual computation of software (e.g. games, rendering, simulation) will no longer take place on local devices (e.g. PCs, smartphones, AR glasses). I think future consumer hardware will merely be transmitters/receivers for cloud computers, rather than computational units with their own processors.

Let me give you an example:
Dominic plays an online multiplayer game.
When he presses a button on his controller, the input is sent to a cloud computer, which calculates the resulting actions.
The cloud computer then sends something similar to a video stream back to his monitor, showing the results of his actions.
No computation took place on Dominic's gaming console, because his console is just a transmitter/receiver.

For this technology to work, we need much faster internet, and I think we are very close to accomplishing that.
Here is a short article about Japanese researchers achieving a transmission speed of 1.02 petabit/s…
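To get a feel for what a 1.02 petabit/s link would mean for the cloud-gaming scenario above, here is a back-of-the-envelope calculation. The 25 Mbit/s per-stream bitrate is an assumed, illustrative figure for a 4K video stream, not something taken from the article:

```python
# Rough capacity estimate: how many game video streams could a
# 1.02 petabit/s link carry at once?
# NOTE: 25 Mbit/s per 4K stream is an assumption for illustration.

LINK_BPS = 1.02e15    # 1.02 petabit/s, as reported by the researchers
STREAM_BPS = 25e6     # assumed bitrate of one 4K game stream

concurrent_streams = LINK_BPS / STREAM_BPS
print(f"{concurrent_streams:,.0f} concurrent streams")  # 40,800,000
```

So raw bandwidth is unlikely to be the limiting factor; as the replies below argue, latency is.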

Now, I'm far from an expert on this subject, so please correct me if I'm wrong, but the Internet Computer could not do something like that, could it? The IC is great for enhancing the current status quo and offers many new possibilities, but as far as I can see, realtime cloud computing on huge amounts of data is not possible.

Big Tech companies will advance in this sector over the next decades. My fear is that the resulting cloud computing data centers will make the IC obsolete.

What does Dfinity think about this? Are the IC's software and hardware advancing in this direction?

(I am fully aware that the IC basically is a computational unit for cloud computing. I am talking about the capability of computing and streaming things like online multiplayer games in real time, without lag, once transmission speeds improve in the future.)

3 Likes

These vastly improved internet speeds are hardware based, and I don't see why independent data centers would not upgrade their networks to stay competitive with big cloud providers. On the IC side, they can continue to do R&D and optimize the software to take advantage of the faster networks. I think the Internet Computer will be radically different 10 years from now.

3 Likes

I think that in general the IC needs compute subnets that can delegate their nodes to serve high-speed requests. Nodes in these subnets won't go through consensus; it would just be a decentralized computation network like Phala Network, Aleph.im, Livepeer, etc.

If that happens then I think IC will be able to provide realtime cloud computing as well.

5 Likes

I was thinking about this a few days ago. This sort of server could compete directly with other decentralized storage providers and with AWS itself.

If we ever want to truly challenge the big cloud, this has to be part of our long term road map.

Let’s do it.

3 Likes

That would be interesting, but other similar projects that don't require you to rebuild your entire tech stack would see more adoption, imo.

Maybe some IC equivalent of rollups?

What do you mean by being required to rebuild your entire tech stack?

You have to code your backend the IC way; you can't just take your existing Node.js app and run it on the IC. Other services allow users to run Docker containers on a decentralized cloud.

I remember reading a post months ago by a Dfinity dev hinting at something similar to what you were suggesting, but unfortunately I can't find it again.

Oh, I wasn't even talking about a whole decentralized cloud, just a service that does computation. Any canister could make a special request for some work to be done, and the IC would delegate that work to a node in the compute subnet and return/stream the result to the calling canister. Compute subnet nodes would probably have less storage but higher-spec CPUs and GPUs.
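To make the idea concrete, here is a purely hypothetical sketch of that delegation pattern. The IC has no such API today; the node names, the `delegate` function, and the chunked streaming are all invented for illustration:

```python
# Hypothetical sketch only: simulates a canister handing work to a
# single node in a dedicated "compute subnet" (no consensus) and
# getting the result streamed back in chunks.
import random

COMPUTE_SUBNET = ["node-a", "node-b", "node-c"]  # hypothetical high-spec nodes

def delegate(task, chunk_size=2):
    """Pick one compute node at random and stream results back in chunks."""
    node = random.choice(COMPUTE_SUBNET)
    result = [x * x for x in task]  # stand-in for heavy CPU/GPU work
    for i in range(0, len(result), chunk_size):
        yield node, result[i:i + chunk_size]

for node, chunk in delegate([1, 2, 3, 4, 5]):
    print(node, chunk)
```

The key design point is that only one node does the work, which is what buys the speed; the trade-off, as discussed below, is a lower replication factor.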

That's what the post described, more or less. IIRC the dev said that in the future we'd be able to choose the desired replication factor for each canister, which on one hand is pretty cool, but it could also break some assumptions: right now all IC users expect a certain level of decentralization from dApps, which such an option could lower.

1 Like

It depends on the dApp. A social network or gaming dApp wouldn't need as many replicas, whereas a financial dApp would. Right now I don't think computation performance is an issue, but later on it's something that the foundation and the community will have to think about.

There are two parts to “faster internet”: increased bandwidth (i.e. how much data can be transmitted per unit of time; what the OP linked to); and lower latency (i.e. how quickly you can send a message; or get a roundtrip response).

The former can probably be increased quite a bit, particularly when it comes to the IC. Current node providers are required to have 10 Gbps (gigabits per second) of (guaranteed?) bandwidth. Your average data center has a lot more bandwidth than that; it's just that these are not whole data centers, only racks rented in existing data centers. The problem is that it's not just about the data center's available bandwidth. Currently all IC subnets are spread across at least 3 continents, meaning that all communication needed for consensus (particularly all the artifacts – ingress messages, signature shares, etc. – and blocks) needs to make it across the Atlantic and/or Pacific, where there's a lot less bandwidth available than at the data center's door, as it were, and a lot more traffic from other sources.

Which is also where latency comes in. There are some small things that can be improved (faster routers, full-mesh local networks, etc.) but the fundamental limit is the speed of light. And given that the IC's consensus algorithm (like pretty much any consensus algorithm) requires 2 or 3 network roundtrips (cross-ocean, in the IC's case), you can't really have something like the OP mentions, where someone presses a button on their controller and the subnet takes some action and returns the outcome, within much less than one second on existing subnets.
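The speed-of-light floor is easy to estimate. This sketch assumes light in fibre travels at roughly c/1.47 and a rough 6,000 km one-way transatlantic path (both assumed figures, not measured ones):

```python
# Lower bound on consensus latency for a cross-ocean subnet.
# Assumptions: refractive index of fibre ~1.47, one-way path ~6,000 km.

C_VACUUM_KM_S = 299_792             # speed of light in vacuum, km/s
FIBRE_KM_S = C_VACUUM_KM_S / 1.47   # ~204,000 km/s in glass fibre
ONE_WAY_KM = 6_000                  # rough transatlantic distance

roundtrip_ms = 2 * ONE_WAY_KM / FIBRE_KM_S * 1000  # one full roundtrip
for roundtrips in (2, 3):           # consensus needs 2-3 network roundtrips
    print(f"{roundtrips} roundtrips: >= {roundtrips * roundtrip_ms:.0f} ms")
```

With ~59 ms per roundtrip, 2-3 consensus roundtrips already put you well above 100 ms before any processing time is counted.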

You can of course put a whole subnet within a single country or data center, or even have a single-replica subnet, to speed things up. (And there is no reason not to have such subnets in the future.) But then you are giving up a lot of the high availability and censorship resistance that the IC offers.

3 Likes

Has Dfinity considered partnering with Syntropy? Apparently their proprietary routing technology can reduce latency by a good margin, especially over long distances.

1 Like

Thing is, unless Syntropy has figured out how to exceed the speed of light (they likely haven't; they would have made a lot more money by now than they ever could by selling routing technology), you'll still get on the order of 100-200 ms roundtrip times across the Atlantic or Pacific, because that's how long it takes light to travel there and back.

Considering that people buy low-latency gaming monitors to shave off single-digit milliseconds of latency, and that you need at least 2 roundtrips to reach consensus on an IC subnet, hundreds of milliseconds (whether 200 or 500 or 1000) are not going to convince anyone to play games in the cloud as the OP describes. Hundreds of milliseconds of latency is actually jarring for anyone, not just gaming enthusiasts.

2 Likes

Of course, realtime computing on the IC is impossible until someone figures out quantum-entanglement-based connections. I asked because, as far as I know, latency and bandwidth are more of a bottleneck than computing power when it comes to scaling the IC, so if we could reduce roundtrip times by 20%, the whole network would benefit.

2 Likes

Realtime cloud computing on the Internet Computer is a very good question, and clearly many people are interested in this subject. To recap what cloud computing is: a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.