Realtime Cloud Computing on the Internet Computer?

When I look at the future of computing, I think the way hardware and software interact with each other will drastically change over the next decades. The assumption is that the actual computation of software (e.g. games, rendering, calculation) will no longer take place on local devices (e.g. PCs, smartphones, AR glasses). I think future consumer hardware will merely be transmitters/receivers for cloud computers, rather than computational units with their own processors.

Let me give you an example:
Dominic plays an online multiplayer game.
When he presses a button on his controller, the input is sent to a cloud computer, which calculates the resulting actions.
The cloud computer then sends something like a video stream back to his monitor, showing the results of his actions.
No computation takes place on Dominic's gaming console, because the console is just a transmitter/receiver.

For this technology to work, we need much faster internet than we have right now, and I think we are very close to getting it.
Here is a short article about Japanese researchers achieving a transmission speed of 1.02 petabit/s…

Now, I'm far from an expert on this subject, and please correct me if I'm wrong, but the Internet Computer couldn't do something like that, could it? The IC is great at enhancing the status quo and offers many new possibilities, but as far as I can see, real-time cloud computing on huge amounts of data is not possible.

Big Tech companies will advance in this sector over the next decades. My fear is that the cloud computing data centers that emerge will make the IC obsolete.

What does Dfinity think about this? Are the IC's software and hardware advancing in this direction?

(I am fully aware that the IC basically is a computational unit for cloud computing. I am talking about the capability to compute and stream things like online multiplayer games in real time, without lag, once transmission speeds improve in the future.)

8 Likes

These vastly improved internet speeds are hardware-based, and I don't see why independent data centers would not upgrade their networks to stay competitive with big cloud. On the IC side, they can continue R&D and optimize the software to take advantage of the faster networks. I think the Internet Computer will be radically different 10 years from now.

4 Likes

I think in general the IC needs compute subnets that can delegate their nodes to serve high-speed requests. Nodes in these subnets wouldn't go through consensus; it would just be a decentralized computation network like Phala Network, Aleph.im, Livepeer, etc.

If that happens, then I think the IC will be able to provide real-time cloud computing as well.

7 Likes

I was thinking about this a few days ago. This sort of server could compete directly with other decentralized storage providers and with AWS itself.

If we ever want to truly challenge the big cloud, this has to be part of our long-term roadmap.

Let’s do it.

4 Likes

That would be interesting, but similar projects that don't require you to rebuild your entire tech stack would see more adoption, imo.

Maybe some IC equivalent of rollups?

What do you mean by being required to rebuild your entire tech stack?

You have to code your backend the IC way; you can't just take your existing Node.js app and run it on the IC. Other services allow users to run Docker containers on a decentralized cloud.

I remember reading a post months ago by a Dfinity dev hinting at something similar to what you were suggesting, but unfortunately I can't find it again.

1 Like

Oh, I wasn't even talking about a whole decentralized cloud, just a service that does computation. Any canister could make a special request for some work to be done, and the IC would delegate that work to a node in the compute subnet and return/stream the result to the calling canister. Compute subnet nodes would probably have less storage but higher-spec CPUs and GPUs.
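To make it concrete, here's a purely hypothetical sketch of what the calling side could look like. None of these types or endpoints exist on the IC today; the names are made up for illustration only.

```rust
// Hypothetical only: nothing like this exists on the IC today.
// A sketch of how a canister might hand off work to a "compute subnet"
// and get the result back, without the work itself going through consensus.
use candid::{CandidType, Deserialize};

#[derive(CandidType, Deserialize)]
struct ComputeRequest {
    wasm_module: Vec<u8>, // the code the compute node should run
    input: Vec<u8>,       // serialized arguments
    max_cycles: u128,     // budget the caller is willing to spend
}

#[derive(CandidType, Deserialize)]
struct ComputeResult {
    output: Vec<u8>,      // result returned (or streamed) to the calling canister
    cycles_used: u128,
}

/// The calling canister would submit a request and await the result, much like an
/// ordinary inter-canister call, except that a single high-spec node (CPU/GPU)
/// executes the work instead of every replica in the subnet.
async fn delegate_to_compute_subnet(_req: ComputeRequest) -> Result<ComputeResult, String> {
    // e.g. something along the lines of:
    // ic_cdk::call(compute_subnet_endpoint, "run", (_req,)).await
    Err("hypothetical API; not implemented on the IC".to_string())
}
```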

1 Like

That's more or less what the post described. IIRC, the dev said that in the future we'd be able to choose the desired replication factor for each canister, which on the one hand is pretty cool, but it could also break some assumptions: e.g. right now all IC users expect a certain level of decentralization from dapps, which such an option could lower.

1 Like

It depends on the dapp. A social network or gaming dapp wouldn't need as much replication, whereas a financial dapp would. Right now I don't think computation performance is an issue, but later on it's something that the foundation and the community will have to think about.

There are two parts to “faster internet”: increased bandwidth (i.e. how much data can be transmitted per unit of time; what the OP linked to) and lower latency (i.e. how quickly you can send a message or get a round-trip response).

The former can probably be increased quite a bit, particularly when it comes to the IC. Current node providers are required to have 10 Gbps (gigabits per second) of (guaranteed?) bandwidth. Your average data center has a lot more bandwidth than that; it's just that these are not whole data centers, just racks rented in existing data centers. The problem is that it's not just about the data center's available bandwidth. Currently all IC subnets are spread across at least 3 continents, meaning that all communication needed for consensus (and particularly all the artifacts – ingress messages, signature shares, etc. – and blocks) needs to make it across the Atlantic and/or Pacific, where there is a lot less bandwidth available than at the data center's door, as it were, and a lot more traffic from other sources.

This is also where latency comes in. There are some small things that can be improved (faster routers, full-mesh local networks, etc.) but the fundamental limit is the speed of light. And given that the IC's consensus algorithm (like pretty much any consensus algorithm) requires 2 or 3 network round trips (cross-ocean, in the IC's case), you can't really have what the OP describes (someone presses a button on their controller, the subnet takes some action and returns the outcome) in much less than one second on existing subnets.
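Back-of-envelope, with rough numbers I'm assuming here (about 6,000 km of trans-Atlantic fiber, light in fiber at roughly two thirds of c, ~3 round trips for consensus):

```rust
// Rough, illustrative numbers only (assumptions: ~6,000 km trans-Atlantic fiber path,
// light in fiber at ~200,000 km/s, ~3 round trips for consensus).
fn main() {
    let distance_km = 6_000.0;        // one-way fiber path across the Atlantic (assumption)
    let fiber_speed_km_s = 200_000.0; // light in optical fiber, roughly 2/3 of c
    let one_way_ms = distance_km / fiber_speed_km_s * 1_000.0; // ≈ 30 ms
    let round_trip_ms = 2.0 * one_way_ms;                      // ≈ 60 ms
    let consensus_floor_ms = 3.0 * round_trip_ms;              // ≈ 180 ms
    println!("one way ≈ {one_way_ms:.0} ms, round trip ≈ {round_trip_ms:.0} ms, 3 round trips ≈ {consensus_floor_ms:.0} ms");
    // And that is before block making, execution, message routing and the user's
    // own last-mile latency, which is how you end up somewhere near a second.
}
```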

You can of course put a whole subnet within a single country or data center, or even have a single-replica subnet, to speed things up. (And there is no reason not to have such subnets in the future.) But then you are giving up a lot of the high availability and censorship resistance that the IC offers.

4 Likes

Has Dfinity considered partnering with Syntropy? Apparently their proprietary routing technology can reduce latency by a good margin, especially over long distances.

3 Likes

Thing is, unless Syntropy has figured out how to exceed the speed of light (they likely haven't; if they had, they would have made a lot more money by now than they ever could selling routing technology), you'll still get round-trip times on the order of 100-200 ms across the Atlantic or Pacific, because that's how long it takes light to travel there and back.

Considering that people buy low-latency gaming monitors to shave off single-digit milliseconds of latency, and that you need at least 2 round trips to reach consensus on an IC subnet, hundreds of milliseconds (whether 200 or 500 or 1,000) are not going to convince anyone to play games in the cloud as the OP describes. Hundreds of milliseconds of latency is jarring for anyone, not just gaming enthusiasts.
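To put that in frame terms (assuming 60 fps and a 200 ms cross-ocean round trip):

```rust
// Illustrative only: compares an assumed cross-ocean round trip to a 60 fps frame budget.
fn main() {
    let frame_ms = 1_000.0 / 60.0; // one frame at 60 fps ≈ 16.7 ms
    let round_trip_ms = 200.0;     // assumed cross-ocean round trip
    let frames_of_lag = round_trip_ms / frame_ms;
    println!("one round trip ≈ {frames_of_lag:.0} frames of input lag at 60 fps");
    // ≈ 12 frames of lag from a single round trip, before any consensus rounds on top,
    // while low-latency monitors exist to shave off a frame or less.
}
```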

4 Likes

Of course, real-time computing on the IC is impossible until someone figures out quantum-entanglement-based connections. I asked because, as far as I know, latency and bandwidth are more of a bottleneck for scaling the IC than computing power, so if we could reduce round-trip times by 20%, the whole network would benefit.

2 Likes

I am also interested in this question. We are building an MMORPG on the IC, and multiplayer worries us.

Although quantum cryptography will very likely be a game changer, entanglement doesn’t look promising as a method for communication. The problem is the apparent impossibility of manipulating an entangled particle without breaking its entanglement.

As far as I can see, the options for multiplayer gaming across the Internet Computer are:

  1. Create specialized low-latency subnets (which suffer from the limitations listed above)
  2. Create new types of games that take advantage of the IC architecture

Many people considered phone gaming to be utterly inferior to PC/console gaming until they first experienced Angry Birds.

Similarly, rather than trying to port legacy game parameters, I think the challenge for game designers is to figure out how the IC allows for entirely new types of games.

Figure that out and you’ve got not just a successful game, but a successful IC.

2 Likes

@free This has been nagging at me since you posted it. Have you thought any more about how to incorporate ultra-low-latency potential into the broader IC ecosystem?

Taking the MMORPG example, could it make sense for consensus and replication to be abandoned within a closed system that can still report the important things (loot, stats, etc.) back to the blockchain?

Or, given the speed of light limitation, could introducing some sort of localized consensus mechanism help?

Yes, you should be able to run a subnet consisting of replicas within the same data center / location. Or even a single-replica subnet.

You can definitely run a traditional Web2 application and have it record stats and whatever else on-chain. Or even have a canister use HTTP requests to poll your Web2 application for the stats. But at that point you can no longer claim it’s a Web3 dapp.
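For the polling option, the IC's HTTPS outcalls feature is the mechanism I'd reach for. Here is a rough sketch with the Rust CDK; the exact module path and whether cycles are passed explicitly depend on the ic-cdk version, and the URL and cycle amount are placeholders.

```rust
// Rough sketch: a canister polling a Web2 backend for game stats via an HTTPS outcall.
// Exact API details vary with the ic-cdk version; URL and cycle amount are placeholders.
use ic_cdk::api::management_canister::http_request::{
    http_request, CanisterHttpRequestArgument, HttpHeader, HttpMethod,
};

#[ic_cdk::update]
async fn poll_stats() -> Result<String, String> {
    let req = CanisterHttpRequestArgument {
        url: "https://game-backend.example.com/stats".to_string(), // placeholder
        method: HttpMethod::GET,
        headers: vec![HttpHeader {
            name: "Accept".to_string(),
            value: "application/json".to_string(),
        }],
        body: None,
        max_response_bytes: Some(64 * 1024),
        transform: None, // in practice a transform function helps replicas agree on the response
    };
    // Cycles attached to pay for the outcall; the required amount depends on subnet
    // and response size, so treat this number as a placeholder.
    let (resp,) = http_request(req, 50_000_000_000)
        .await
        .map_err(|(code, msg)| format!("outcall failed: {:?} {}", code, msg))?;
    String::from_utf8(resp.body).map_err(|e| e.to_string())
}
```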

If you want tamper- and censorship resistance, there really isn’t any way around geographically distributing your application. And if you distribute it around the world, then something on the order of 1 second is the best you can achieve in terms of consensus latency. If you give up some of that (and accept that e.g. the U.S. government has jurisdiction over all physical machines that run your application) then you can get lower latency / higher throughput. It’s as simple as that (as far as I can tell).

3 Likes

Just read this old thread and thought it would be interesting to continue the conversation.

I’m working on a related idea introduced here. In summary: an orchestrator canister controlling non-replicated instances which developers can use to run the services that they currently run on AWS.

It's important to underline that this is not meant to substitute for the IC, but to substitute for what cannot currently be done on the IC and is therefore run on AWS.

As @free mentioned, I fully agree that by using these non-replicated instances you would lose “tamper- and censorship resistance”; however, I think we can still manage to create an infrastructure that is better suited to the needs of IC devs and their communities (payment with tokens, and deployment performed automatically from a canister instead of manually by the dev team).

In particular, we could provide tamper resistance by requiring SEV-SNP, and censorship resistance by only relying on these instances for stateless services, so that if one instance goes down the orchestrator canister can immediately redeploy the service on others.
This is also in line with keeping the state (and everything else that can be hosted) on the IC, while relying on these other instances mostly for “side services” which, while necessary, are not critical to the security of a dapp.
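To sketch the failover part of the idea (invented names and structure, not the actual implementation):

```rust
// Minimal sketch of the failover idea (invented names, not the actual implementation):
// the orchestrator canister tracks attested, non-replicated instances and, because the
// services they host are stateless, can point traffic at any healthy one.
use std::collections::HashMap;

struct Instance {
    endpoint: String, // where the off-chain service is reachable
    attested: bool,   // SEV-SNP attestation verified (assumed to be checked elsewhere)
    healthy: bool,    // last health check succeeded
}

struct Orchestrator {
    instances: HashMap<u64, Instance>,
}

impl Orchestrator {
    /// Any attested, healthy instance will do, since the services are stateless
    /// and all persistent state stays in canisters on the IC.
    fn pick(&self) -> Option<&Instance> {
        self.instances.values().find(|i| i.attested && i.healthy)
    }

    /// If an instance stops responding, mark it unhealthy and fail over to another one.
    fn fail_over(&mut self, dead: u64) -> Option<&Instance> {
        if let Some(i) = self.instances.get_mut(&dead) {
            i.healthy = false;
        }
        self.pick()
    }
}
```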

7 Likes

I think it's a good idea.