Let's Review the AO Whitepaper's Characterization of ICP and AO's Security Model

@lastmjs for example… I found the paper to have all this talk about centralization, consensus, etc…

but when it came to the basics: compute, memory, determinism, ecosystem, I found it lacking. I don't think this is an AO problem. I think this is a median example of the writing I dislike in Web3 papers, or academia as a whole :slight_smile:

2 Likes

I think their main advantage, which is also true for MegaETH and modular approaches, is their flexibility in using different kinds of nodes, from large sequencers in data centers to light nodes on phones.

Besides being able to run AI, another notable feature is their capacity for very low latencies—in the case of MegaETH, as low as one millisecond.

I wonder if it’s on the roadmap, or even possible, to provide such services using the IC. For some applications, two seconds (or more) of latency feels like an eternity. @diegop

"Real time" usually means on the order of 100 milliseconds.

A good latency in centralized web services is 20-50 milliseconds.

1 millisecond (in AI, of all cases) is enough for me to say:

  1. I’m suspicious
  2. Possibly, if I dive deeper, it will make sense, but I’m tapping out here rather than diving deeper
1 Like

Yeah, agreed. Maybe it’s just pre-confirmations. My point is that it would be nice for some subset of devs to have much lower latency.

In general, providing a more flexible environment, apart from the current IC implementation, could be decisive in attracting more devs. I still remember the times when we had talks of Badlands but ended up with arguably the opposite (Utopia).

Flexibility in terms of latency, governance (from zero governance up to the NNS), computation (AI), async to sync, etc. The perception from devs outside the ecosystem is that we’re very rigid and all controlled through the NNS, plus the typical suspicion that it’s centralized, given the genesis chart.

1 Like

The two seconds applies only to update calls.

A lot of the web behaves similarly when writing to databases. If you like a comment on Instagram, the Instagram app will show that you liked it “immediately,” but the write may actually take 2-3 seconds to propagate globally.

App developers in 2024 are really good at mixing frontend state with high-latency and low-latency calls. I suspect IC devs will settle on similar patterns.

Query calls take around 200 milliseconds, and I suspect we are not using them enough in the IC ecosystem.
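To make the pattern concrete, here is a minimal sketch of the "show it immediately, confirm later" approach from the Instagram example, mixing a fast query call with a slow update call. It assumes hypothetical actor bindings with a `getLikes()` query method and a `like()` update method (roughly the kind of interface `dfx generate` would produce for a simple counter canister); the names, types, and timings are illustrative, not from any real project.

```typescript
// Hypothetical generated bindings for a simple "likes" canister:
//   getLikes(): Promise<bigint>  -- query call, roughly hundreds of ms
//   like(): Promise<bigint>      -- update call, ~2 s (goes through consensus)
interface LikeActor {
  getLikes(): Promise<bigint>;
  like(): Promise<bigint>;
}

// Optimistic-update pattern: render the like immediately, reconcile later.
async function onLikeClicked(actor: LikeActor, render: (n: bigint) => void) {
  // 1. Fast read via a query call to get the current count.
  const current = await actor.getLikes();

  // 2. Optimistically render the new count right away, before consensus.
  render(current + 1n);

  try {
    // 3. Fire the slow update call; the ~2 s wait happens in the background.
    const confirmed = await actor.like();
    // 4. Reconcile the UI with the confirmed, on-chain result.
    render(confirmed);
  } catch (e) {
    // 5. Roll back the optimistic update if the update call failed.
    render(current);
    console.error("like() update call failed:", e);
  }
}
```

The point is that the user-perceived latency is the query call plus local rendering; the two-second update call only determines how quickly the result is durably recorded.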

Also, I don’t know how much potential there is in social networks. Perhaps blockchains are mostly meant for DeFi; in that case, lower latency is important.

In general, whenever a customer (a.k.a. a developer) comes with a suggestion, we should probably listen to them instead of trying repeatedly to instill the idea that the IC is perfect as it is. Many successful companies have a customer-obsession mindset, even our evil competition, AWS.

I agree with your sentiment. I am sorry I came off that way. You are right.

My intent was not to say that the IC is perfect.

My intent was more to say:

I personally believe latency is a red herring for update/query calls… and that there are half a dozen more vital blockers.

I should have been more clear.

4 Likes

They came out with this thing. Thoughts? @PaulLiu

I would assume this is complete garbage, simply a joke…

Now that AO has gone live, has anybody learned more about the inner workings of this platform?

1 Like