The Internet Computer in a world of modular blockchains

I think they do need consensus, but since adding each rollup only increases gas use logarithmically, it pays for everyone to roll their transactions into one. The base proof is expensive; it's only after chaining a few proofs through that you get to a net positive. After that, everything is gravy.
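To make the amortization intuition concrete, here's a back-of-the-envelope sketch (the gas numbers are made up for illustration, and it ignores how verifier cost actually grows with recursive proofs):

```python
# Toy amortization model; all gas numbers are hypothetical placeholders.
# A batch pays one fixed proof-verification cost on L1 plus a small
# per-transaction data cost for each rolled-up transaction.
BASE_VERIFICATION_GAS = 500_000   # assumed fixed cost to verify one proof
PER_TX_DATA_GAS = 2_000           # assumed calldata cost per rolled-up tx

def gas_per_tx(batch_size: int) -> float:
    """Average gas each transaction pays when batch_size txs share one proof."""
    return (BASE_VERIFICATION_GAS + PER_TX_DATA_GAS * batch_size) / batch_size

for n in (1, 10, 100, 1000):
    print(f"batch of {n:4d}: ~{gas_per_tx(n):,.0f} gas per tx")

# The fixed proof cost dominates small batches; as the batch grows, the
# per-tx cost approaches the per-tx data cost, which is the "gravy" part.
```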


Isn't that what they do now? I thought the big rollups like Optimism and StarkNet basically have just one sequencer that batches all the transactions. I think they want to decentralize it at some point, but I don't think it is yet.

True, but can they somehow utilize the L1 for data storage?


Correct. Celestia optimizes the L1 for this purpose (data availability sampling).

ZK proofs are the cherry on top, as they mean a single node can produce and present valid blocks, so the blockchain doesn't need to be slowed down by BFT consensus. That can improve latency, but it doesn't gain you much over the IC's throughput. Plus you are now limited by the speed at which you can generate ZK proofs, which in some cases may be slower than BFT consensus.

ZK proofs introduce other necessities, like the need for consensus on data availability, which is what Celestia is used for. I don't think it's the only solution by a country mile… what DFINITY has going is a more pragmatic and simpler way to scale, imo.

It's simpler, but is it as secure if subnets don't share consensus?


Was thinking about this question and I wonder if we could have a model where each subnet works a bit like an Arbitrum AnyTrust chain: if any node disagrees with a state change, it posts the data to another subnet and a random sample of nodes across all subnets runs the fraud proof. If there is fraud, the NNS rolls back that subnet and punishes the involved nodes.
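As a rough sanity check on the sampling idea: in a model like this, fraud gets caught as long as the random audit sample contains at least one honest node. Here's a toy hypergeometric calculation (the node counts are made up, and this is not how the IC actually works today):

```python
from math import comb

def prob_fraud_detected(total_nodes: int, malicious: int, sample_size: int) -> float:
    """Probability that a uniformly random audit sample contains at least one
    honest node (toy model: a single honest auditor suffices to report fraud)."""
    if sample_size > malicious:
        return 1.0  # the sample cannot consist entirely of malicious nodes
    p_all_malicious = comb(malicious, sample_size) / comb(total_nodes, sample_size)
    return 1.0 - p_all_malicious

# Hypothetical numbers: 1,000 nodes across all subnets, 300 malicious, 20 auditors sampled.
print(prob_fraud_detected(1_000, 300, 20))  # extremely close to 1
```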


Node shuffling would basically enable shared security, or am I mistaken?

It's a nice thread, we need much more discussion around the topic of shared security. It's literally all that matters in the end.

Concerning scaling, personally, I find it fascinating that something like chain key tech casually comes out of the blue and is incredibly useful, while nobody outside of the IC ecosystem seems to understand its implications.

This makes me hopeful that the history books on scaling have not been written yet and that we might see unexpected solutions.

For example, I generally like to draw comparisons between democratic systems and blockchains. Both delegate a task to a group of people (politicians/miners) while preventing them from acting maliciously, whether by seizing power or by double spending. In democracies we can use our built-in identity systems/sensors as well as efficient punishment mechanisms (prison) to disincentivize malicious behavior. By putting politicians in prison, we can penalize malicious behavior beyond any kind of "stake" they could theoretically be required to put up.

Additionally, we can choose to pick politicians that are less likely to collude based on their identity, e.g. each coming from a different state. If we did not do that, we would potentially need many more politicians to ensure that they are sufficiently unlikely to collude. The analogy here is that if we want to ensure that miners don't collude, we can either have a lot of them, or fewer that are very unlikely to collude, which could potentially be achieved by them not being anonymous. The latter allows us to simply increase hardware requirements to scale (throwing hardware at the problem).

So having an on-chain governance system might allow us to build secure systems that scale by throwing hardware at the problem without losing security. That might be an unconventional way to scale that could just work. My thoughts are incomplete around how/if the IC is doing this exactly and whether it really is robust, but having known entities as nodes is definitely going in that direction.

I think these aspects are often completely forgotten when devs discuss scaling. A game-theoretic problem plays into the whole thing, so computer science and cryptography are not necessarily the only tools we have to solve it.


It would improve it, but not sure I’d call it “shared”

And yeah, knowing the identities of the nodes + on-chain governance is interesting, although as we've seen it comes with some pretty serious drawbacks (Mario).

Pretty sure that’s not true


Yeah, went back and checked, I remembered that one wrong :) thanks.

There are more details in here


Would you mind elaborating on why you would not consider it shared?

My basic understanding is that, game-theory-wise, we try to trap miners in a coordination game where no single one has enough resources to bring us to another (worse) equilibrium (of the whole thing not working). With sharding, we try to detach that constellation from the replication factor of state. When doing so, we can essentially have the same miners run more blockchains simultaneously. To still trap them in that game-theoretical constellation, we randomly switch them out as often as possible. To allow the subnets to communicate we either use a beacon chain, which is run by everyone and thus becomes the new bottleneck, or else we brilliantly come up with chain key tech to allow them to talk to each other directly, thus getting rid of the bottleneck (just to briefly celebrate that technology :)).
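For a bit of intuition on why random assignment and frequent shuffling matter, here's a toy hypergeometric calculation of how likely a single randomly drawn subnet is to end up with more colluding nodes than its BFT fault tolerance (all numbers are made up, not real IC parameters):

```python
from math import comb

def prob_subnet_corrupted(total: int, colluding: int, subnet_size: int, fault_tolerance: int) -> float:
    """Probability that a randomly drawn subnet contains more than
    `fault_tolerance` colluding nodes (hypergeometric tail)."""
    p = 0.0
    for k in range(fault_tolerance + 1, subnet_size + 1):
        if k > colluding or subnet_size - k > total - colluding:
            continue  # impossible draw for these parameters
        p += comb(colluding, k) * comb(total - colluding, subnet_size - k) / comb(total, subnet_size)
    return p

# Hypothetical numbers: 1,000 nodes total, 100 colluding, subnets of 13 nodes,
# BFT tolerating f = 4 faulty nodes; prints roughly half a percent here.
print(prob_subnet_corrupted(1_000, 100, 13, 4))
# Frequent reshuffling limits how long any unlucky draw can be exploited,
# which is the game-theoretic point about switching miners out often.
```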

Besides chain key tech, it's basically what Ethereum aims to do, right?

If you refer to it only happening once a day or less than that, then I agree that more often is always better, but would say that this is unexplored territory. Thanks for looking that up btw :). If you're thinking of something else, then please share; I'm eager to learn more.


Yeah, this. I'm not an expert on the subject by any means, just trying to start a discussion and hopefully bring more awareness of what's going on in the wider crypto ecosystem to the IC community. There's a tendency to ignore the rest and think the IC is somehow superior in every aspect. While the IC is underrated in many ways, that can be a dangerous mindset to have. There are many exciting things going on beyond just the IC.


I agree we should definitely keep tabs on what's going on outside the IC. I find ZK rollups very interesting. What do you think are some of the drawbacks of using rollups?

Not sure how interoperable dapps built on them are. First of all, they may be limited to using the same VM as the L1 they are connected to. And I also don't know how practical it will be for dapps deployed on different rollups to talk to each other; maybe it's possible, since they should all have access to the same L1 data/blocks.


Just found out L3 is a thing: "Fractal Scaling: From L2 to L3. It's layers all the way down" (StarkWare on Medium).


Yeah, interoperability is an active area of research, although since different rollups share the layer 1, they actually don't need chain key to communicate with each other, as I understood it.
They can use any VM they want btw. Some cool ones are indeed StarkWare, https://zksync.io/, https://risczero.com/ (ZK RISC-V), and https://cartesi.io/ (optimistic rollup, RISC-V + Linux).

In the case of ZK rollups they have to use the same VM as the parent layer, because the parent layer has to be able to run the rollup's transactions to verify them when needed.

I think you are mixing it up with optimistic rollups, but even there it's not required to redo all the computation, only the part they disagree on: Inside Arbitrum · Offchain Labs Dev Center

In the case of Cartesi, it's a RISC-V interpreter that's implemented on top of the EVM on layer 1. Sounds crazy, but since it only has to run a single instruction, it doesn't matter.

Optimism is similar but uses MIPS:

  1. Run that single instruction on the L1 chain. minigeth is compiled to MIPS because it’s easy to write a simple on-chain MIPS interpreter (only 400 lines!).
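To illustrate the "only run the single disputed instruction on-chain" idea, here's a toy version of an interactive fraud proof. It's not Arbitrum's or Optimism's actual protocol, and the step function is just a stand-in for a real VM instruction: two parties commit to an execution trace, bisect down to the first step they disagree on, and only that one step is re-executed to settle the dispute.

```python
# Toy interactive fraud proof; simplified, not any real protocol.
# Both parties claim a sequence of machine states; bisection finds the
# first step where they diverge, and the "chain" re-executes only that step.

def step(state: int) -> int:
    """Stand-in for one VM instruction (here a trivial state transition)."""
    return state + 1

def honest_trace(start: int, n_steps: int) -> list[int]:
    trace = [start]
    for _ in range(n_steps):
        trace.append(step(trace[-1]))
    return trace

def dishonest_trace(start: int, n_steps: int) -> list[int]:
    trace = honest_trace(start, n_steps)
    trace[-1] += 100  # claim a wrong final state (simplified cheating)
    return trace

def resolve_dispute(trace_a: list[int], trace_b: list[int]) -> str:
    # Bisection: both agree at index lo, disagree at index hi.
    lo, hi = 0, len(trace_a) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    # "On-chain" part: execute just the single disputed instruction.
    correct = step(trace_a[lo])
    if trace_a[hi] == correct:
        return "A wins"
    if trace_b[hi] == correct:
        return "B wins"
    return "both wrong"

a = honest_trace(0, 1024)
b = dishonest_trace(0, 1024)
print(resolve_dispute(a, b))  # "A wins" after ~log2(1024) = 10 bisection rounds
```

The point of the sketch is the asymmetry: the full trace can be arbitrarily long, but the chain only ever needs an interpreter for a single instruction, which is why a 400-line MIPS interpreter (or a RISC-V one on top of the EVM) is enough.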

This is the great debate: who will win? Rollups or high-performance L1s???