For example, canister A is deployed in subnet SF, while canister B is in subnet NY. Could they communicate within one IC, or across different ICs (for instance, an IC testnet) if possible?
Canisters can communicate across subnets regardless of location, and it's all handled for you. It's transparent to the developer, so you don't need to do anything.
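A minimal sketch of why this is transparent: from the caller's side you only name the target canister, never its subnet. The `SubnetRouter` type and `route` function below are hypothetical stand-ins for the IC's internal message routing, not real IC APIs.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the IC's routing layer: it maps canister IDs to
// the subnet hosting them. Developers never consult this table directly.
struct SubnetRouter {
    canister_to_subnet: HashMap<&'static str, &'static str>,
}

impl SubnetRouter {
    // Deliver a message addressed only by canister ID; the subnet hop is
    // resolved internally, which is why cross-subnet calls feel local.
    fn route(&self, target_canister: &str, payload: &str) -> Option<String> {
        let subnet = self.canister_to_subnet.get(target_canister)?;
        Some(format!(
            "delivered '{}' to {} via subnet {}",
            payload, target_canister, subnet
        ))
    }
}

fn main() {
    let router = SubnetRouter {
        canister_to_subnet: HashMap::from([
            ("canister_a", "subnet_sf"),
            ("canister_b", "subnet_ny"),
        ]),
    };
    // Canister A calls canister B without ever naming subnet NY.
    println!("{}", router.route("canister_b", "hello").unwrap());
}
```

The point of the sketch: the caller's code would look identical whether B lived in the same subnet or a different one, because the subnet lookup happens below the developer-visible API.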
Re the second part, you could send messages from your own networks too. If you were running a private internal corporate network, for example, you could have it connect out to the main network.
Are there any basic architecture diagrams showing how consensus works across all these subnets (individual blockchains), without the heavy maths, just the core principles? Something like an end-to-end flow diagram of a request-response to a backend canister.
I have concerns if each subnet manages its own consensus and there are only a handful of data centres per subnet. What is the average number of data centre owners per subnet? Will this ratio increase? If not, the way DFINITY scales would feel quite centralised.
Anyone? I would be really interested in understanding this.
I can help clarify here.
First, the most important detail: the NNS governance system (so everybody) decides what size subnets are created. So if people want larger subnets, they can create them. There is no real technical reason why subnets need to be any particular size (the foundation has tested much larger ones).
At Genesis, the NNS subnet started with 28 nodes and application subnets had 7 nodes each. These were starting numbers, chosen so that more subnets could be created as nodes came online. And yes, a canister can call canisters in any subnet; it doesn't matter where they are.
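To put some arithmetic behind those sizes: IC consensus uses the standard Byzantine fault tolerance bound, remaining safe and live as long as fewer than a third of a subnet's nodes are faulty (n ≥ 3f + 1). A quick sketch of what that means for the sizes mentioned above:

```rust
// BFT bound used by IC consensus: a subnet of n nodes tolerates up to f
// faulty nodes, where n >= 3f + 1, i.e. f = (n - 1) / 3 (integer division).
fn max_faulty(n: u32) -> u32 {
    (n - 1) / 3
}

fn main() {
    // A 7-node application subnet tolerates 2 faulty nodes;
    // a 28-node NNS subnet tolerates 9.
    for n in [7u32, 28] {
        println!("subnet of {} nodes tolerates {} faulty nodes", n, max_faulty(n));
    }
}
```

So growing a subnet from 7 to 28 nodes raises its fault tolerance from 2 to 9 faulty nodes, which is the trade-off the NNS weighs when deciding subnet sizes.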
Protocol’s intent is to maximize node ownership independence but also geographic variance. We do not want all nodes (or a majority) to be in a certain region, country, etc.
One of the IC’s key features is resumability. It is relatively trivial for a node to leave or join the network quickly, without affecting the liveness of a subnet. This is important to us because we have seen some projects lose nodes over time as it gets harder and harder to become a full participant in the network. It is also important because we want a node to be able to resume quickly, without downloading the entire state. You can see the video on catch-up packages as part of Chain Key technology.
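A rough sketch of the idea behind catch-up packages: a joining or resuming node starts from a recent certified checkpoint instead of replaying the whole chain from genesis. The type and field names below are illustrative only, not the IC's actual data structures.

```rust
// Illustrative model of a catch-up package (names are hypothetical): it
// certifies the replicated state at some block height, so a resuming node
// only needs the blocks produced after that checkpoint.
struct CatchUpPackage {
    height: u64,        // block height the checkpoint certifies
    state_hash: String, // hash of the replicated state at that height
}

// Blocks a resuming node must still process: only those past the checkpoint.
fn blocks_to_replay(chain_height: u64, cup: &CatchUpPackage) -> u64 {
    chain_height.saturating_sub(cup.height)
}

fn main() {
    let cup = CatchUpPackage {
        height: 99_500,
        state_hash: "abc123".into(),
    };
    // With a checkpoint near the tip, the node replays 500 blocks, not 100,000.
    println!(
        "replay {} blocks from certified state {}",
        blocks_to_replay(100_000, &cup),
        cup.state_hash
    );
}
```

This is why joining or resuming is fast: the work is proportional to the distance from the latest checkpoint, not to the full history of the subnet.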
I recommend you check out the technical AMA from the consensus team. It goes into depth: https://www.reddit.com/r/dfinity/comments/nerppg/ama_we_are_manu_paul_and_diego_we_have_worked/gypf6m0/?utm_source=share&utm_medium=ios_app&utm_name=iossmf&context=3
Sources I use:
- Technical blog post on Consensus - https://medium.com/dfinity/achieving-consensus-on-the-internet-computer-ee9fbfbafcbc
- Rust code of Consensus layer - https://github.com/dfinity/ic/tree/master/rs/consensus/src/consensus
- Early draft of Consensus academic paper - https://eprint.iacr.org/2021/632.pdf
- Video explanation of Consensus - https://www.youtube.com/watch?v=vVLRRYh3JYo
Hope that helps! If not, let us know, I’ll check back on this thread periodically.