Shuffling node memberships of subnets: an exploratory conversation

Welcome to my life, Jordan :upside_down_face:

6 Likes

Attack vector: Node operator collusion

One major attack vector is collusion between node providers. Node providers are currently publicly known, and even if they weren’t, it is feasible that they could find each other through other means: personal networks, Google searches, relationships with data centers, etc. 7 colluding nodes could delete every canister on the subnet.

The fact that each subnet has a relatively low replication factor (compared with other blockchains) makes it relatively easy for node providers to find each other and prepare for an attack. For example, on a 7 node subnet, 3 colluding nodes can halt the subnet, and 5 colluding nodes could perform a potentially undetected attack, with full access to state changes and possibly more (if I am wrong on the math or the capabilities, let me know; I think I am generally correct here).
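To make the thresholds concrete, here is a minimal sketch assuming the standard BFT model in which a subnet of n = 3f + 1 nodes tolerates f faulty nodes (my understanding of the consensus math; corrections welcome):

```python
# A minimal sketch of these thresholds, assuming the standard n = 3f + 1
# BFT fault model (an assumption about the IC's consensus, not a spec).

def bft_thresholds(n: int) -> dict:
    f = (n - 1) // 3
    return {
        "tolerated": f,         # up to f faulty nodes: subnet stays safe and live
        "halt": f + 1,          # enough colluding nodes to stall consensus
        "takeover": 2 * f + 1,  # enough to finalize arbitrary (tampered) state
    }

print(bft_thresholds(7))  # {'tolerated': 2, 'halt': 3, 'takeover': 5}
```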

If I am a node provider, I only need to find two other node providers to cause havoc to a 7 node subnet. Obviously increasing the replication factor would help, but shuffling may help us achieve higher levels of security with lower levels of replication, which is ideal for cost and other reasons. And I think it would be wise to add as many feasible mitigations as possible to the Internet Computer so that it can be incredibly secure.

Since subnet membership is essentially fixed at subnet inception, node providers have an indefinite amount of time to start colluding and preparing an attack. And because canisters currently run in plain text on the nodes, and other information about them is known (I believe you can easily find out which subnet a canister belongs to?), it is relatively easy for node providers to target attacks against specific canisters.

Hiding the canisters from the node operators is, I think, a separate problem that can be mitigated with secure enclaves and possibly other technologies or techniques. But even if node operators don’t know which canisters they are running, they can attack the subnet indiscriminately. If they’re lucky, they’ll get a juicy reward from canisters within their subnet, and maybe even affect other subnets that depend on canisters within theirs.

Though subnets are islands of consensus, it seems very unlikely that one subnet shutting down would not affect other subnets, since canisters will start to depend on other canisters in other subnets.

Shuffling the nodes would help to destroy node operator relationships within subnets. As soon as a relationship formed, it could just as quickly be destroyed.

Now, to make this worthwhile, the network would need many, many node operators; the more the better, I would think. I will discuss this in further comments.

14 Likes

Attack vector: Miner Extractable Value (MEV)

Another possible attack vector is Miner Extractable Value (MEV), a problem that plagues Ethereum as we speak.

My understanding of MEV is this: Ethereum clients are modified to look for lucrative transactions going to certain smart contracts (for example, Uniswap). The modified clients (let’s call them MEV clients) are designed to find these transactions, reorder them, and place the MEV client owner’s own transaction in front: a transaction that takes advantage of the knowledge gained by viewing the order and gas prices of all transactions targeting certain smart contracts. Front running, I believe, is the general term for this type of behavior. There may be other forms of MEV, but the above is my understanding.

So basically, a miner can take advantage of its position of knowledge and reordering power to make money. This causes various issues and, IMO, is undesirable and something to be avoided. It’s a big problem for Ethereum, and they’re having a hard time dealing with it; it seems they’ve basically accepted it as unavoidable and now treat it as a necessary evil.

The question is, would this be possible on the Internet Computer? I believe yes, assuming a subnet had 2/3+ of nodes running a modified replica designed to take advantage of certain canisters.

How can this be mitigated? Secure enclaves is one solution, assuming we can get attestations from the enclave that the replica has not been tampered with.

Another solution is…you guessed it, node shuffling! If as a node operator you never know which subnet you’ll belong to, and thus which canisters you’ll be hosting, it would become difficult to install the correct modified replica to take advantage of MEV. And 2/3 of the other nodes in the subnet would also need to install this software, so even if you convinced a cohort of buddies to join you in running the modified replica, you’d have to hope you’d all be shuffled into the same subnet that hosts the canisters the modified replica is designed for.

I’m not an expert on MEV and my understanding could be off, but as I see it node shuffling would provide an additional layer of protection against MEV. Combine that with secure enclaves that have code attestations, and I believe it would become nearly impossible for MEV to exist on the IC.

11 Likes

Attack vector: Secure enclave side-channel attacks

Secure enclaves will be an excellent addition to the security and privacy of the Internet Computer, helping to hide the execution and storage of canisters from node operators. This will make it harder for node operators to collude: if you don’t know which canisters you are hosting, it is hard to perform a targeted attack against an individual canister. It will also make the IC more private, as it will become difficult for node operators to reveal canister data that is intended to be private and accessed only by authorized parties through the canister function interface.

Unfortunately, based on all of my learning on this subject over the past couple of years, the consensus is that secure enclaves are not perfectly secure and probably never will be. There seems to be a catch-all class of attacks known as side-channel attacks. In the context of secure enclaves, these are indirect attacks that use information such as power consumption, sound, electromagnetic radiation, and possibly other signals to read the supposedly “secure” or “private” information within the enclave.

This is where my knowledge really breaks down…I am not sure how long these attacks take to carry out. But my intuition tells me they would not be simple, and would take a long time to perform: hours, days, or weeks. Sophisticated equipment may be necessary, and the attack may need to target something very specific, like a particular canister, so again, an attacker who does not know which canisters are running on a node would be at a disadvantage.

Assuming the above is all true or close to it, node shuffling again helps! By the time an attacker has all of their equipment set up, the canister they thought they were targeting may have been whisked away. And even for an indiscriminate attacker, the fact that the canisters running within the replica are always changing would make it harder to establish the patterns they might need to perform the attack.

Instead of a node operator knowing which canisters are running on their nodes, and having indefinite time to perform a side-channel attack, we could greatly narrow that window, and hopefully make it shorter than a conceivable attack would take.

8 Likes

Attack vector: Uneven distribution of Byzantines, or lack of security as a network effect

This attack vector (if you can even call it that) perhaps embodies the core of my arguments for why node shuffling is necessary for the security of the Internet Computer.

Right now, security is not a network effect of the IC. As more nodes are added to the IC, the IC as a whole does not become more secure. Adding nodes only increases the throughput of the IC and/or the security of individual subnets. Each subnet is an island of security, in charge of securing only itself. It does not inherit security from other subnets, except in that ingress messages from and egress messages to those subnets would be more secure.

This is not ideal, because the distribution of Byzantines could become too concentrated within individual subnets, even if the BFT requirements of the IC as a whole are met. Phrased differently: even if 1/3 or fewer of all IC nodes are malicious, an individual subnet can currently have more than 1/3 malicious nodes. Individual subnets do not inherit the security of the IC as a whole; they are left to fend for themselves.

With node shuffling, assuming proper randomness that guarantees a proper probabilistic distribution of nodes across subnets (I am assuming this is possible), if the IC as a whole had 1/3 or fewer malicious nodes, then with high probability each subnet would also have 1/3 or fewer malicious nodes.

Security as a network effect thus seems like a very desirable property to have, and it is lacking from the current Internet Computer.

9 Likes

And here are a few previous discussions on this topic by myself and a few others:

1 Like

Designed appropriately, I think the laws of probability would prevent this exact situation from happening. I remember earlier DFINITY materials which explained that with sufficient numbers of committee membership, the random beacon driving probabilistic slot consensus or some other complicated mechanisms would ensure with outrageous (I think the more correct term is overwhelming) probability that no individual committee would be more than 1/3 malicious. The committee size was something like 400, and committees were chosen out of the total number of nodes participating in consensus. Each committee was in charge of validating for a certain period of time or something like that.

3 Likes

Attack: Censorship

If some organization wanted to stop a canister they didn’t like, they would currently only have to take down a few identified nodes.

If nodes were shuffled daily at random times, it would be very difficult to keep up with attacking the right nodes quickly enough before the canister gets passed on to new ones.

10 Likes

It seems the main question being posed is “Which attacks are potentially mitigated and which new vulnerabilities are opened by implementing shuffling node membership of subnets?”

The main attack vectors that could be mitigated seem to be well described by the others here. I see daily (or similar-interval) node shuffling as significantly raising the difficulty of node operator collusion or node censorship. The net effect seems to be to increase the security of any individual subnet so that it approximates a subnet with all existing node operators as members, rather than just its own smaller membership.

To me, it seems the concern for new vulnerabilities centers on the discoverability of other subnet members. As @diegop explains, a contingent of compromised (threatened, corrupted, hacked, etc.) but ostensibly honest nodes could eventually be randomly shuffled into a subnet together (which, given enough shuffling, should eventually happen), discover that the corrupt nodes jointly make up more than 2/3 of the membership, and coordinate to tamper with their subnet’s canisters, especially canisters holding valuable digital assets or information. The best protection against this vulnerability would be to shield the identities of the node operators in a subnet from one another, which I do not know is feasible.

However, the status quo is that any motivated actor or group of nodes could work to corrupt known nodes on known subnets whose membership is static. I would rather have the moving-target problem of shuffled subnets spontaneously being assigned compromised members than the non-moving-target problem of static subnets with fixed (although distributed) points of failure.

4 Likes

I don’t have any new attack vectors to contribute to the discussion, but I think the ones that have been laid out so far seem valid, are concerning to me, and are worth preventing. Obviously the Dfinity foundation has to juggle competing concerns on what to prioritize next so this may not be at the top of the list, but I believe they should be addressed at some point.

I’ve spoken with @lastmjs before about some of these attack vectors, and they seem theoretically sound and potentially not even that difficult to pull off. What worries me even more is that even though Jordan is very smart and has obviously thought about this a lot, he isn’t a security expert by trade. It makes me wonder what other, more obscure vectors might be discovered by those who have studied attack vectors for years.

I’m not saying that node/canister shuffling has to be the solution to these problems. Maybe there’s a better way to tackle them, or maybe there’s some reason these aren’t valid. But if these attack vectors are as valid as they seem, and we know about them, it would be foolish to ignore them, regardless of how low the probability seems. As the Internet Computer grows, so too will the incentive to exploit them. Once again, we may not need to fix them as the very next roadmap item; hopefully for now the care we’ve put into selecting node providers will keep us safe. But we should have a game plan for these, and shuffling seems like it would do the trick nicely.

Also, thanks for everyone’s comments so far. I’m not in the weeds enough yet on this, so this thread has been really enlightening.

6 Likes

Thanks everybody for the insightful comments! I understand that it’s a lot of effort to write all these thoughts down in detail, but I think this really helps the discussion. To keep an overview: so far, people have brought up the following possible advantages of shuffling:

  • node operator collusion (a majority of the nodes of a subnet are malicious and tamper with the subnet state)

  • security as a network effect (very related to the point above)

  • miner extractable value / front running

  • attacking enclaves

  • censorship resistance

I’ll first reply to the ones that I find less convincing, and then comment on the ones that I think are the best arguments in favor of shuffling.

miner extractable value / front running

A block maker in any blockchain can order messages in any way it likes. That is because we haven’t agreed on an ordering yet; reaching agreement on an ordering is exactly why we put these messages on a blockchain. This means a malicious block maker can always try to order messages in a way that is favorable to itself. A concrete example of such an attack: if the block maker sees a big sell order to some dex coming in, it can quickly place a sell order itself and make sure it appears in the block before the other sell order. Note that this attack requires only a single malicious block maker; it does not need collusion between multiple node operators, because the MEV block maker still proposes a block that contains only valid messages, so all other replicas will accept it. They cannot know that the MEV block maker is actually front running, because there is no agreed-upon ordering of messages yet.

Another solution is…you guessed it, node shuffling! If as a node operator you never know which subnet you’ll belong to, and thus which canisters you’ll be hosting, it would become difficult to install the correct modified replica to take advantage of MEV. And 2/3 of the other nodes in the subnet would also need to install this software, so even if you convinced a cohort of buddies to join you in running the modified replica, you’d have to hope you’d all be shuffled into the same subnet that hosts the canisters the modified replica is designed for.

See my point above: this attack can be done by a single malicious replica; you don’t need a majority. The idea that the node operator/replica doesn’t know which subnet it belongs to and which canisters it hosts is very difficult to realize: a replica must only accept messages to canisters that are hosted on its own subnet, so it couldn’t properly validate blocks without knowing which canisters it’s running.

How can this be mitigated? Secure enclaves is one solution, assuming we can get attestations from the enclave that the replica has not been tampered with.

I think this is a promising way to address MEV, and the foundation is already investigating how this can be done. So if we run this in some TEE and require attestations, it will be way harder to run a malicious block maker. Other countermeasures can be taken in the dapp itself. For example, if you are a dex, you could think of batching transactions and using secure randomness to shuffle transactions and execute them in a random order, making front running much harder.
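To illustrate the batching idea, here is a minimal, hypothetical sketch (not actual IC or dapp code) of a dex executing a closed batch of orders in a randomly shuffled order:

```python
# Hypothetical sketch of the batching mitigation described above: close a
# batch of orders, then execute them in an order fixed by unpredictable
# randomness. On the IC, that randomness could plausibly come from the
# management canister's raw_rand, obtained only after the batch is closed
# (an assumption about how a dapp would wire this up).

import secrets

def execute_batch(orders: list, execute) -> None:
    shuffled = list(orders)
    # Fisher-Yates shuffle driven by an unpredictable random source, so a
    # block maker gains nothing from where its order sits in the batch.
    for i in range(len(shuffled) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    for order in shuffled:
        execute(order)

execute_batch(["sell A", "buy B", "sell C"], print)
```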

Attacking enclaves

That’s an interesting point that I hadn’t thought about yet. I’m not an expert on these things, but I would imagine the best way to get info out of the TEE via side channels is to target the encryption key, not to attack canisters directly. Changing subnets would not change your TEE, so shuffling wouldn’t “reset” the side-channel attack. I’ll try to get some of the experts involved in this one.

censorship resistance

Right, so an attacker targeting a certain canister can figure out which subnet that canister is on and attack nodes from that subnet. @MalcolmMurray, what type of attack are you thinking of here? And how quickly do you imagine nodes would be shuffled? If a couple of nodes are swapped out, say, every hour, and the attack is a simple DoS attack, then I’m not sure shuffling helps: an attacker could easily keep up with the subnet membership changes because it’s easy to target a new node with a DoS attack. For attacks that take a long time to perform, shuffling could indeed help.

Now, I think the main ones are left.

node operator collusion and security from the network effect

This is a super important one and, I think, the core of this discussion. The argument of “security from the network effect”, i.e. the bigger the ICP grows, the more secure it becomes, is obviously super compelling.

So the Internet Computer could constantly switch nodes between subnets such that, for example, a node never spends more than 1 week on a given subnet. If we further assume that attacking a node takes > 1 week, then the adversary can no longer target specific canisters/subnets, which is a nice property to have.

Designed appropriately, I think the laws of probability would prevent this exact situation from happening. I remember earlier DFINITY materials which explained that with sufficient numbers of committee membership, the random beacon driving probabilistic slot consensus or some other complicated mechanisms would ensure with outrageous (I think the more correct term is overwhelming) probability that no individual committee would be more than 1/3 malicious. The committee size was something like 400, and committees were chosen out of the total number of nodes participating in consensus. Each committee was in charge of validating for a certain period of time or something like that.

Right, you’re referring to our old whitepaper. This is where things get more tricky. We now essentially sample a new subnet membership every week, and we want that sampling to draw on the security of the overall pool of nodes. Suppose our total pool of nodes is very large (so I can get away with using binomial distributions), and let’s do some computational examples. Suppose up to probability_corrupt of all nodes are corrupt and we sample a subnet of size subnet_size; we then compute (using the binomial distribution) how likely it is that we select a subnet with more than 1/3 of the nodes corrupt, which I call p_insecure. Below you see some examples; here is the google sheet I used.

So what you see is that if, for example, we assume 1/10 of all nodes in the IC are malicious and we randomly select a 28 node subnet, there is a 2^-12 probability that the subnet is unsafe, because more than 1/3 of the nodes are corrupt. This is a small chance, which is good, but if we regularly choose new subnet members, then every sampling needs to be successful. If we have 50 subnets and do this every week, we do this ~2500 times per year, and each one of those must be successful. I think this is the main price of reshuffling. If we don’t reshuffle, we don’t need to get it right 2500 times per year, but just once, which is obviously a better chance; so theoretically we could tolerate more corruption overall. How do people feel about these numbers?
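For the curious, here is a quick script (same large-pool binomial assumption as above) that reproduces these numbers:

```python
# A sketch of the calculation above (binomial approximation, assuming a
# very large total node pool). Requires scipy.

from math import log2
from scipy.stats import binom

def p_insecure(subnet_size: int, probability_corrupt: float) -> float:
    # P(more than 1/3 of the sampled nodes are corrupt).
    # "More than 1/3" of n nodes means at least floor(n/3) + 1 corrupt,
    # and binom.sf(k, ...) gives P(X > k).
    return binom.sf(subnet_size // 3, subnet_size, probability_corrupt)

p = p_insecure(28, 0.10)
print(f"one 28-node sample at 10% corruption: 2^{log2(p):.1f}")  # ~2^-12

# Resampling 50 subnets weekly means every one of ~2500 yearly samples
# must come out secure:
p_any_insecure = 1 - (1 - p) ** 2500
print(f"at least one insecure subnet per year: {p_any_insecure:.2f}")  # ~0.45
```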

So to recap, based on these numbers, I think we can say:

  • shuffling nodes is nice against “adaptive” adversaries that target one subnet, assuming it takes some time to corrupt a node
  • shuffling nodes is not so nice in the sense that we select new subnet memberships every time, and each time we must select a secure set of nodes. We can get unlucky and combine too many malicious nodes, which can then collude to break the subnet
  • static subnets are weak against adaptive adversaries
  • static subnets are better against static adversaries

10 Likes

I think the question comes down to what is a more realistic modelling assumption. An adaptive adversary seems much more realistic to me, where there is some probability that a node is corrupted over time - say 5% in any time interval. In this real-world scenario you may expect a node that has been corrupted to stay corrupted, so you would effectively be running repeated trials anyway - the only difference being that without shuffling you have no mechanism to prevent the build-up of colluding node providers.

Also, a factor of 2500 doesn’t sound too high. I would imagine that this could be compensated for by a small increase in the number of nodes running consensus? It should only need a logarithmic increase in the number of node providers?

6 Likes

Beautiful points. To further crystallize the implications of your fault analysis, @Manu, I added some columns and resulting graphs that forecast the odds of staying at or below the Byzantine fault tolerance threshold in a subnet. This is all as I best understand it, but someone please correct me if I am wrong. I take a time horizon of a hundred years for various shuffling intervals (daily, weekly, monthly, quarterly, yearly, and never) and compute the odds of a halt-free (no faults) subnet as (1-p_insecure) for one instance of a subnet, raised to the power of the total number of shuffles (100 years*#ofShuffles/year), shown below, but all available at this google sheet.

Basically, the message of these graphs is that the less shuffling, the better, in order to avoid grouping corrupted nodes into a subnet, assuming a fixed level of corruption. Having more nodes in a subnet obviously helps reduce the likelihood of a fault. (In the spreadsheet I calculate up to 50 and 100 nodes too, but omitted them here for the sake of space.)
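For reference, a script version of the spreadsheet calculation (same binomial assumptions as above, shown here for a single 28-node subnet at 5% assumed overall corruption):

```python
# Probability of a 100-year halt-free run for one 28-node subnet at 5%
# assumed corruption, for various shuffling intervals
# ("never" = a single initial sampling).

from scipy.stats import binom

def p_insecure(n: int, p: float) -> float:
    return binom.sf(n // 3, n, p)  # P(more than 1/3 of n nodes corrupt)

shuffles_in_100_years = {"daily": 36500, "weekly": 5200, "monthly": 1200,
                         "quarterly": 400, "yearly": 100, "never": 1}
for interval, samples in shuffles_in_100_years.items():
    p_halt_free = (1 - p_insecure(28, 0.05)) ** samples
    print(f"{interval:>9}: {p_halt_free:.5f}")
```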

However, I think the model assumes a percentage of corrupted nodes regardless of shuffling, even though intuitively it seems the temptation to collude in a static subnet should be more and would grow over time. If we assume that staying on a subnet for a long time comes with a greater temptation to collude, then we would have to run some new numbers. The problem is, the math gets more complicated and requires its own batch of assumptions. Maybe someday soon.

9 Likes

Very nice @rubenhorne!

I think rubenhorne’s data above actually shows that it still hurts quite a bit. We need subnets to be secure with overwhelming probability. We see that if we assume at most 5% of all nodes are malicious and we sample a 28 node subnet, the probability that it will be insecure is 2^-20.78, so almost 1 in 2 million. That seems pretty good. But now if we do this 2500 times and we need to be right every time, the failure probability is 2^-9.5, so more than 1 in a thousand. I think that probability is not small enough to call secure.

We can indeed increase the subnet size to counter this. As an example: sampling a 28 node subnet once (with overall corruption at 5%) has roughly the same success probability as sampling a 46 node subnet 2500 times.
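A quick sketch checking that comparison under the same binomial assumptions:

```python
# One sample of a 28-node subnet vs. 2500 samples of a 46-node subnet,
# both at 5% assumed overall corruption (binomial approximation).

from scipy.stats import binom

def p_insecure(n: int, p: float) -> float:
    return binom.sf(n // 3, n, p)  # P(more than 1/3 of n nodes corrupt)

once_28 = p_insecure(28, 0.05)                        # ~2^-20.78
repeated_46 = 1 - (1 - p_insecure(46, 0.05)) ** 2500  # 2500 samples
print(once_28, repeated_46)  # same order of magnitude, i.e. "roughly similar"
```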

7 Likes

I agree with the importance of the assumption of adaptive adversaries. I’m more concerned with node operators that start out honest or seem honest at first, and then start becoming corrupted over time.

I have some rebuttals to arguments made and some deeper information I hope to get out next week.

6 Likes

I will point out that 4 minutes is very much a theoretical lower limit. In practice, it would be on the order of hours rather than minutes, for a number of reasons:

  1. Some node operators only have 1 Gbps and that may only be burst traffic, not sustained. I believe we currently require 10 Gbps from new node operators, but again that may be for burst, not sustained traffic. As a specific example, a few months ago when I was testing using QUIC for XNet messaging, the highest sustained throughput I saw between Europe and US West Coast was 400-500 Mbps.

  2. Most subnets are well below the 300 GB limit right now, but that’s exactly because we explicitly tried to delay this for as long as possible (by adding subnets to spread the load), specifically so we don’t require many, many hours to state sync a node should it need replacement. In the long term we will want to use as much of that 300 GB capacity as possible in order to make the IC economically viable.

  3. An admittedly less important, but still worth mentioning, point is that transferring 300 GB assumes the state of the subnet does not change (much) for the duration of the state sync. This may be a good enough approximation if state sync actually takes 4 minutes, but if we go with a more realistic 1 hour estimate, then by the time we’ve transferred the whole 300 GB it is likely that some of it will have changed. So we’ll need to state sync those changes, and then do that again and again, until we’re fully caught up. Which means it is likely we may need to transfer quite a bit more than 300 GB.

  4. Even given an actual 10 Gbps of sustained bandwidth per data center, there is an underlying assumption here that nothing else is using that bandwidth: no gossip, no ingress or canister-to-canister traffic, no replica or OS upgrades, and (more importantly) only a single node doing state sync at a time. Some discussion here suggested swapping out a node every hour. If so, you would definitely have more (and possibly a lot more) than 1 node state syncing at any given time, resulting in only a fraction of this (very much theoretical) 10 Gbps being available to each node. If you now have to do an emergency swap of a node on a subnet (because some other node went up in smoke or whatever), the situation becomes even worse (particularly for that emergency swap).

All of the effects above are cumulative: if you only have half the theoretical 10 Gbps available for sustained traffic; the subnet state is very close to 300 GB; you need to transfer 150% of the state size before you’re fully caught up; half of the available bandwidth is used by higher priority traffic (gossip, ingress, XNet); and one other replica is doing state sync; then you suddenly have 1/8th of the bandwidth to transfer 450 GB, i.e. you’re 12x under the theoretical maximum.
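Back-of-the-envelope, the cumulative effect looks like this (a sketch of the arithmetic above, not a measurement):

```python
# Sketch of the bandwidth arithmetic above (decimal GB, theoretical figures).

state_bits = 300e9 * 8          # 300 GB subnet state
link_bps = 10e9                 # 10 Gbps theoretical per-node bandwidth

print(state_bits / link_bps / 60)        # 4.0 minutes: the theoretical floor

# Stack the effects: sustained is half of burst; half of what remains goes
# to higher-priority traffic; one other replica is state syncing too.
effective_bps = link_bps / 2 / 2 / 2     # 1.25 Gbps
to_sync_bits = 450e9 * 8                 # ~150% of state due to churn
print(to_sync_bits / effective_bps / 60) # 48 minutes, i.e. 12x slower
```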

10 Likes

I’m thinking more along the lines of political or even military attack. The more often the better, subject to practical constraints.

3 Likes

“Adaptive” adversaries who are tempted by a specific target seem more realistic and relevant to me than “static” adversaries who are malicious on principle and would attack indiscriminately.

Also, if I understand correctly, this analysis relies on the assumption that these universally malicious nodes would instantly know when they have been grouped in a subnet with collaborators. I guess this is possible if they know each other ahead of time, but I think it’s more likely that they would need to discover each other, which is a risky and time-consuming process, made less likely by node-shuffling.

5 Likes

Something I am considering: with secure enclaves we gain the most important protection, namely that a canister’s data and code remain unreadable even in the face of a 2/3 colluding subnet. Under that assumption, our attack vectors are primarily censorship, side-channel canister attacks, and indiscriminate subnet attacks.

Censorship - a set of malicious nodes with economic or non-economic motivations to “silence” a particular canister can selectively halt messages to that canister.

Side-channel canister attack - a subnet must be able to read ingress messages for a canister to some extent. Even with homomorphic encryption you’re still going to have a blob of canister-id + some data, and it’s conceivable that, given sufficient time, a malicious subnet could derive correlations between ingress messages, egress messages, and “public” outputs.

Indiscriminate subnet attack - 2/3 malicious nodes can decide to indiscriminately halt subnet operations.

For these possibilities, I put forward the idea that reducing the amount of time a given canister spends on a particular subnet (from the current unbounded period) to some bounded period would mitigate these vulnerabilities. Censorship and subnet attacks in particular become less feasible if canisters are periodically reassigned to different subnets. And since side-channel attacks on a canister can theoretically be performed by any single node in any subnet, shuffling canisters would limit the amount of time an attacker has to derive those inputs and outputs.

With canister shuffling, calculated against the probability of a given subnet being malicious with 33% malicious nodes on the network (p_halt^2 * 2 * duration_shuffle), we show 35% downtime on canisters (rather than some canisters being affected 100% and others being unaffected). And while the total amount of downtime remains the same for a given p_halt, a randomly selected set of hours or minutes is much less effective for the attacker than being able to take a canister down entirely for months or years. (Unfortunately, I can’t link the sheet I used to calculate this.)

I might be missing something here, but I think randomly reallocating canisters to subnets also has some other advantages:

  1. A given canister state is going to be much “smaller” than the whole subnet state. A few GBs is the absolute maximum we would have to transmit at any time.
  2. While subnetworks do not gain from the total security of the network - the canisters running on them do.
  3. If we still shuffle subnetworks semi-randomly but at a lower frequency, it limits the ability of an attacker to take over a given subnet and wait for the target canister to appear, while reducing the impact of randomly-bad subnets.

I’m leaving out the discussion of takeover and reading canister data, because I feel that is a risk that deserves its own set of solutions and can’t necessarily be adequately addressed by “shuffling” solutions: even one p_takeover occurring without secure enclaves would let the malicious actor read all code + state in the canister. I can’t post links, so here’s the output; contact me if you want to see the sheet I based this on:

2 Likes

What if we could transfer canisters between subnets? E.g. a special API that, once called, makes the canister reappear on some other random (and available) subnet. The canister itself can cover all the expenses. The developer behind the canister can decide for themselves whether they want to use this mechanism to provide a more secure experience or not.

Like an update_code call, but update_subnet.

1 Like