Long Term R&D: Privacy: MPC (proposal)

1. Summary

This is a motion proposal for the long-term R&D of the DFINITY Foundation, as part of the follow-up to this post: Motion Proposals on Long Term R&D Plans (Please read this post for context).

This project’s objective

Since computation is replicated on many nodes, users risk that one of the nodes may leak data. Cryptographic protocols for multi-party computation (MPC) enable several nodes to jointly compute a function on confidential data without revealing the inputs or intermediate data. In theory, any function can be securely evaluated with MPC, but users pay a performance overhead. This research will provide MPC functionality on the IC for users who need strong privacy guarantees, aiming at both versatility and practicality.
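To make the idea concrete, here is a minimal sketch of additive secret sharing, the basic building block behind many MPC protocols (a generic textbook illustration, not the IC's actual protocol): each node holds one random-looking share, no single share reveals anything about the input, yet the nodes can still jointly compute a sum.

```python
import secrets

P = 2**61 - 1  # a prime defining the field; arbitrary choice for this example

def share(secret: int, n_nodes: int) -> list[int]:
    """Split `secret` into n additive shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_nodes - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Two users each secret-share their confidential input across 3 nodes.
a_shares = share(42, 3)
b_shares = share(100, 3)

# Each node adds the two shares it holds locally; it never sees 42 or 100.
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]

# Only when all shares are combined does the (intended public) result appear.
assert reconstruct(sum_shares) == 142
```

Real MPC protocols add mechanisms for multiplication, resilience against malicious nodes, and verifiable outputs on top of this idea, which is where the performance overhead comes from.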

2. Discussion lead

Jens Groth

3. How this R&D proposal is different from previous types

Previous motion proposals have revolved around specific features and tended to have clear, finite goals that are delivered and completed. They tended to be measured in days, weeks, or months.

These motion proposals are different: they define the long-term plan that the foundation will use, e.g., for hiring and organizational build-out. They have the following traits and patterns:

  1. Their scope is years, not weeks or months as in previous NNS motions
  2. They have a broad direction but are active areas of R&D so they do not have an obvious line of execution.
  3. They involve deep research in cryptography, networking, distributed systems, languages, virtual machines, and operating systems.
  4. They are meant to match the areas where the DFINITY Foundation’s expertise is strongest.
  5. Work on these proposals will not start immediately.
  6. There will be many follow-up discussions and proposals on each topic when work is underway and smaller milestones and tasks get defined.

An example is the R&D for “Scalability”, where a team will investigate and improve the scalability of the IC at various stages. Different bottlenecks will surface and different goals will be met.

4. How this R&D proposal is similar to what we have seen

We want to double down on the behaviors we think have worked well. These include:

  1. Publicly identifying owners of subject areas who engage with the community and discuss their thinking
  2. Providing periodic updates to the community as things evolve, milestones are reached, proposals are needed, etc.
  3. Presenting more and more R&D thinking early and openly.

This has worked well for the last 6 months so we want to repeat this pattern.

5. Next Steps

  1. Developer forum intro posted
  2. 1-pager from the discussion lead posted
  3. NNS Motion proposal submitted

6. What we are asking the community

  • Ask questions
  • Read 1-pager
  • Give feedback
  • Vote on the motion proposal

Frankly, we do not expect to go into many nitty-gritty details because these proposals address projects with long time horizons.

The DFINITY Foundation’s only goal is to improve the adoption of the IC, so we want to sanity-check the projects we see as necessary for growing the IC by having you (the ICP community) tell us what you think of these active R&D threads.

7. What this means for the existing Roadmap or Projects

The current roadmap and the proposals already being executed are still being worked on and have priority.

An intellectually honest way to look at these long-term R&D projects is to see them as the upstream or “primordial soup” from which more fully baked projects emerge. With this lens, these proposals are akin to asking, “what kind of specialties or strengths do we want to make sure the DFINITY Foundation has built up?”

Most (if not all) projects that the DFINITY Foundation has executed or is executing are born from long-running R&D threads. Even when community feedback tells the foundation, “we need X” or “Y does not work”, it is typically the team working in the most relevant R&D area that picks up the short-term feature or project.


Please note:

Some folks have asked if they should vote to “reject” any of the Long Term R&D projects as a way to signal prioritization. The answer is simple: “No, please, ACCEPT” :wink:

These long-term R&D projects are the DFINITY Foundation’s thesis on the R&D threads it should maintain across years (3 years is the number we sometimes use internally). We are asking the community to ACCEPT (pending the 1-pager and more community feedback, of course). Prioritization can come at a separate step.


Hi everybody, here is the initial proposal, comments are very welcome!

Privacy: Multi-party computation

Background: Since computation is replicated on many nodes, users risk that one of the nodes may leak information. Cryptographic protocols for multi-party computation enable several nodes to jointly compute a function on confidential data without revealing the inputs or intermediate data. In theory, any function can be securely evaluated with MPC, but users pay a significant performance overhead. This research will provide MPC functionality on the IC for users who need strong privacy guarantees.

Objective: Deploy multi-party computation techniques to enable computation on private data

Discussion leads: Victor Shoup, Jens Groth

Research questions:

  • How can users submit confidential data? Are there optimizations for bulk submission?
  • Are there distributed backup mechanisms that enable an MPC protocol to hold long-term confidential state at reasonable cost?
  • What is the best way to integrate MPC functionality with the rest of the IC?
  • Which language should developers use to specify their MPC functionality, and how can such a specification be realized with general purpose MPC protocols?
  • Can high-importance tasks, e.g., key management, be solved efficiently with special-purpose MPC?
  • How can developers specify that certain data is confidential, and how will the interaction with public data work (see the illustrative sketch after this list)? How can users verify that an MPC protocol will keep data confidential?
  • Are there additional features, e.g., differential privacy, that can complement MPC functionality?
  • Do the MPC protocols offer post-quantum confidentiality? (see post-quantum initiative)
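To make the developer-experience questions above more tangible, here is a purely hypothetical sketch (the names and annotation below do not exist on the IC today; they only illustrate one possible direction): a developer marks which fields are confidential, an MPC runtime keeps those fields secret-shared across nodes, and only declared outputs become public.

```python
# Hypothetical sketch only: none of these names are an existing IC API.
from dataclasses import dataclass

@dataclass
class SalaryRecord:
    employee_id: str   # public: replicated on every node as today
    salary: int        # confidential: would exist only as secret shares

CONFIDENTIAL_FIELDS = {"salary"}  # hypothetical annotation consumed by an MPC runtime

def average_salary(records: list[SalaryRecord]) -> int:
    # In an MPC execution, the nodes would compute this aggregate jointly;
    # only the final average, never an individual salary, would be revealed.
    return sum(r.salary for r in records) // len(records)
```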

Related work and initiatives: The IC already uses a few MPC protocols, e.g., threshold BLS signatures. As part of Bitcoin integration, the Foundation has developed a threshold ECDSA signing protocol. Functionality used in the threshold ECDSA protocol could be used more broadly to do MPC relating to cryptographic operations in cyclic groups. The Security Proofs and Post-Quantum Security initiatives are also closely aligned.

Expertise and skills: Cryptography to construct MPC protocols and prove they are secure, programming languages (PL) for domain-specific MPC-friendly languages, and perhaps coding theory for storage solutions. The Research team (chain key technology, formal security), Crypto Library team, and Consensus team will be among the contributors.

How the community can contribute: The project is suitable for collaboration with researchers in academia and elsewhere to find solutions to the research questions. Community input will be helpful on, e.g., which types of confidential data are most common and what the confidentiality requirements for processing them are, e.g., GDPR compliance concerns.

What we are asking the community:

  • Review comments, ask questions, give feedback
  • Vote accept or reject on NNS Motion
  • Participate in technical discussions as the motion moves forward

My limited understanding of MPC is that it makes a node’s data private from another node. Doesn’t that somewhat conflict with blockchain, whose entire purpose is to perform the same computation on multiple nodes? How would consensus be reached when each node is processing only one part of the input?

Also, I wonder if the AMD SEV initiative is still ongoing, or has hardware-enforced privacy like SEV been deprioritized in favor of software-enforced privacy like MPC?

EDIT: Just noticed there is also this long-term proposal.


SEV and MPC are complementary research paths and both are relevant.
SEV runs at machine speed but relies on hardware for security. If a node provider/data centre is malicious and has physical access to the node and is sophisticated enough to break the hardware security, there could be a breach of confidentiality.
MPC is slower, but no single node by itself is able to determine any confidential data (neither inputs, nor intermediate data, nor parts of the data, because the data is secret-shared between the nodes). A supermajority of nodes has to collaborate to retrieve any data, so MPC protects confidentiality even if an attacker has full control over one of the nodes.
You’re right that the execution model is different in MPC. Indeed, since no node can know the confidential data, you cannot replicate the computation. Instead, cryptographic techniques are used to compute jointly while still having resilience against a minority of nodes potentially being malicious. Those techniques incur a significant overhead, hence the slower processing speed when using MPC.
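To illustrate the supermajority point, here is a small Shamir secret-sharing sketch (a standard textbook construction used purely for illustration, not the IC's concrete scheme): the secret is the constant term of a random polynomial, each node holds one evaluation point, and only `threshold` or more nodes together can recover it.

```python
import secrets

P = 2**61 - 1  # prime field; arbitrary choice for this example

def share(secret: int, threshold: int, n_nodes: int) -> list[tuple[int, int]]:
    """Split `secret` so that any `threshold` of the n shares reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_nodes + 1)]

def reconstruct(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# 7 nodes, any 5 of which (a supermajority) can recover the secret;
# 4 or fewer shares reveal nothing about it (information-theoretically).
shares = share(12345, threshold=5, n_nodes=7)
assert reconstruct(shares[:5]) == 12345
```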


Awesome answer, thanks.


Proposal is live: Internet Computer Network Status


In the process of multi-party secure computation, how do you balance security and efficiency when different data have different security levels?
