Announcing ICPP - A protocol for P2P privacy-enabled transactions on the Internet Computer

Introducing ICPP

A platform for P2P privacy-enabled transactions on the IC

Before anything else, Happy Christmas to everyone in the community! It seems timely, in some strange way, that the ICPP platform is open for business now; a bit of a gift from Santa.

This work has taken much longer than expected, let alone planned… and deploying has been a royal PITA. In addition, it became apparent that the original user interaction model was not quite where I wanted it to be (given the application context), so, after careful thought, I decided to change course and implement a more dapp-like approach. It perhaps introduces some slight friction, depending on how you look at it, and forced some retesting.

But we’re now here. I’m therefore quite happy (at last..!) to introduce ICPP, a new protocol for privacy-enabled P2P transactions on the IC. It’s not a new token, nor a wrapped derivative. ICPP provides both the cryptography and architecture to enable anyone on the IC network to make P2P transfers of vanilla ICP tokens.

You can access a non-technical presentation deck and brief user guide through this link. A denser, maths-heavy technical paper will be available soon, once I finish its revision, and, all going well, I will likely publish it on arXiv for quicker dissemination. A separate version of that same paper will be sent for consideration to a top-tier journal in the field (e.g., Springer's Journal of Cryptographic Engineering, to "complement" my 2022 co-authored paper, as technologies presented in that piece are used in ICPP).

The platform can be easily used through a dashboard deployed to mainnet and accessed through http://icpp.tech. The site is fully hosted on the Internet Computer.

It was designed to provide users with an easy, straightforward mechanism to access ICPP for sending and accepting ICP transfers. Any ICP received via ICPP is available for use directly and almost immediately, like any other ICP. Because it is like any other ICP.

An insane amount of man-hours has gone into ICPP, not just on writing the code (in Motoko, Rust, and TypeScript/JS for the dashboard) but also on testing. Despite all the hard development work on a local NNS replica, ICPP's behavior on mainnet has yet to be fully assessed.

Therefore, I would strongly encourage any prospective user to start with low-value transactions (but DO NOT self-send: you would be providing a Principal ID as the destination from a dashboard linked to that same Principal ID, and that round-trip may lead to unpredictable results). The easiest way is to create a separate Internet Identity and use its Principal as the target:

  • Log in to ICPP on one browser (say, Chrome) with one II
  • Log in to ICPP on a separate browser (say, Edge) with the second II
  • Those, quite obviously, will be two separate identities (instances)
  • Make a low-amount transfer between both Principals and check that the net ICP sent has safely arrived at the destination

If all proceeds normally (each leg usually takes 10-15 minutes to complete, sometimes less, sometimes longer) you can try larger amounts.

Now, fees for low-value transactions are quite high, percentage-wise. Right now:

  • Send 10 ICP → Cost 3.80 ICP (38%)
  • Send 100 ICP → Cost 4.34 ICP (4.34%)
  • Send 1000 ICP → Cost 8.93 ICP (0.893%)
  • Send 2000 ICP → Cost 13.74 ICP (0.687%)
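
If it helps to see how the percentage column is obtained, here is a trivial sketch (TypeScript, illustrative only; the (amount, cost) pairs are simply the examples quoted above, not a fee formula):

```ts
// Reproduces the percentage column from the quoted (amount, cost) pairs.
const quotedFees: Array<[amountIcp: number, costIcp: number]> = [
  [10, 3.80],
  [100, 4.34],
  [1000, 8.93],
  [2000, 13.74],
];

for (const [amount, cost] of quotedFees) {
  const pct = (cost / amount) * 100;
  console.log(`Send ${amount} ICP -> cost ${cost} ICP (${pct.toFixed(3)}%)`);
}
```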

What is the practical consequence of this action? The ICP that lands in your second account will be anonymized, in the sense that there will be no link between the sender (you, in this case) and the receiver (also you). The real-life equivalent is being paid cash in hand.

There are some hardwired limitations, perhaps most noticeably the cap of 2,000 ICP tokens per transaction. The idea is to avoid ICPP being used in nefarious ways, or at least make that inconvenient. It also mirrors the restrictions on cash usage in many jurisdictions, so the choice is not purely arbitrary but an attempt to make transactions on ICPP mirror cash usage as it is currently legislated. Nothing more, nothing less.

I will try to be available here to answer questions and to solve any issues. I have zero visibility into any ICP flows: I can see the protocol being used, but not by whom or between whom. I also have no ability to provide any information about any transaction to third parties. Whatever happens on ICPP is only known to the individuals using the platform to transact. A consequence of strict privacy enforcement.

Last but not least: for those who remember Zcash's recent rise from technical curiosity (even outlier) to where it is today, promoting ICPP would perhaps generate similar attention, higher demand, and an upward price push for ICP. I cannot do it alone, and should the ICPP tech stack prove its worth I hope the community rallies behind it. In one important sense, I believe ICPP brings privacy natively to ICP. That would make ICP itself more desirable, not to mention embody in the ICP ecosystem the degree of user sovereignty and agency that can only be delivered through privacy. Satoshi's core ideals, shall we say.

Finally, as you would expect one to say almost formulaically: the use of ICPP (as with any other crypto or software platform) entails risks. Consequently, by using ICPP you implicitly accept doing so at your own risk, including, but not limited to, the potential loss of funds. Bear this in mind and use it judiciously.

23 Likes

Don't really understand.
I send 10 ICP; after the fee, 6.2 ICP lands in my new wallet. But if people know the input amount, they can still trace the output wallet by the known amount. If the output is spammed with small amounts, then it will be harder to find, but it can still be traced by knowing the transfer timeframe and amount.
The only way it can work is if the ICPP input is held in a pool and the output wallet can draw the tokens that are theoretically his (the sender wallet is the pool address, triggered by the user's PID, not seen); then the output can't be traced, as long as the user doesn't draw the same amount, or splits it over a longer timeframe.

Nope. Alice sends X ICP to Bob, and Bob gets the full X ICP (Alice pays the costs). That's for starters.
No transaction has Alice and Bob connected.
Alice pays indirectly into a pool, Bob draws indirectly from that pool. They do it through intermediaries.
Ingress is done in randomized chunks, egress is done in a single chunk.
Time is also operationally decorrelated. The ICP does not flow from Alice to Bob in an end-to-end, one-shot process. Bob has to claim the amount. That's because he needs to prove to an intermediary his right to claim a deposit of X made into the pool.
That intermediary is the trusted agent that says “here is Bob who proved to me there is X ICP for him deposited in the pool” (without knowing who made the deposit, or when).
The intermediaries are ephemeral and get destroyed in sequence. Once Alice seals her end of the transaction, the intermediary pushing it through to the pool is destroyed. Once Bob gets the ICP at his end, the intermediaries servicing him are also destroyed.
So an observer sees Alice paid into a privacy protocol and Bob received from a mixer. The intermediate state that connected them is destroyed. Hence the cryptographic linkage that proved Bob’s right to claim X ICP is gone with the canisters.
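
To make the decoupling concrete, here is a toy sketch (TypeScript) of the dead-drop idea described above. It is not ICPP's actual canister code or cryptography; it uses a plain hash commitment where ICPP uses its own constructions, and names like "pool" and "claim" are mine:

```ts
import { createHash, randomBytes } from "crypto";

// Toy dead-drop: Alice deposits against a commitment, Bob later claims by
// presenting the matching secret. No record ever stores (sender, receiver) together.
const sha256 = (data: Buffer) => createHash("sha256").update(data).digest("hex");

// The pool only knows "this commitment is worth N e8s", never who funded it.
const pool = new Map<string, bigint>();

// --- Alice's leg (via a deposit intermediary) ---
const claimSecret = randomBytes(32);           // conveyed to Bob out of band
pool.set(sha256(claimSecret), 10_0000_0000n);  // 10 ICP, in e8s

// --- Bob's leg (via a separate claim intermediary) ---
function claim(secret: Buffer): bigint {
  const amount = pool.get(sha256(secret));
  if (amount === undefined) throw new Error("nothing to claim");
  pool.delete(sha256(secret));                 // one-time use; "teardown"
  return amount;                               // paid out as a single amount
}

console.log(`Bob claims ${claim(claimSecret)} e8s`);
```

The only point this is meant to show is the decoupling: the deposit leg and the claim leg touch the pool separately, and once the claim is consumed there is nothing left that ties the two together.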

2 Likes

Very interesting. You mentioned cryptography; are you using ZK and secrets with a pool structure?

Edit: just saw the post above. Surely there is some form of ZK and nullifiers? Very interesting. How is compute with that?

1 Like

Nope, the methodology does not involve ZK proofs. It uses Functional Encryption and RDMPF (Rank-Deficient Matrix Product Functions).
ICPP effectively implements a dual-intermediary dead-drop delivery with witnessed teardown.
No ephemeral canister can link Alice to Bob: one only observes the deposit, the other observes retrieval, and the pool never sees Alice or Bob directly.
In fact, Bob never learns who sent him the ICP. He can only know it was Alice through a side channel (Alice giving him a ring to say "hey, I sent you X ICP", or his knowing that Alice had to pay him X ICP).
And yes, all is effectively scrubbed.
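
To illustrate just one property claimed above (anyone can hold the capsule, only Bob can open it, and it names no sender), here is a stand-in sketch in TypeScript. It uses ordinary RSA encryption from Node's crypto module purely as a placeholder for the Functional Encryption / RDMPF machinery, which I am not attempting to reproduce:

```ts
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "crypto";

// Placeholder for ICPP's actual construction: plain public-key encryption,
// used only to show the *shape* of the capsule. Note: no sender field anywhere.
const bob = generateKeyPairSync("rsa", { modulusLength: 2048 });

type Capsule = { ciphertext: Buffer };  // what sits on the "noticeboard"

const payload = JSON.stringify({ amountE8s: "1000000000" }); // 10 ICP; no sender
const capsule: Capsule = {
  ciphertext: publicEncrypt(bob.publicKey, Buffer.from(payload)),
};

// Anyone can fetch the capsule; only Bob's private key opens it.
console.log(privateDecrypt(bob.privateKey, capsule.ciphertext).toString());
```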

3 Likes

This is actually very innovative, really good job. Will definitely try it out. I'm thinking... can it be applied cross-chain? Like, someone wants to send XRP: they convert it into ICP, it gets mixed, the ICP goes back into XRP, and someone on another chain claims it?

2 Likes

Start small; so far so good. I did test with real ICP and it worked (as it worked during the off-grid testing phase, but then it was with fake ICP), but the protocol needs to face real-life operational reality, so to speak.
On the cryptography, I will publish a formal paper; it's written but needs revision, and over the past month my work has been all implementation.

1 Like

As a fellow economist, glad you are not doing econometrics in Stata or R. Looking forward to seeing it published; will try small amounts and give you feedback. If you like early readers, I know the team here would love to read it.

1 Like

Both a mathematician and an economist (and, I would say, macroeconometrics), so it speaks to my other (dark?) side :grin:
I also designed a project that's ongoing, and for which I completed an MVP under a grant from the DFINITY Foundation, involving "stabilized" crypto. That's for a separate thread, and a major project indeed.
Actually, ICPP is the testbed for technologies I will be incorporating into Nebula, but the two initiatives are separate.

2 Likes

Love it :joy: Is the Nebula project live, or is the GitHub available? Sorry for the ignorance; will definitely check it out, but yeah, it does integrate very well with a stable.

Not live yet; there is a GitHub repo but it's still private (only the Foundation can access it). We are at the point where we have proved it can work.

Have a look at this link and this one for some information.

1 Like

Thinking about what you say, and allow me some rope here…
Say you have XRP to send to someone.

  • You convert it to ICP and send that ICP to your ICPP account (which is, effectively, an ICRC-1 account)
  • You send those ICP through the ICPP tunnel of sorts to another ICPP account.
  • When it reaches the other end you withdraw the ICP to a Principal you can operate from the NNS dashboard, say.
  • Then convert back to XRP

That has two public interactions (at both ends, going out of and back into XRP) and a shielded one, so to speak, via ICPP. A bit convoluted, but if it's die-hard XRP you need, that's the only way.
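
Purely as a summary of the route above (the leg names and flags are mine, not part of ICPP), a hypothetical sketch in TypeScript:

```ts
// Each leg of the hypothetical XRP -> ICP -> ICPP -> ICP -> XRP route,
// tagged by whether it is publicly linkable on some ledger.
type Leg = { step: string; publiclyVisible: boolean };

const route: Leg[] = [
  { step: "Swap XRP -> ICP and fund the sending ICPP (ICRC-1) account", publiclyVisible: true },
  { step: "Send through the ICPP tunnel to the receiving ICPP account", publiclyVisible: false },
  { step: "Withdraw the ICP to a Principal operable from the NNS dapp", publiclyVisible: true },
  { step: "Swap ICP -> XRP", publiclyVisible: true },
];

// Only the middle leg is shielded; that is what breaks the end-to-end link.
for (const { step, publiclyVisible } of route) {
  console.log(`${publiclyVisible ? "public  " : "shielded"} ${step}`);
}
```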

1 Like

This is exactly the flow I had in mind. I'm asking because we are launching cross-chain swaps with a portfolio company on ICP, and it would be cool if we could, say, allow users on other chains to do this process with every chain's tokens. We are doing that now, but with a privacy token in collaboration with another network (in testing). I think this would be cooler: we wouldn't have to manage pools on 3 chains; 2 chains means an easier flow and less loss of value.

Edit: it could be anything (stables, DAI, Solana, whatever), not necessarily XRP. We were obviously thinking of randomizing and batching transactions, a bit less than what you have, but we depended on the privacy chain.

2 Likes

One of the aims of ICPP was to avoid using a new token (privacy coin or whatever), which would require its own "market" to operate and breathe. But a third-party protocol could own a number of ICPP accounts and perform operations automatically: that would need to be built, but it should not be an issue. So basically your endpoints are whatever crypto you want to offer to operate with, all done through a tunnel (ICPP here) that does not involve any new token construct (so you avoid having to create a market for it: the liquidity is already there).

Assuming I’m on the right track here.

Yes, 100%, the solution would be ideal if it were on ICP; needless to say, the timing of the project announcement is perfect. We can even create new canisters with chain-specific addresses, controlled by people on the other chain, to receive. I will DM you; we can have a longer chat and see what can be done.

1 Like

Anytime, happy to discuss. Yep, you can have N ICPP accounts (they are tied to an equal number of Internet Identities, so for every Internet Identity there is an ICPP account linked to it). Then you have two public endpoints and N private ones.

1 Like

Hey, some great tech here - thanks for sharing this.
I have a few questions.

A privacy protocol that naively hides transfers from users and developers, but still reveals them to nodes, could be built simply by black-holing a verifiable canister with a mixer + delay - no special math needed there.

This sounds like the part where the fancy math comes into play. Specifically: boundary nodes and replica nodes shouldn't be able to deterministically figure out the Alice↔Bob link, even if they can record every canister transaction and read canister memory, and even if boundary nodes can see sender IPs/payloads (or public observers see hashed IP/canister/method metadata).

Boundary nodes run in a TEE if I recall correctly, but the privacy guarantees depend on how it’s used. If everything is inside the enclave and TLS terminates inside the enclave as well, then the host can’t see decrypted messages, which is better - but that also means some security depends on the TEE. If a node can see the messages (e.g., one creating a commitment and another withdrawing with a proof to take tokens from the pool), and both are tied to the same IP, then Alice↔Bob could still be linked through that.

You mentioned intermediary canisters. As far as I understand it, the FE + RDMPF is meant to allow “proofs against a set” rather than linking to a specific identity (unlike just using Hash(secret)). But if the set is small, it becomes easier to make connections. You also mention these canisters are deleted; however, if we assume nodes can record everything, deleting canisters doesn’t seem to improve anything. In fact, it sounds like it could lower security by making the effective commitment set smaller.

I’m not quite getting how it works from the description yet, but it’s quite interesting. I’d be happy to read the paper when it comes out.

2 Likes

This is very cool.

In layman's terms:

Every transaction ends up spawning a bunch of canisters, which are then destroyed after passing the ICP around, randomly mixing it with everyone else's, and sending it out the other end?

Is that the gist of it?

Thanks for your very sharp comments. I'll answer more broadly later but, for starters, there are two facets to think about here: the cryptographic and the operational. The difference is not subtle and usually gets blurred.

Let’s think about it this way:

  1. In Monero, nodes can log all ring signature transactions, IP addresses, and timing patterns. If you assume validators record everything and collude, privacy breaks. That is: in a scenario where an attacker controls a vast majority of nodes and they all collude, privacy is significantly degraded, though not entirely broken. Monero has implemented strategies such as Dandelion++, Tor integration, and the use of decoys, so although a Sybil attack (collusion) coupled with traffic analysis is still possible, it's more of a theoretical scenario.

  2. In Zcash, as in Monero, if an adversary controls enough nodes they can use statistical models to guess the origin IP based on which node saw the transaction first. Validator logging breaks the network-level privacy, and even if the cryptography is unbreakable (arguably, Zcash's math is "stronger" than Monero's), the behavioral patterns (timing, IP, and pool-hopping) allow a colluding adversary to degrade privacy significantly. Orchard (Halo2) has tightened some screws in relation to arity hiding through the use of "unified actions", though.

  3. In Tornado Cash, while it offers high-grade cryptographic privacy on the ledger, it is highly susceptible to network-level deanonymization: relayers see deposit/withdrawal requests with IP addresses, and Ethereum nodes see all transactions. If the infrastructure colludes, linkage is possible.

  4. Consider Tor now. Exit nodes see destinations, directory authorities see topology. If enough nodes are malicious and colluding, privacy goes out the window. In fact, I would argue the most dangerous form of collusion in Tor is not just between nodes in general but between the Entry (Guard) node and the Exit node. That remains a real attack vector, and it is compounded by Tor's "low latency" design to make browsing fast, which leaves enough packet fingerprinting to work with.

  5. Signal is also vulnerable to network infrastructure collusion, meaning in this specific case (due to Signal's different trust model, by virtue of being a centralized service) an attacker monitoring the ISP/Internet backbone and the Signal server simultaneously. That said, ICPP uses an approach similar to Signal's Sealed Sender mechanics.

All of this goes to the essence (many times overlooked, so it's nice that you surface it) of this discussion: every system assumes some level of honest behavior from the infrastructure. The question isn't whether the assumption exists, but whether it's reasonable given the incentive structure. Therefore, when evaluating a new privacy tool, asking whether the math is good, however pertinent, is perhaps less relevant than asking:

  • Who runs the hardware?
  • What happens to them if they cheat?
  • Is the gain from cheating more valuable than what they lose?

And then ask: is ICP's trust model reasonable? You cannot avoid the "honest majority" assumption (it still exists: you assume less than a third of the nodes are malicious), but unlike Signal (where you trust one entity) or a Tor exit node (where you trust one volunteer), ICP forces the infrastructure to prove its honesty to its peers every few seconds through consensus before any data is moved. That said, the boundary node on ICP is the infrastructure component that most closely resembles the "metadata-logging" risk discussed earlier (and an issue DFINITY is actively addressing).

For ICPP, all of this means that to break privacy you would need:

  • Multiple subnet nodes from different operators to collude
  • Across different canisters
  • That are placed on different subnets
  • All simultaneously recording and sharing state
  • And correlating across time windows

noting, in passing, that this time window is bounded by design, due to canister ephemerality.

In terms of the above conversation, then, the implicit admission in ICPP is that infrastructure is a liability (not ICP specifically, any infrastructure) and that the only way to protect a user is to destroy the infrastructure as soon as the job is done. It is not just about encrypting the data, but about deleting the very "scene of the crime" where the metadata was created, so to speak.

What does ICPP implement?

a. It attempts to break amount correlation by chunking: on ICP's ledger, the inbound transactions will never match the outbound ones (inbound amounts are randomly chunked; the outbound amount is not). See the sketch after this list.

b. It uses a "pull" model: because the protocol uses ephemeral intermediaries, the deposit and the claim are decoupled. Alice "dead-drops" an encrypted capsule into two storage canisters. Bob doesn't even know it is there until he scans the noticeboard. He asks for the capsule and extracts from it a certificate proving he is the legitimate beneficiary of X ICP from the pool (everyone could hold the capsule, but only Bob can decrypt it). The intermediary checks that this is correct and issues an order for X to be paid to Bob. Then it requests its "parent" (the Factory canister) to execute it and its siblings (it cannot do so itself because it is blackholed, as all the intermediaries are). The Factory kills them all.

c. By using a "blind authenticator" between the storage canisters and the Router (which is a dumb vault), the idea was to effectively create a one-time-use cryptographic air gap. The Router never sees the RDMPF secret and never sees the storage canister: it only sees a "go" signal from a temporary agent with a verified code hash and a certified origin from the Factory. Translated: the link between the "deposit" (Alice's side) and the "claim" (Bob's side) is a "ghost" that only exists in the transient RAM of this intermediary for a very short period of time (milliseconds).

d. The "teardown" (ephemerality) provides, let's say, the final cryptographic proof of the transaction. The Witness only issues the finalize record once it has collected the evidence that all other intermediaries have been destroyed, and it is itself killed immediately afterwards. The objective? To prevent retroactive de-anonymization. The data is physically deleted from the subnet state, meaning there is no "data at rest" to be decrypted in the future.

e. Consequence? Quantum attacks (still very much a theoretical possibility rather than a real threat, but it sells…) have nothing to work on.
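
Regarding (a) above, here is a toy sketch (TypeScript) of what randomized ingress chunking can look like. The chunk sizing here is arbitrary and purely illustrative, not ICPP's actual scheme:

```ts
import { randomInt } from "crypto";

// Split a deposit into randomly sized chunks so that no single inbound ledger
// entry matches the single outbound payment. Chunk sizing is illustrative only.
function chunkDeposit(totalE8s: bigint, chunks = 4): bigint[] {
  const out: bigint[] = [];
  let remaining = totalE8s;
  for (let i = 0; i < chunks - 1; i++) {
    // Take a random 10-40% slice of whatever is left.
    const slice = (remaining * BigInt(randomInt(10, 41))) / 100n;
    out.push(slice);
    remaining -= slice;
  }
  out.push(remaining);
  return out; // sums to totalE8s; egress is still a single payment of totalE8s
}

console.log(chunkDeposit(10_0000_0000n)); // four random chunks summing to 1_000_000_000 e8s
```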

Hope the above is clear enough; I attempted to be as didactic as possible for the benefit of the community at large. But great questions, and a big thank you. The paper does not address the implementation angle at length, but rather the cryptographic one.

5 Likes

So here’s my question. which is not nearly as thoughtful as infus but I’m not that smart.

Suppose I know Bob, but not Alice. Wouldn't the complexity of figuring out Alice depend on the number of people using this? I.e., if there are few users, it wouldn't be difficult? Or am I wrong about that?

If I know both Bob and Alice, does this still work?