Private bridge protocol design rationale, constraints, and brainstorming

In a world with independent privacy sets (shielded pools) on independent domains (applications or chains), privacy is competitive: users must choose privacy set A or B (along with the security assumptions of domain A or B), and one more user in the privacy set of A means one less user in the privacy set of B, since users must choose either/or. Such a world would be a bit of a tragedy, since privacy is a good which gets better the more people share it - users of A and B would both get more privacy if A and B could somehow share a privacy set. In a world with a shared privacy set, privacy is collaborative: one more user in the privacy set of A means one more user (or almost) in the privacy set of B - the bigger the shared privacy set, the more privacy everyone has. Yet, here we also want to preserve autonomy: users, presumably, chose A or B for reasons of their own - maybe A and B are controlled by different (only partially overlapping) communities, have different assets, different rules, etc. Naively, sharing a privacy set requires sharing a domain - which is precisely what we want to avoid requiring - so the design question then becomes: can we design the protocols such that separate domains can share privacy but preserve autonomy and preserve integrity? That is the (interesting) question of private bridge design.

Let’s start from a concrete construction: IBC. IBC is an autonomy-preserving and integrity-preserving blockchain interoperability protocol, but it does not (yet) preserve privacy. As a refresher, here’s how IBC works (slightly simplified, but architecturally accurate):

Consider a simple token transfer of N tokens of type T from chain A to chain B:

  1. The user initiates the transfer by sending a transaction to chain A indicating that they wish to transfer N tokens of type T to chain B.
  2. When this transaction is executed on chain A, N tokens of type T are escrowed, and a packet commitment is written to the state of A that includes N, T, and the recipient address on chain B.
  3. Chain B, which is running a light client for chain A, reads this packet from chain A’s state and uses the light client to check that it was committed by chain A’s consensus. Chain B then mints N T-vouchers and sends them to the recipient address, which can then use them freely on chain B (or send them elsewhere).
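
For concreteness, here is a minimal Rust sketch of this escrow-and-mint flow (all names and types are hypothetical simplifications; real IBC packets carry channel/port/sequence metadata, and light client verification is far more involved than a boolean flag):

```rust
use std::collections::HashMap;

/// A cross-chain transfer packet, as committed to chain A's state.
#[derive(Clone, Debug)]
struct TransferPacket {
    token: String,      // T
    amount: u64,        // N
    recipient: String,  // recipient address on the destination chain
}

#[derive(Default)]
struct ChainA {
    balances: HashMap<String, u64>, // address -> balance of T
    escrow: u64,                    // total escrowed amount of T
    packet_commitments: Vec<TransferPacket>,
}

impl ChainA {
    /// Steps 1-2: escrow N tokens of type T and write a packet commitment.
    fn initiate_transfer(&mut self, sender: &str, packet: TransferPacket) -> Result<(), String> {
        let bal = self.balances.get_mut(sender).ok_or("unknown sender")?;
        if *bal < packet.amount {
            return Err("insufficient balance".into());
        }
        *bal -= packet.amount;
        self.escrow += packet.amount;
        self.packet_commitments.push(packet);
        Ok(())
    }
}

#[derive(Default)]
struct ChainB {
    voucher_balances: HashMap<String, u64>, // address -> T-voucher balance
}

impl ChainB {
    /// Step 3: once the light client confirms the packet was committed by
    /// chain A's consensus, mint N T-vouchers to the recipient.
    fn receive_transfer(&mut self, packet: &TransferPacket, light_client_ok: bool) -> Result<(), String> {
        if !light_client_ok {
            return Err("packet not committed by chain A's consensus".into());
        }
        *self.voucher_balances.entry(packet.recipient.clone()).or_insert(0) += packet.amount;
        Ok(())
    }
}

fn main() {
    let mut a = ChainA::default();
    let mut b = ChainB::default();
    a.balances.insert("alice".into(), 100);
    let pkt = TransferPacket { token: "T".into(), amount: 40, recipient: "bob".into() };
    a.initiate_transfer("alice", pkt.clone()).unwrap();
    b.receive_transfer(&pkt, true).unwrap(); // light client check assumed to pass
    assert_eq!(b.voucher_balances["bob"], 40);
}
```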

Then, at some future point, when some user wants to send these T-vouchers back to chain A:

  1. The user initiates the transfer by sending a transaction to chain B indicating that they wish to transfer N T-vouchers back to chain A (which will then become T).
  2. When this transaction is executed on chain B, N T-voucher tokens are burned, and a packet commitment is written to the state of B that includes N, T, and the recipient address on chain A.
  3. Chain A, which is running a light client for chain B, reads this packet from chain B’s state and uses the light client to check that it was committed by chain B’s consensus. **Chain A first checks that chain B hasn’t already claimed the N tokens of type T.** Chain A then unescrows N tokens of type T and sends them to the recipient address.
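
Continuing the sketch above, the return path might look like this, with the supply check made explicit (again, a hypothetical simplification rather than the actual IBC implementation, which tracks escrow per channel):

```rust
impl ChainB {
    /// Steps 1-2 (return): burn N T-vouchers and commit a return packet.
    fn initiate_return(&mut self, sender: &str, amount: u64, recipient_on_a: &str) -> Result<TransferPacket, String> {
        let bal = self.voucher_balances.get_mut(sender).ok_or("unknown sender")?;
        if *bal < amount {
            return Err("insufficient voucher balance".into());
        }
        *bal -= amount; // the vouchers are burned
        Ok(TransferPacket { token: "T".into(), amount, recipient: recipient_on_a.into() })
    }
}

impl ChainA {
    /// Step 3 (return): verify the packet via the light client for chain B,
    /// run the double-spend check in bold, then unescrow.
    fn receive_return(&mut self, packet: &TransferPacket, light_client_ok: bool) -> Result<(), String> {
        if !light_client_ok {
            return Err("packet not committed by chain B's consensus".into());
        }
        // The check in bold: chain B can never claim back more than it has
        // escrowed, so even a Byzantine chain B cannot inflate the supply of
        // T held by users on chain A who never bridged.
        if packet.amount > self.escrow {
            return Err("chain B is over-claiming".into());
        }
        self.escrow -= packet.amount;
        *self.balances.entry(packet.recipient.clone()).or_insert(0) += packet.amount;
        Ok(())
    }
}
```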

Take note of the check in bold. IBC preserves both autonomy and integrity because it checks, upon the return of tokens, that the sender isn’t trying to return the same tokens twice (this is basically double-spend prevention, but for other blockchains). This way, even if chain B is Byzantine and tries to double-spend, users on chain A who hold token T and didn’t send it to B will be completely unaffected (no supply inflation).

Suppose that we try to make this protocol private. The first part of this is pretty simple. Instead of using transparent account systems, chains A and B now use shielded state transition systems (a la Zcash/Namada/ZEXE/Taiga etc.). When escrowing N T tokens or burning N T-vouchers, this is done in a ZKP, with a proof sent to the other side of the bridge, which can verify it. This works just fine and provides full privacy.

The challenge comes when we try to implement the double-spend check, because this is a double-spend check at the level of the blockchains themselves, not the individual users. In these shielded state transition systems, privacy is achieved by the state (notes) of each user being known only to them (and they make the proofs). Chain A, in this example, wants to check that all the users of B in total have not already sent back the N tokens - so, if the bridge transfers are themselves private, this requires computation over the private state of multiple users.

One construction which is at least architecturally workable here is to use threshold FHE (with an interface as described e.g. in this paper). With threshold FHE and threshold decryption (so a system like Ferveo would play a part), the amount of the transfer could be encrypted to the threshold key, and each block the validators of A would (in FHE) add together all the incoming transfers of each token T from B, subtract the sum from the current outstanding balance of T held by B, and test whether the new outstanding balance is still non-negative, then threshold-decrypt the result - and only finalise the incoming transfers if the result is 1. This provides all of privacy, autonomy, and integrity, but it does require some heavy-duty cryptography, and more efficient specialised solutions are likely possible - any ideas?
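
To make the shape of this concrete, here is a sketch against a purely hypothetical threshold-FHE interface - `Ciphertext`, `fhe_add`, `fhe_sub`, `fhe_is_nonnegative`, and `threshold_decrypt_bit` are all plaintext stand-ins, not a real scheme:

```rust
// Hypothetical threshold-FHE interface: every type and function here is a
// plaintext stand-in for its homomorphic counterpart. A real scheme, plus a
// Ferveo-style setup for the threshold key, is assumed rather than implemented.

#[derive(Clone, Copy)]
struct Ciphertext(i128); // stand-in: the value is carried in the clear

fn fhe_add(a: Ciphertext, b: Ciphertext) -> Ciphertext { Ciphertext(a.0 + b.0) }
fn fhe_sub(a: Ciphertext, b: Ciphertext) -> Ciphertext { Ciphertext(a.0 - b.0) }

/// Homomorphically test `x >= 0`, yielding an encrypted bit.
fn fhe_is_nonnegative(x: Ciphertext) -> Ciphertext { Ciphertext((x.0 >= 0) as i128) }

/// Threshold decryption of a single bit: in the real protocol this requires a
/// quorum of chain A's validators, so nothing beyond this one bit is revealed.
fn threshold_decrypt_bit(c: Ciphertext) -> bool { c.0 == 1 }

/// Per-block check by chain A's validators: homomorphically sum the encrypted
/// incoming transfer amounts of token T from B, subtract from B's encrypted
/// outstanding balance of T, and finalise only if the result is non-negative.
fn finalise_incoming_block(
    outstanding_balance: Ciphertext,
    incoming_transfers: &[Ciphertext],
) -> Option<Ciphertext> {
    let total = incoming_transfers.iter().copied().fold(Ciphertext(0), fhe_add);
    let new_balance = fhe_sub(outstanding_balance, total);
    if threshold_decrypt_bit(fhe_is_nonnegative(new_balance)) {
        Some(new_balance) // the new encrypted outstanding balance
    } else {
        None // reject this block's incoming transfers from B
    }
}

fn main() {
    let outstanding = Ciphertext(100); // chain B holds 100 T (encrypted in reality)
    let incoming = [Ciphertext(60), Ciphertext(50)]; // B's users try to return 110
    assert!(finalise_incoming_block(outstanding, &incoming).is_none()); // over-claim rejected
}
```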

(additional research notes here and here)


I’ll emphasize again that the central challenge is primarily one of consensus, which is difficult to solve in a privacy-preserving way without more advanced cryptographic tools. Fundamentally, chain A is allowing chain B to mutate some state on chain A, and chain A needs to ensure that any invariants (e.g. token supply) remain preserved. IBC accomplishes this with “escrow” that limits the damage that can happen if chain B fails in some way.

For example, if we assume chain A and chain B are fully synchronized (e.g. validators run full nodes of the other), then we can avoid the escrow question entirely by simply checking directly that the invariants are preserved on chain B, and step 3 of the “IBC” flow could be done in a privacy-preserving way. If the shielded state transition system is the same on chain A and chain B, then after steps 1 and 2, chain A (or B) receives confirmation that chain B (or A) has successfully completed its IBC steps, and chain A simply verifies the state transitions of chain B. This only works because the entire token supply of chain A is “at risk” if chain A doesn’t fully verify B. Of course, this defeats the whole purpose of IBC, since no scaling benefit is achieved and chain B is fairly restricted.
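
As a minimal sketch of this full-verification variant (`BlockOfB`, `proof_valid`, and the supply field are hypothetical stand-ins for verifying the actual shielded state transitions):

```rust
/// Stand-in for one block of chain B, as seen by a fully synchronized chain A.
struct BlockOfB {
    voucher_supply_after: u64, // total T-voucher supply after this block
    proof_valid: bool,         // stand-in for verifying the shielded state transition
}

/// Chain A checks its own invariant directly against chain B's history:
/// the T-voucher supply on B must never exceed what A has escrowed.
fn verify_chain_b(escrowed_on_a: u64, blocks: &[BlockOfB]) -> bool {
    blocks
        .iter()
        .all(|b| b.proof_valid && b.voucher_supply_after <= escrowed_on_a)
}

fn main() {
    let blocks = [
        BlockOfB { voucher_supply_after: 40, proof_valid: true },
        BlockOfB { voucher_supply_after: 55, proof_valid: true }, // exceeds escrow of 50
    ];
    assert!(!verify_chain_b(50, &blocks)); // invariant violation detected
}
```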

We can hypothesize some different ways of accomplishing similar things. For example, chain A/B needs to verify the final state of B/A, which theoretically allows some kind of rollup or recursive proof of chain B’s state. Of course, careful analysis is required, because chain A also needs to meta-verify that chain B’s state transitions always preserve A’s invariants, which is probably difficult to do in general.

So let’s return to the escrow idea. We substituted light clients for full synchronization, and instead protected chain A with escrow, which is a shared mutable state (thus creating the cryptographic difficulty). A natural attack on the problem would be to use MPC, and indeed MPC is arguably much more mature and efficient than FHE-based techniques. But the MPC model is rather unfavorable here, because on the real internet participants can be slow, faulty, honest-but-curious, adversarial, etc., and a potentially high number of MPC rounds presents its own challenges. In a world where single-party computation is relatively cheap compared to interactivity, these tradeoffs matter a lot.

To finish, I’d like to consider some additional crazy ideas for escrowing. For example, when bridging a 64-bit amount of token T from chain A to B, the bits encoding the value are committed to in 64 different commitments on chain A, effectively escrowing each bit independently. Then each tx on chain B must (verifiably) forward the trapdoor randomness to the next user, and unescrowing on chain A requires the trapdoor. At minimum this is safe for chain A, and, assuming consistency of chain B, it prevents double-unbridging (each user on chain B can unbridge to A only the amount they control on B).
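
Here is a rough sketch of the per-bit escrow shape, using a plain hash as a stand-in for a proper hiding/binding commitment (a real construction would use something like Pedersen commitments, and the verifiable forwarding on chain B is not modeled):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in commitment: hash(bit, trapdoor). A real construction needs a
/// properly hiding and binding commitment (e.g. Pedersen); this only shows shape.
fn commit(bit: u8, trapdoor: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (bit, trapdoor).hash(&mut h);
    h.finish()
}

/// Escrow a 64-bit amount on chain A as 64 independent bit commitments, one
/// fresh trapdoor per bit. The trapdoors then travel (verifiably) with the
/// tokens on chain B.
fn escrow_bits(amount: u64, trapdoors: &[u64; 64]) -> [u64; 64] {
    let mut commitments = [0u64; 64];
    for i in 0..64 {
        let bit = ((amount >> i) & 1) as u8;
        commitments[i] = commit(bit, trapdoors[i]);
    }
    commitments
}

/// Chain A's unescrow check for bit i: the claimed opening must match the
/// commitment, so unescrowing requires knowledge of the trapdoor.
fn unescrow_bit(commitments: &[u64; 64], i: usize, bit: u8, trapdoor: u64) -> bool {
    commit(bit, trapdoor) == commitments[i]
}

fn main() {
    let trapdoors = [7u64; 64]; // toy values; real trapdoors are fresh randomness
    let commitments = escrow_bits(42, &trapdoors); // 42 = 0b101010
    assert!(unescrow_bit(&commitments, 1, 1, trapdoors[1])); // bit 1 of 42 is 1
    assert!(!unescrow_bit(&commitments, 0, 1, trapdoors[0])); // bit 0 of 42 is 0
}
```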

So what’s the problem with this? Well, it suffers from a “faerie gold” attack: a user on chain B who has held some tokens for a while (and thus seen their trapdoors) can later unbridge a different balance of the same token using those trapdoors, exploiting the token’s fungibility, thereby denying future holders of the original tokens the ability to unbridge back to A. So chain B has to cryptographically link each unbridging to the specific trapdoor used (which seems reasonably possible).

This also has some privacy leakage, because anyone who ever sees the trapdoor can now tell when that trapdoor is used to unbridge some of this token in the future. This could possibly be mitigated by occasionally “refreshing” the trapdoors.