In a world with independent privacy sets (shielded pools) on independent domains (applications or chains), privacy is competitive: users must choose privacy set A or B (along with the security assumptions of domain A or B), and one more user in the privacy set of A means one less user in the privacy set of B, since users must choose either/or. Such a world would be a bit of a tragedy, since privacy is a good which gets better the more people share it - users of A and B would both get more privacy if A and B could somehow share a privacy set. In a world with a shared privacy set, privacy is collaborative: one more user in the privacy set of A means one more user (or almost) in the privacy set of B - the bigger the shared privacy set, the more privacy everyone has. Yet, here we also want to preserve autonomy: users, presumably, chose A or B for reasons of their own - maybe A and B are controlled by different (only partially overlapping) communities, have different assets, different rules, etc. Naively, sharing a privacy set requires sharing a domain - which is precisely what we want to avoid requiring - so the design question then becomes: can we design the protocols such that separate domains can share privacy but preserve autonomy and preserve integrity? That is the (interesting) question of private bridge design.
Let’s start from a concrete construction: IBC. IBC is an autonomy-preserving and integrity-preserving blockchain interoperability protocol, but it does not (yet) preserve privacy. As a refresher, here’s how IBC works (slightly simplified, but architecturally accurate):
Consider a simple token transfer of N tokens of type T from chain A to chain B:
- The user initiates the transfer by sending a transaction to chain A indicating that they wish to transfer N tokens of type T to chain B.
- When this transaction is executed on chain A, N tokens of type T are escrowed, and a packet commitment is written to the state of A that includes N, T, and the recipient address on chain B.
- Chain B, which is running a light client for chain A, reads this packet from chain A’s state and uses the light client to check that it was committed by chain A’s consensus. Chain B then mints N T-vouchers and sends them to the recipient address, which can then use them freely on chain B (or send them elsewhere).
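The transparent A → B flow above can be sketched in plain Python. Everything here is illustrative (the `Chain` class, `send_transfer`/`receive_transfer`, and the boolean standing in for light-client verification are hypothetical stand-ins, not IBC's actual data structures):

```python
class Chain:
    """Toy model of one side of the bridge: balances, a bridge escrow,
    and packet commitments written to state."""
    def __init__(self, name):
        self.name = name
        self.balances = {}            # (address, token) -> amount
        self.escrow = {}              # token -> amount escrowed for the bridge
        self.packet_commitments = []  # packets committed to this chain's state

    def send_transfer(self, sender, token, amount, recipient):
        # Escrow N tokens of type T on this chain...
        assert self.balances.get((sender, token), 0) >= amount
        self.balances[(sender, token)] -= amount
        self.escrow[token] = self.escrow.get(token, 0) + amount
        # ...and write a packet commitment including N, T, and the recipient.
        packet = {"amount": amount, "token": token, "recipient": recipient}
        self.packet_commitments.append(packet)
        return packet

    def receive_transfer(self, packet, light_client_verified):
        # The receiving chain checks the packet against its light client of
        # the sender's consensus, then mints vouchers to the recipient.
        assert light_client_verified
        voucher = packet["token"] + "-voucher"
        key = (packet["recipient"], voucher)
        self.balances[key] = self.balances.get(key, 0) + packet["amount"]

chain_a, chain_b = Chain("A"), Chain("B")
chain_a.balances[("alice", "T")] = 100
pkt = chain_a.send_transfer("alice", "T", 30, recipient="bob")
chain_b.receive_transfer(pkt, light_client_verified=True)
```

After this runs, chain A holds 30 T in escrow and bob holds 30 T-vouchers on chain B.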
Then, at some future point, when some user wants to send these T-vouchers back to chain A:
- The user initiates the transfer by sending a transaction to chain B indicating that they wish to transfer N T-vouchers back to chain A (which will then become T).
- When this transaction is executed on chain B, N T-voucher tokens are burned, and a packet commitment is written to the state of B that includes N, T, and the recipient address on chain A.
- Chain A, which is running a light client for chain B, reads this packet from chain B’s state and uses the light client to check that it was committed by chain B’s consensus. **Chain A first checks that chain B hasn’t already claimed the N tokens of type T** (i.e. that at least N of T is still outstanding in escrow for B). Chain A then unescrows N tokens of type T and sends them to the recipient address.
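This return-path check amounts to a simple guard on the escrow balance. A hypothetical helper (not IBC's actual code) makes the logic explicit: chain A tracks how much of each token is outstanding with B and refuses any unescrow that would drive that balance negative:

```python
def unescrow(escrow, token, amount):
    """Release `amount` of `token` from the bridge escrow, but only if the
    counterparty chain (in aggregate) still has that much left to claim.
    This is double-spend prevention at the chain level: a Byzantine chain B
    cannot inflate the supply of T on chain A."""
    outstanding = escrow.get(token, 0)
    if amount > outstanding:
        raise ValueError(f"only {outstanding} {token} outstanding for this chain")
    escrow[token] = outstanding - amount
    return amount

escrow = {"T": 30}
unescrow(escrow, "T", 20)      # fine: 10 of T remains outstanding
try:
    unescrow(escrow, "T", 15)  # would exceed the escrowed balance
except ValueError:
    pass                       # rejected, escrow untouched
```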
Take note of this check in bold. IBC preserves both autonomy and integrity because it checks, upon the return of tokens, that the sender isn’t trying to return the same tokens twice (this is basically double-spend prevention, but for other blockchains). This way, even if chain B is Byzantine and tries to double-spend, holders of token T on chain A who didn’t send it to B will be completely unaffected (no supply inflation).
Suppose that we try to make this protocol private. The first part of this is pretty simple. Instead of using transparent account systems, chains A and B now use shielded state transition systems (a la Zcash/Namada/ZEXE/Taiga etc.). When escrowing N T tokens or burning N T-vouchers, this is done in a ZKP, with a proof sent to the other side of the bridge which it can verify. This works just fine and provides full privacy.
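One building block behind these shielded systems is a hiding commitment: the escrowed or burned amount crosses the bridge as a commitment rather than in the clear, and the ZKP (not shown here, and the hard part in practice) proves the committed values are well-formed. A toy Pedersen commitment, over deliberately tiny and insecure parameters, illustrates the hiding and homomorphic properties being relied on:

```python
# Toy Pedersen commitment: commit(v, r) = g^v * h^r mod p.
# Parameters are tiny and completely insecure -- for illustration only.
# A real shielded system (Zcash/Namada/ZEXE/Taiga-style) pairs commitments
# like this with a zero-knowledge proof that they open to valid amounts.
p = 23          # small safe prime: p = 2q + 1 with q = 11
g, h = 4, 9     # generators of the order-q subgroup of squares mod p

def commit(value, blinding):
    return (pow(g, value, p) * pow(h, blinding, p)) % p

# Without the blinding factor, the commitment reveals nothing about the amount...
c = commit(3, 5)
# ...and commitments add homomorphically: Com(v1) * Com(v2) = Com(v1 + v2),
# which lets a verifier check arithmetic over amounts it never sees.
assert (commit(3, 5) * commit(4, 2)) % p == commit(3 + 4, 5 + 2)
```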
The challenge comes when we try to implement the double-spend check, because this is a double-spend check at the level of the blockchains themselves, not the individual users. In these shielded state transition systems, privacy is achieved by keeping each user’s state (notes) known only to them (each user constructs their own proofs). Chain A, in this example, wants to check that the users of B in total have not already sent back the N tokens - so, if the bridge transfers are themselves private, this requires computation over the private state of multiple users.
One construction which is at least architecturally workable here is to use threshold FHE (with an interface as described e.g. in this paper). With threshold FHE and threshold decryption (so a system like Ferveo would play a part), the amount of each transfer could be encrypted to the threshold key. Each block, the validators of A would (in FHE) add together all the incoming transfers of each token T from B, subtract the sum from the current outstanding balance of T held by B, and test whether the new outstanding balance is still non-negative, then threshold-decrypt the result - and only finalise the incoming transfers if the result is 1. This provides all of privacy, autonomy, and integrity, but it does require some heavy-duty cryptography, and more efficient specialised solutions are likely possible - any ideas?
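To make the per-block data flow concrete, here is a plaintext simulation of the check just described. `MockCiphertext` is a stand-in for a threshold-FHE ciphertext (it performs no encryption at all, only mirrors the homomorphic API), and `threshold_decrypt` is a stand-in for Ferveo-style threshold decryption of a single bit; only the protocol logic is real:

```python
class MockCiphertext:
    """Stand-in for a threshold-FHE ciphertext. A real scheme would hide
    `value` from the validators; this mock only mirrors the homomorphic ops."""
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return MockCiphertext(self.value + other.value)
    def __sub__(self, other):
        return MockCiphertext(self.value - other.value)
    def is_non_negative(self):
        # In FHE, this comparison would yield an *encrypted* bit.
        return MockCiphertext(1 if self.value >= 0 else 0)

def threshold_decrypt(ciphertext):
    # Stand-in for threshold decryption: validators jointly reveal one bit,
    # not the underlying amounts.
    return ciphertext.value

def finalize_block(outstanding, incoming):
    """Per-block check: homomorphically sum the incoming transfers of token T
    from B, subtract from B's outstanding balance, threshold-decrypt only the
    non-negativity bit, and finalise the transfers iff that bit is 1."""
    total = sum(incoming, MockCiphertext(0))
    bit = threshold_decrypt((outstanding - total).is_non_negative())
    return bit == 1

# B has 30 of T outstanding: 10 + 15 is fine, 10 + 25 would over-claim.
finalize_block(MockCiphertext(30), [MockCiphertext(10), MockCiphertext(15)])
finalize_block(MockCiphertext(30), [MockCiphertext(10), MockCiphertext(25)])
```

The design point this surfaces is that the only plaintext ever revealed is a single accept/reject bit per token per block, so individual transfer amounts stay private even from the validators.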