Oblivious Message Retrieval

Below I give an overview of oblivious message retrieval (OMR), why it matters, and how we can adapt it for private blockchains. The intention is to lay out open questions for others to chime in on.

OMR overview
Let’s consider a scenario in which Alice needs to send messages to her friend Bob. The key requirement is that neither Alice nor Bob should need to know when the other will be online to exchange messages; the communication must be asynchronous. This introduces the need for a server through which messages are exchanged: the server stores messages addressed to Bob on his behalf and forwards them to him when he comes online. It also forces each message to carry at least two pieces of metadata: recipient and sender details. Recipient details are required for the server to know whom to forward a received message to, and sender details are usually required for DoS resistance. (It is, however, possible to eliminate sender details using sealed sender.)
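To make the metadata leak concrete, here is a toy sketch of the server model described above (class and method names are illustrative, not from any real system). Because the server indexes messages by recipient, it inevitably learns who talks to whom and when:

```python
# Toy model of a metadata-based message server: recipient identity is
# visible to the server, which is exactly what OMR is designed to hide.
from collections import defaultdict

class NaiveServer:
    def __init__(self):
        self.mailboxes = defaultdict(list)  # recipient -> pending messages

    def submit(self, sender, recipient, ciphertext):
        # The server sees both endpoints in the clear: the metadata leak.
        self.mailboxes[recipient].append((sender, ciphertext))

    def fetch(self, recipient):
        # Hand over (and clear) everything stored for this recipient.
        return self.mailboxes.pop(recipient, [])

server = NaiveServer()
server.submit("alice", "bob", b"encrypted-hello")
print(server.fetch("bob"))  # [('alice', b'encrypted-hello')]
```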

Since the server learns from this metadata which two users communicate, when, and how often, neither privacy nor anonymity is preserved. Oblivious message retrieval removes the need to attach metadata to messages sent via the server, without any loss of functionality on the server’s side. The key change in OMR is that metadata is replaced with clues, which are indistinguishable from randomly generated data. The server performs homomorphic operations on the attached clues to generate, for each user, a message digest consisting of that user’s messages. Crucially, the server never learns which messages from the set of all messages are packed into a given user’s digest.

Private Blockchains
In private blockchains (such as Namada, Zcash, Aztec, Penumbra, etc.) transactions are encrypted by default and must not carry metadata. This forces users to download all transactions and trial-decrypt them to find the ones addressed to them. This isn’t friendly to users on light devices and is a far worse experience than on transparent blockchains. It is therefore important to let users find their transactions as easily as they can on transparent blockchains, without sacrificing privacy.
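A toy illustration of the trial-decryption burden (not a real scheme; the tag construction here is a stand-in): every user must scan every transaction and attempt detection with their own key, so per-user work grows linearly with total chain traffic.

```python
# Each transaction carries a detection tag only its recipient can match.
# A user must still touch *every* transaction to find their own.
import hashlib, os

def make_tx(recipient_key: bytes, payload: bytes):
    nonce = os.urandom(16)
    tag = hashlib.sha256(recipient_key + nonce).digest()
    return (nonce, tag, payload)

def trial_detect(my_key: bytes, tx):
    nonce, tag, payload = tx
    return payload if hashlib.sha256(my_key + nonce).digest() == tag else None

bob_key, carol_key = os.urandom(32), os.urandom(32)
chain = [make_tx(bob_key, b"to-bob"), make_tx(carol_key, b"to-carol")]

# Bob scans the whole chain, keeping only transactions that detect.
mine = [p for tx in chain if (p := trial_detect(bob_key, tx))]
print(mine)  # [b'to-bob']
```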

OMR fits naturally as a solution to this problem. Every transaction broadcast to the network is embedded with a clue. The server tracks all broadcast transaction+clue pairs and processes them to generate message digests for users. A user can then retrieve their digest from the server to find their transactions.

Components of OMR:
For clarity, the following are the necessary parts of OMR:

  1. Server: Tracks all transaction+clue combinations broadcast to the network.
  2. User: Depends on the server to retrieve their transactions.
  3. Detection key: The user uploads a detection key to the server, enabling it to process transaction+clue combinations into message digests.
  4. Public clue key: The sender of a transaction uses the intended recipient’s public clue key to generate a clue before broadcasting to the network.
  5. Clue: Before a transaction is broadcast, it must be embedded with a clue generated using the intended recipient’s public clue key.
  6. Message digest: The user retrieves their message digest from the server to find their transactions.
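The flow between these components can be sketched as follows. This is only an interface mock: the homomorphic detection inside `process` is replaced with a plaintext comparison, which leaks exactly what real OMR hides; in the real scheme the server operates blindly under FHE, and the clue key, detection key, and clues are all distinct, unlinkable objects.

```python
# Mock of the six OMR components and their call order; all crypto is
# stubbed out. Names are illustrative, not from the OMR paper.
from dataclasses import dataclass, field

@dataclass
class User:
    secret: bytes
    def clue_key(self) -> bytes:       # public clue key, shared with senders
        return self.secret             # mock: real scheme derives a public key
    def detection_key(self) -> bytes:  # uploaded once to the server
        return self.secret             # mock: real key allows only blind detection

def make_clue(clue_key: bytes) -> bytes:
    return clue_key                    # mock: real clues look like random data

@dataclass
class Server:
    board: list = field(default_factory=list)   # all (tx, clue) pairs
    def accept(self, tx: bytes, clue: bytes):
        self.board.append((tx, clue))
    def process(self, detection_key: bytes) -> list:
        # Stand-in for homomorphic digest generation.
        return [tx for tx, clue in self.board if clue == detection_key]

bob = User(secret=b"bob")
srv = Server()
srv.accept(b"tx-for-bob", make_clue(bob.clue_key()))
srv.accept(b"tx-for-carol", make_clue(b"carol"))
print(srv.process(bob.detection_key()))  # [b'tx-for-bob']
```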

I should note:

  1. Detection keys, public clue keys, and clues are unlinkable: none can be derived from another, and two instances of the same key type cannot be linked to one another.
  2. The server never learns which messages from the set of all messages are present inside a user’s message digest.

Points for adoption and questions:

  1. Detection key: The detection key size is 130 MB. The user needs to upload it to the server only once. To remain anonymous, the user may choose to upload it over a mix network (e.g., NYM). However, the detection key cannot be derived from the private keys usually used in blockchains; it requires maintaining an additional secret key. I don’t expect this to be an issue, but I’m curious whether anyone thinks it could be a hurdle?
  2. Clues: The clue size is 960 bytes. Since a clue must be embedded with every transaction, I am curious how this will affect network load. Is it a bad idea to add 960 bytes of data to each transaction?
  3. Public clue key: The public clue key size is 135 KB. Since the sender needs the recipient’s public clue key to generate a clue, they must retrieve it before sending a transaction. Public clue keys can be shared by uploading them to unique URLs (or even IPFS). However, if the sender retrieves the clue key right before broadcasting a transaction, they risk the retrieval being associated with the broadcast. To avoid this, one can use a mix network for retrieving public clue keys. However, I am unsure how to verify whether a retrieved public clue key is correct. Any thoughts?
  4. Performance: On x86 (single thread) my prototype takes 155 ms/message; the paper’s prototype takes around 80 ms/message. The difference is due to the FHE library used (I am using this library, while the paper uses SEAL). Comparing the performance of the operations required for OMR, SEAL (with hardware acceleration enabled on x86 via HEXL) performs better on most of them, but on an Apple M1 both libraries perform comparably.
    Moreover, I think it is possible to improve upon 80 ms/message. I compared the performance of SEAL with OpenFHE and found that OpenFHE performs better (links to SEAL and OpenFHE benchmarks). My benchmarks could be wrong, but I think the difference is due to algorithmic differences between the two libraries. The plan I have in mind is to pick the implementations of the operations required in OMR from OpenFHE and reimplement them in Rust. Let me know if this interests you.
  5. Server: Validators, in the case of Namada, are obvious candidates for the role of servers. Users can select their own servers and compensate them privately/anonymously for the service. Since OMR requires processing all messages for each user, distributing users across several servers is preferable. However, how a user selects a server is still an open question. What do you think?
    Looking a bit ahead into the future, an OMR implementation on GPUs should give at least an order of magnitude performance improvement. This means it might be better to separate servers and validators. Moreover, as traffic rises and OMR performance improves, there will be pressure to centralise the OMR service to the few servers that can provide it at the cheapest cost. I don’t think this is bad, but we must ensure that users who can’t afford it, or don’t use the blockchain as frequently, can still enjoy the benefits of OMR. This could include a foundation-maintained server that provides degraded performance (for example, a user’s message digest is generated only once per day, compared to once per 10 minutes in the paid service).
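On the network-load question in point 2, a quick back-of-the-envelope calculation helps. The traffic level here (50 transactions per second) is an assumed, hypothetical figure purely for illustration; only the 960-byte clue size comes from the text above.

```python
# Daily bandwidth added by clues at an assumed traffic level.
CLUE_BYTES = 960        # clue size, from the text above
TPS = 50                # transactions/second -- hypothetical assumption
SECONDS_PER_DAY = 86_400

per_day = CLUE_BYTES * TPS * SECONDS_PER_DAY
print(f"{per_day / 1e9:.1f} GB/day of clue data")  # 4.1 GB/day
```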
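One possible answer to the verification question in point 3 (a suggestion of mine, not something the OMR paper specifies): the recipient publishes a short hash of their clue key in a trusted place (e.g., on-chain alongside their address), and the sender checks the ~135 KB key fetched over the mix network against that hash. With IPFS this comes essentially for free, since the CID is itself a hash of the content.

```python
# Content-addressed verification of a fetched public clue key.
import hashlib, os

clue_key = os.urandom(135 * 1024)                  # stand-in for a real clue key
commitment = hashlib.sha256(clue_key).hexdigest()  # published by the recipient

fetched = clue_key                                 # retrieved via mixnet/IPFS
assert hashlib.sha256(fetched).hexdigest() == commitment
print("clue key verified")
```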

Hey! I wanted to give an update on the progress I have made over the last few months in improving the performance of OMR.

I will briefly describe the improvements below; see the repository link for further details.

Performance of OMR has been improved to 20 minutes in total on a single thread (down from 45 minutes). Moreover, OMR is highly parallelizable: for example, increasing the core count to 16 reduces the entire processing time to 110 seconds (or 84 seconds if we assume some precomputation). Adding more cores should reduce the processing time roughly linearly.
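As a sanity check on the scaling claim, the numbers above (20 minutes single-threaded vs. 110 seconds on 16 cores) imply roughly a 10.9x speedup; inverting Amdahl's law gives an estimate of the parallel fraction this corresponds to.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p/N); solve for p given the
# measured single-thread and 16-core times quoted above.
single, sixteen, cores = 20 * 60, 110, 16   # seconds, seconds, cores
speedup = single / sixteen
p = (1 - 1 / speedup) / (1 - 1 / cores)
print(f"speedup {speedup:.1f}x, parallel fraction ~{p:.0%}")
# speedup 10.9x, parallel fraction ~97%
```

A ~97% parallel fraction is consistent with the claim that adding more cores keeps reducing the processing time close to linearly, at least up to moderate core counts.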