Efficient Privacy-Preserving Training of Quantum Neural Networks by Using Mixed States to Represent Input Data Ensembles
Gaoyuan Wang, Jonathan Warrell, Mark Gerstein
Published: 2025/9/15
Abstract
Quantum neural networks (QNNs) are gaining increasing interest due to their potential to detect complex patterns in data by leveraging uniquely quantum phenomena. This makes them particularly promising for biomedical applications. In these applications and in other contexts, increasing statistical power often requires aggregating data from multiple participants. However, sharing data, especially sensitive information like personal genomic sequences, raises significant privacy concerns. Quantum federated learning offers a way to collaboratively train QNN models without exposing private data, but it faces major limitations, including high communication overhead and the need to retrain models when the task is modified. To overcome these challenges, we propose a privacy-preserving QNN training scheme that utilizes mixed quantum states to encode ensembles of data. This approach allows for the secure sharing of statistical information while safeguarding individual data points. QNNs can be trained directly on these mixed states, eliminating the need to access raw data. Building on this foundation, we introduce protocols supporting multi-party collaborative QNN training applicable across diverse domains. Our approach enables secure QNN training with only a single round of communication per participant, provides high training speed, and offers task generality, i.e., new analyses can be conducted without reacquiring information from participants. We present the theoretical foundation of our scheme's utility and privacy protections, which prevent the recovery of individual data points and resist membership inference attacks as measured by differential privacy. We then validate its effectiveness on three different datasets, with a focus on genomic studies and an indication of how the scheme can be used in other domains without adaptation.
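To make the core encoding concrete: a uniform ensemble of data points, each amplitude-encoded as a pure state, can be summarized as a density matrix ρ = (1/N) Σᵢ |ψᵢ⟩⟨ψᵢ|. The sketch below (a simplified illustration with NumPy, not the paper's actual implementation; `amplitude_encode` and the uniform-mixture choice are assumptions for illustration) shows how such a mixed state aggregates ensemble statistics while the individual vectors are not directly exposed in ρ.

```python
import numpy as np

def amplitude_encode(x):
    # Amplitude-encode a real feature vector as a normalized pure state
    # (hypothetical helper for illustration).
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def mixed_state_ensemble(data):
    # Uniform mixture over the ensemble: rho = (1/N) * sum_i |psi_i><psi_i|.
    # Only aggregate statistics survive in rho; individual points are not
    # stored directly, which motivates the privacy argument.
    states = [amplitude_encode(x) for x in data]
    return sum(np.outer(s, s) for s in states) / len(states)

rng = np.random.default_rng(0)
data = rng.random((8, 4))          # 8 data points, 4 features each
rho = mixed_state_ensemble(data)
print(np.isclose(np.trace(rho), 1.0))   # a valid density matrix has unit trace
print(np.allclose(rho, rho.T))          # and is Hermitian (real symmetric here)
```

A QNN can then be trained against ρ directly (e.g., via expectation values Tr(ρ O) of measured observables), which is what removes the need to access the raw per-participant data.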