From Membership-Privacy Leakage to Quantum Machine Unlearning
- URL: http://arxiv.org/abs/2509.06086v2
- Date: Thu, 16 Oct 2025 05:29:36 GMT
- Title: From Membership-Privacy Leakage to Quantum Machine Unlearning
- Authors: Junjian Su, Runze He, Guanghui Li, Sujuan Qin, Zhimin He, Haozhen Situ, Fei Gao
- Abstract summary: Quantum Machine Learning (QML) has the potential to achieve quantum advantage for specific tasks by combining quantum computation with classical Machine Learning (ML). In classical ML, a significant challenge is membership privacy leakage, whereby an attacker can infer from model outputs whether specific data were used in training. We investigate two research questions: do QML models leak membership privacy about their training data, and can MU methods efficiently mitigate such leakage in QML models?
- Score: 7.598623786321504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum Machine Learning (QML) has the potential to achieve quantum advantage for specific tasks by combining quantum computation with classical Machine Learning (ML). In classical ML, a significant challenge is membership privacy leakage, whereby an attacker can infer from model outputs whether specific data were used in training. When specific data are required to be withdrawn, removing their influence from the trained model becomes necessary. Machine Unlearning (MU) addresses this issue by enabling the model to forget the withdrawn data, thereby preventing membership privacy leakage. However, this leakage remains underexplored in QML. This raises two research questions: do QML models leak membership privacy about their training data, and can MU methods efficiently mitigate such leakage in QML models? We investigate these questions using two QNN architectures, a basic Quantum Neural Network (basic QNN) and a Hybrid QNN (HQNN), evaluated in noiseless simulations and on quantum hardware. For the first question, we design a Membership Inference Attack (MIA) tailored to QNNs in a gray-box setting. Our experiments indicate clear evidence of membership-privacy leakage in both QNNs. For the second question, we propose a Quantum Machine Unlearning (QMU) framework, comprising three MU mechanisms. Experiments on two QNN architectures show that QMU removes the influence of the withdrawn data while preserving accuracy on retained data. A comparative analysis further characterizes the three MU mechanisms with respect to data dependence, computational cost, and robustness. Overall, this work provides a potential path towards privacy-preserving QML.
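The gray-box MIA described in the abstract builds on a classical observation: trained models are typically more confident on training members than on unseen samples. A minimal, purely classical sketch of such a confidence-threshold attack follows; the confidence distributions, threshold sweep, and sample sizes here are illustrative assumptions, not the paper's actual QNN setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model's top-class confidence scores:
# members (training data) tend to receive higher confidence than
# non-members (held-out data) -- the signal a threshold-based MIA exploits.
member_conf = rng.beta(8, 2, size=1000)     # confidences on training data
nonmember_conf = rng.beta(5, 3, size=1000)  # confidences on held-out data

def mia_predict(confidences, threshold):
    """Predict membership (True) when model confidence exceeds the threshold."""
    return confidences > threshold

# Sweep thresholds and keep the one with the best balanced attack accuracy.
thresholds = np.linspace(0.0, 1.0, 201)
accuracies = [
    0.5 * (mia_predict(member_conf, t).mean()
           + (~mia_predict(nonmember_conf, t)).mean())
    for t in thresholds
]
best_acc = float(max(accuracies))
best_t = float(thresholds[int(np.argmax(accuracies))])

# Balanced attack accuracy well above 0.5 signals membership-privacy leakage.
print(f"threshold={best_t:.2f}  attack accuracy={best_acc:.3f}")
```

The same thresholding idea transfers to QNN outputs: replace the synthetic confidences with measured output probabilities of the trained circuit on member and non-member inputs.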
Related papers
- Quantum Quandaries: Unraveling Encoding Vulnerabilities in Quantum Neural Networks [2.348041867134616]
This work demonstrates that adversaries in quantum cloud environments can exploit white-box access to QML models. We report that 95% of the time, the encoding can be predicted correctly. To mitigate this threat, we propose a transient obfuscation layer that masks encoding fingerprints.
arXiv Detail & Related papers (2025-02-03T16:21:16Z) - Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era [2.348041867134616]
A key concern in the Quantum Machine Learning (QML) domain is the threat of data poisoning attacks in the current quantum cloud setting. In this work, we first propose a simple yet effective technique to measure intra-class encoder state similarity (ESS) by analyzing the outputs of encoding circuits. Through extensive experiments conducted in both noiseless and noisy environments, we introduce a Quantum Indiscriminate Data poisoning attack, QUID.
arXiv Detail & Related papers (2024-11-21T18:46:45Z) - Security Concerns in Quantum Machine Learning as a Service [2.348041867134616]
Quantum machine learning (QML) is a category of algorithms that employ variational quantum circuits (VQCs) to tackle machine learning tasks.
Recent discoveries have shown that QML models can effectively generalize from limited training data samples.
QML represents a hybrid model that utilizes both classical and quantum computing resources.
arXiv Detail & Related papers (2024-08-18T18:21:24Z) - Quantum Data Breach: Reusing Training Dataset by Untrusted Quantum Clouds [2.348041867134616]
We show that adversaries in quantum clouds can use white-box access to the QML model during training to extract the labels.
The extracted training data can be reused for training a clone model or sold for profit.
We propose a suite of techniques to prune and fix the incorrect labels.
arXiv Detail & Related papers (2024-07-19T22:06:34Z) - The Quantum Imitation Game: Reverse Engineering of Quantum Machine Learning Models [2.348041867134616]
Quantum Machine Learning (QML) amalgamates quantum computing paradigms with machine learning models.
With the expansion of numerous third-party vendors in the Noisy Intermediate-Scale Quantum (NISQ) era of quantum computing, the security of QML models is of prime importance.
We assume the untrusted quantum cloud provider is an adversary having white-box access to the transpiled user-designed trained QML model during inference.
arXiv Detail & Related papers (2024-07-09T21:35:19Z) - LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating large language models.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
arXiv Detail & Related papers (2024-05-09T11:49:05Z) - Predominant Aspects on Security for Quantum Machine Learning: Literature Review [0.0]
Quantum Machine Learning (QML) has emerged as a promising intersection of quantum computing and classical machine learning.
This paper discusses the question which security concerns and strengths are connected to QML by means of a systematic literature review.
arXiv Detail & Related papers (2024-01-15T15:35:43Z) - QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z) - Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) with a hope to utilize quantum advantage to speed up IL.
We develop two QIL algorithms, quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experiment results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - QSAN: A Near-term Achievable Quantum Self-Attention Network [73.15524926159702]
Self-Attention Mechanism (SAM) is good at capturing the internal connections of features.
A novel Quantum Self-Attention Network (QSAN) is proposed for image classification tasks on near-term quantum devices.
arXiv Detail & Related papers (2022-07-14T12:22:51Z) - Towards Efficient Post-training Quantization of Pre-trained Language Models [85.68317334241287]
We study post-training quantization (PTQ) of PLMs, and propose module-wise quantization error minimization (MREM), an efficient solution to mitigate these issues.
Experiments on GLUE and SQuAD benchmarks show that our proposed PTQ solution not only performs close to QAT, but also enjoys significant reductions in training time, memory overhead, and data consumption.
arXiv Detail & Related papers (2021-09-30T12:50:06Z) - The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from the severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z) - Quantum machine learning with differential privacy [3.2442879131520126]
We develop a hybrid quantum-classical model that is trained to preserve privacy using a differentially private optimization algorithm.
Experiments demonstrate that differentially private QML can protect user-sensitive information without diminishing model accuracy.
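Differentially private training of the kind summarized above typically clips each per-example gradient and adds calibrated Gaussian noise before the parameter update. A minimal classical sketch of one such noisy update for linear regression follows; the clip norm, noise multiplier, learning rate, and toy data are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.1):
    """One DP-SGD-style step for linear regression with squared loss."""
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w.
    residuals = X @ w - y                   # shape (n,)
    per_ex_grads = residuals[:, None] * X   # shape (n, d)
    # Clip each per-example gradient to L2 norm <= clip.
    norms = np.linalg.norm(per_ex_grads, axis=1, keepdims=True)
    clipped = per_ex_grads / np.maximum(1.0, norms / clip)
    # Average, then add Gaussian noise scaled to the clipping bound.
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        0.0, noise_mult * clip / len(X), size=w.shape)
    return w - lr * noisy_grad

# Toy data: y = 2*x0 - x1 plus small observation noise.
X = rng.normal(size=(256, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=256)

w = np.zeros(2)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print("learned weights:", w)  # should approach [2, -1] despite the noise
```

In the hybrid quantum-classical setting, the same clip-and-noise step would be applied to the estimated gradients of the variational circuit parameters rather than to linear-regression gradients.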
arXiv Detail & Related papers (2021-03-10T18:06:15Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that any concept that can be efficiently learned by a noiseless QNN can also be effectively learned by a QNN subject to gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.