Black-Box Auditing of Quantum Model: Lifted Differential Privacy with Quantum Canaries
- URL: http://arxiv.org/abs/2512.14388v1
- Date: Tue, 16 Dec 2025 13:26:41 GMT
- Title: Black-Box Auditing of Quantum Model: Lifted Differential Privacy with Quantum Canaries
- Authors: Baobao Song, Shiva Raj Pokhrel, Athanasios V. Vasilakos, Tianqing Zhu, Gang Li,
- Abstract summary: Quantum machine learning (QML) promises significant computational advantages, yet models trained on sensitive data risk memorizing individual records. We introduce the first black-box privacy auditing framework for QML based on Lifted Quantum Differential Privacy.
- Score: 27.661393333662474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum machine learning (QML) promises significant computational advantages, yet models trained on sensitive data risk memorizing individual records, creating serious privacy vulnerabilities. While Quantum Differential Privacy (QDP) mechanisms provide theoretical worst-case guarantees, they critically lack empirical verification tools for deployed models. We introduce the first black-box privacy auditing framework for QML based on Lifted Quantum Differential Privacy, leveraging quantum canaries (strategically offset-encoded quantum states) to detect memorization and precisely quantify privacy leakage during training. Our framework establishes a rigorous mathematical connection between canary offset and trace distance bounds, deriving empirical lower bounds on privacy budget consumption that bridge the critical gap between theoretical guarantees and practical privacy verification. Comprehensive evaluations across both simulated and physical quantum hardware demonstrate our framework's effectiveness in measuring actual privacy loss in QML models, enabling robust privacy verification in QML systems.
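As a rough illustration of the auditing idea (not the paper's actual construction), the sketch below encodes a canary as a small offset in a single-qubit amplitude encoding, computes the trace distance between the base and offset states, and converts a hypothetical membership-attack accuracy into the standard empirical lower bound on the privacy budget. The encoding, the sin(δ/2) trace-distance relation for pure states, and the ln(p/(1−p)) bound are textbook stand-ins, not the paper's exact derivation.

```python
import math

def canary_state(theta):
    """Single-qubit amplitude encoding |psi(theta)> = [cos(t/2), sin(t/2)]."""
    return (math.cos(theta / 2), math.sin(theta / 2))

def trace_distance(a, b):
    """Trace distance between two pure qubit states: sqrt(1 - |<a|b>|^2)."""
    overlap = a[0] * b[0] + a[1] * b[1]
    return math.sqrt(max(0.0, 1.0 - overlap ** 2))

def empirical_epsilon_lower_bound(attack_accuracy):
    """Standard DP-auditing bound: a distinguisher that tells neighbouring
    inputs apart with accuracy p certifies eps >= ln(p / (1 - p))."""
    p = attack_accuracy
    return math.log(p / (1.0 - p))

theta, delta = 0.7, 0.3                   # base encoding angle and canary offset
base = canary_state(theta)
canary = canary_state(theta + delta)
td = trace_distance(base, canary)         # equals sin(delta/2) for pure states
print(f"trace distance = {td:.4f}")
print(f"eps lower bound at 75% attack accuracy = "
      f"{empirical_epsilon_lower_bound(0.75):.4f}")   # ln(3) ~ 1.0986
```

A larger canary offset yields a larger trace distance, making the canary easier to detect; the framework's contribution is relating this knob to a rigorous lower bound on the privacy budget actually consumed.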
Related papers
- Differentially Private Federated Quantum Learning via Quantum Noise [9.540961602976965]
Quantum federated learning (QFL) enables collaborative training of quantum machine learning (QML) models across distributed quantum devices without raw data exchange. QFL remains vulnerable to adversarial attacks, where shared QML model updates can be exploited to undermine information privacy. This paper explores a novel DP mechanism that harnesses quantum noise to safeguard quantum models throughout the QFL process.
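The paper's mechanism draws its noise from the quantum hardware itself; as a hedged classical stand-in for the same clip-and-perturb pattern, the sketch below clips a model update to a fixed L2 norm and adds Gaussian noise calibrated by the familiar analytic bound σ = C·√(2 ln(1.25/δ))/ε. Function and parameter names are illustrative, not an API from the paper.

```python
import math, random

def dp_noisy_update(update, clip_norm, epsilon, delta, rng):
    """Clip a model update to L2 norm `clip_norm`, then add Gaussian noise
    with sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [u + rng.gauss(0.0, sigma) for u in clipped]

rng = random.Random(0)
noisy = dp_noisy_update([3.0, 4.0], clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=rng)
print(noisy)   # clipped to norm 1, then perturbed
```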
arXiv Detail & Related papers (2025-08-27T22:56:16Z)
- Minimal Quantum Reservoirs with Hamiltonian Encoding [72.27323884094953]
We investigate a minimal architecture for quantum reservoir computing based on Hamiltonian encoding. This approach circumvents many of the experimental overheads typically associated with quantum machine learning.
arXiv Detail & Related papers (2025-05-28T16:50:05Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM). CEPAM achieves communication efficiency and privacy protection simultaneously. We theoretically analyze CEPAM's privacy guarantee and investigate the trade-off between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - The Effect of Quantization in Federated Learning: A Rényi Differential Privacy Perspective [15.349042342071439]
Federated Learning (FL) is an emerging paradigm that holds great promise for privacy-preserving machine learning using distributed data.
To enhance privacy, FL can be combined with Differential Privacy (DP), which involves adding Gaussian noise to the model weights.
This research paper investigates the impact of quantization on privacy in FL systems.
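To make the two moving parts concrete, the hedged sketch below pairs uniform weight quantization with the standard Rényi DP accountant for the Gaussian mechanism, ε(α) = α·Δ²/(2σ²). This is a generic illustration of the quantization-plus-DP pipeline the abstract describes, not the paper's specific analysis of how quantization changes the privacy guarantee.

```python
import math

def quantize(w, step):
    """Uniform quantization of a weight vector onto a grid of spacing `step`."""
    return [round(x / step) * step for x in w]

def gaussian_rdp(alpha, sensitivity, sigma):
    """Renyi DP of the Gaussian mechanism at order alpha:
    eps(alpha) = alpha * Delta^2 / (2 sigma^2)."""
    return alpha * sensitivity ** 2 / (2 * sigma ** 2)

print(quantize([0.123, -0.456], 0.05))
print(gaussian_rdp(alpha=2, sensitivity=1.0, sigma=1.0))   # -> 1.0
```

The interesting question the paper raises is how the quantization step interacts with the noise scale σ; the accountant above treats them independently, which is exactly the simplification such an analysis has to move beyond.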
arXiv Detail & Related papers (2024-05-16T13:50:46Z) - GQHAN: A Grover-inspired Quantum Hard Attention Network [53.96779043113156]
Grover-inspired Quantum Hard Attention Mechanism (GQHAM) is proposed.
GQHAN adeptly surmounts the non-differentiability hurdle, surpassing the efficacy of extant quantum soft self-attention mechanisms.
The proposal of GQHAN lays the foundation for future quantum computers to process large-scale data, and promotes the development of quantum computer vision.
arXiv Detail & Related papers (2024-01-25T11:11:16Z) - Harnessing Inherent Noises for Privacy Preservation in Quantum Machine
Learning [11.45148186874482]
We propose to harness inherent quantum noises to protect data privacy in quantum machine learning.
Especially, considering the Noisy Intermediate-Scale Quantum (NISQ) devices, we leverage the unavoidable shot noise and incoherent noise.
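Shot noise is the statistical fluctuation that comes from estimating an expectation value with finitely many measurements. The sketch below, a simulation rather than anything from the paper, estimates ⟨Z⟩ from repeated projective measurements and shows the spread of the estimate shrinking roughly as 1/√shots; fewer shots means more inherent noise, which is the resource these papers repurpose for privacy.

```python
import random, statistics

def measure_expectation(p_one, shots, rng):
    """Estimate <Z> = 1 - 2*P(1) from `shots` projective measurements;
    the finite-shot (binomial) fluctuation is the shot noise."""
    ones = sum(rng.random() < p_one for _ in range(shots))
    return 1.0 - 2.0 * ones / shots

rng = random.Random(42)
for shots in (10, 100, 10000):
    ests = [measure_expectation(0.3, shots, rng) for _ in range(200)]
    print(shots, round(statistics.stdev(ests), 4))  # spread shrinks ~ 1/sqrt(shots)
```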
arXiv Detail & Related papers (2023-12-18T11:52:44Z) - Differential Privacy Preserving Quantum Computing via Projection Operator Measurements [15.024190374248088]
In classical computing, we can incorporate the concept of differential privacy (DP) to meet the standard of privacy preservation.
In the quantum computing scenario, researchers have extended classic DP to quantum differential privacy (QDP) by considering the quantum noise.
We show that shot noise can effectively provide privacy protection in quantum computing.
arXiv Detail & Related papers (2023-12-13T15:27:26Z) - Federated Quantum Machine Learning with Differential Privacy [9.755412365451985]
We present a successful implementation of privacy-preservation methods by performing the binary classification of the Cats vs Dogs dataset.
We show that federated differentially private training is a viable privacy preservation method for quantum machine learning on Noisy Intermediate-Scale Quantum (NISQ) devices.
arXiv Detail & Related papers (2023-10-10T19:52:37Z) - QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z) - Robust and efficient verification of graph states in blind
measurement-based quantum computation [52.70359447203418]
Blind quantum computation (BQC) is a secure quantum computation method that protects the privacy of clients.
It is crucial to verify whether the resource graph states are accurately prepared in the adversarial scenario.
Here, we propose a robust and efficient protocol for verifying arbitrary graph states with any prime local dimension.
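For intuition only, the minimal check below builds the two-vertex qubit graph state (CZ applied to |+⟩|+⟩) and confirms that its stabilizers K₁ = X⊗Z and K₂ = Z⊗X each have expectation +1. The paper's protocol verifies arbitrary graph states of any prime local dimension in an adversarial setting, which is far beyond this toy example.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Two-vertex graph state: apply CZ to |+>|+>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
state = CZ @ np.kron(plus, plus)

# Graph-state stabilizers K_v = X_v * prod_{w~v} Z_w; here K1 = X(x)Z, K2 = Z(x)X.
for K in (np.kron(X, Z), np.kron(Z, X)):
    print(round(state @ K @ state, 6))   # each expectation is +1 for the true state
```

A verification protocol measures such stabilizers on copies of the claimed state; any preparation that is far from the target graph state must fail some stabilizer test with noticeable probability.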
arXiv Detail & Related papers (2023-05-18T06:24:45Z)
- Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived.
This is the first quantum protocol that can be used against the most general adversaries.
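The core intuition behind such robustness bounds is that depolarizing noise contracts the distinguishability of quantum states, limiting how much an adversarial perturbation can change the classifier's output. The sketch below (a numerical illustration, not the paper's derivation) applies a single-qubit depolarizing channel ρ → (1−p)ρ + p·I/2 to two orthogonal states and checks that their trace distance shrinks by exactly (1−p).

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel: rho -> (1-p) rho + p I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * ||rho - sigma||_1 via eigenvalues of the difference."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

rho0 = np.diag([1.0, 0.0])   # |0><0|
rho1 = np.diag([0.0, 1.0])   # |1><1|
p = 0.3
before = trace_distance(rho0, rho1)
after = trace_distance(depolarize(rho0, p), depolarize(rho1, p))
print(before, after)   # the channel contracts distinguishability by (1 - p)
```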
arXiv Detail & Related papers (2020-03-20T17:56:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.