Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise
- URL: http://arxiv.org/abs/2211.00887v2
- Date: Fri, 28 Apr 2023 05:32:38 GMT
- Title: Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise
- Authors: Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su,
Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo
- Abstract summary: We show that adding quantum random rotation noise can improve robustness in quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
- Score: 68.1992787416233
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recently, quantum classifiers have been found to be vulnerable to adversarial
attacks, in which quantum classifiers are deceived by imperceptible noises,
leading to misclassification. In this paper, we propose the first theoretical
study demonstrating that adding quantum random rotation noise can improve
robustness in quantum classifiers against adversarial attacks. We draw a
connection to the definition of differential privacy and show that a quantum
classifier trained in the natural presence of additive noise is differentially private. Finally,
we derive a certified robustness bound to enable quantum classifiers to defend
against adversarial examples, supported by experimental results simulated with
noise from IBM's 7-qubit device.
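The noise-then-vote mechanism described in the abstract can be illustrated classically. Below is a minimal sketch, assuming a hypothetical single-qubit angle-encoded classifier and Gaussian rotation noise; the certified radius uses the classical Gaussian randomized-smoothing bound as a stand-in, not the paper's quantum differential-privacy bound.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def classify(theta):
    # Hypothetical single-qubit classifier: the input is angle-encoded,
    # and we threshold the measurement probability P(|1>) = sin^2(theta/2).
    return int(np.sin(theta / 2) ** 2 > 0.5)

def smoothed_classify(theta, sigma=0.3, shots=2000):
    # Add random rotation noise to the encoding angle and take a
    # majority vote over shots, mimicking classification under
    # random rotation noise.
    noisy = theta + rng.normal(0.0, sigma, size=shots)
    votes = np.array([classify(t) for t in noisy])
    p1 = votes.mean()
    label = int(p1 > 0.5)
    # Clip the top-class frequency away from 1 so the inverse CDF
    # below stays finite with a finite number of shots.
    p_top = min(max(p1, 1.0 - p1), 1.0 - 1.0 / shots)
    # Classical randomized-smoothing certificate (Gaussian noise):
    # the vote is provably stable against input perturbations
    # smaller than this radius.
    radius = sigma * NormalDist().inv_cdf(p_top)
    return label, p_top, radius

label, p_top, radius = smoothed_classify(0.8 * np.pi)
```

The key design point mirrors the abstract: robustness is not a property of the base classifier but of the smoothed, noise-averaged predictor, and the certificate grows with how confidently the noisy votes agree.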
Related papers
- Adversarial Robustness Guarantees for Quantum Classifiers [0.4934360430803066]
We show that quantum properties of QML algorithms can confer fundamental protections against such attacks.
We leverage tools from many-body physics to identify the quantum sources of this protection.
arXiv Detail & Related papers (2024-05-16T18:00:01Z)
- Quantum Adversarial Learning for Kernel Methods [0.0]
We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable to adversarial attacks.
Simple defence strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks.
arXiv Detail & Related papers (2024-04-08T19:23:17Z)
- Power Characterization of Noisy Quantum Kernels [52.47151453259434]
We show that noise may leave quantum kernel methods with only poor prediction capability, even when the generalization error is small.
We thus provide a crucial warning about employing noisy quantum kernel methods for quantum computation.
arXiv Detail & Related papers (2024-01-31T01:02:16Z)
- Quantum Conformal Prediction for Reliable Uncertainty Quantification in Quantum Machine Learning [47.991114317813555]
Quantum models implement implicit probabilistic predictors that produce multiple random decisions for each input through measurement shots.
This paper proposes to leverage such randomness to define prediction sets for both classification and regression that provably capture the uncertainty of the model.
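The idea of turning shot-level randomness into prediction sets can be sketched with simulated shot counts. This is a minimal split-conformal sketch under assumed class distributions and a hypothetical shot simulator; it is not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def shot_frequencies(probs, shots=100):
    # Simulate measurement shots drawn from a model's output
    # distribution and return the empirical class frequencies.
    return rng.multinomial(shots, probs) / shots

def prediction_set(freqs, cal_scores, alpha=0.1):
    # Split conformal prediction: keep every class whose nonconformity
    # score (1 - empirical shot frequency) is below the calibration
    # quantile, giving roughly (1 - alpha) marginal coverage.
    n = len(cal_scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, level)
    return [c for c, f in enumerate(freqs) if 1.0 - f <= q]

# Hypothetical calibration data: nonconformity of the true class on
# 50 held-out inputs whose assumed true-class probability is 0.8.
cal_scores = np.array(
    [1.0 - shot_frequencies([0.8, 0.15, 0.05])[0] for _ in range(50)]
)

# A confident test input keeps only its top class in the set.
pred = prediction_set(shot_frequencies([0.95, 0.03, 0.02]), cal_scores)
```

Note that the prediction set widens automatically when shot frequencies are ambiguous, which is how measurement randomness becomes a usable uncertainty estimate.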
arXiv Detail & Related papers (2023-04-06T22:05:21Z)
- Enhancing Quantum Adversarial Robustness by Randomized Encodings [10.059889429655582]
We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples.
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
arXiv Detail & Related papers (2022-12-05T19:00:08Z)
- Suppressing Amplitude Damping in Trapped Ions: Discrete Weak Measurements for a Non-unitary Probabilistic Noise Filter [62.997667081978825]
We introduce a low-overhead protocol to reverse this degradation.
We present two trapped-ion schemes for the implementation of a non-unitary probabilistic filter against amplitude damping noise.
This filter can be understood as a protocol for single-copy quasi-distillation.
arXiv Detail & Related papers (2022-09-06T18:18:41Z)
- Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is that based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
arXiv Detail & Related papers (2022-04-26T09:52:02Z)
- Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing [14.684867444153625]
Quantum machine learning models have the potential to offer speedups and better predictive accuracy compared to their classical counterparts.
These quantum algorithms, like their classical counterparts, have been shown to be vulnerable to input perturbations.
These can arise either from noisy implementations or, as a worst-case type of noise, adversarial attacks.
arXiv Detail & Related papers (2020-09-21T17:55:28Z)
- Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived.
This is the first quantum protocol that can be used against the most general adversaries.
arXiv Detail & Related papers (2020-03-20T17:56:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.