Adversarial Robustness Guarantees for Quantum Classifiers
- URL: http://arxiv.org/abs/2405.10360v1
- Date: Thu, 16 May 2024 18:00:01 GMT
- Title: Adversarial Robustness Guarantees for Quantum Classifiers
- Authors: Neil Dowling, Maxwell T. West, Angus Southwell, Azar C. Nakhl, Martin Sevior, Muhammad Usman, Kavan Modi
- Abstract summary: We show that quantum properties of QML algorithms can confer fundamental protections against such attacks.
We leverage tools from many-body physics to identify the quantum sources of this protection.
- Score: 0.4934360430803066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their ever more widespread deployment throughout society, machine learning algorithms remain critically vulnerable to being spoofed by subtle adversarial tampering with their input data. The prospect of near-term quantum computers being capable of running quantum machine learning (QML) algorithms has therefore generated intense interest in their adversarial vulnerability. Here we show that quantum properties of QML algorithms can confer fundamental protections against such attacks, in certain scenarios guaranteeing robustness against classically armed adversaries. We leverage tools from many-body physics to identify the quantum sources of this protection. Our results offer a theoretical underpinning of recent evidence that suggests quantum advantages in the search for adversarial robustness. In particular, we prove that quantum classifiers are: (i) protected against weak perturbations of data drawn from the trained distribution, (ii) protected against local attacks if they are insufficiently scrambling, and (iii) protected against universal adversarial attacks if they are sufficiently quantum chaotic. Our analytic results are supported by numerical evidence demonstrating the applicability of our theorems and the resulting robustness of a quantum classifier in practice. This line of inquiry constitutes a concrete pathway to advantage in QML, orthogonal to the usually sought improvements in model speed or accuracy.
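The flavour of result (i) can be conveyed with a deliberately minimal toy sketch (my own illustration, not the authors' construction): a single-qubit classifier whose output probability is Lipschitz in the input angle, so a weak perturbation of the data shifts the class score by at most a proportional amount.

```python
# Toy sketch (hypothetical, not the paper's model): a single-qubit classifier
# f(x) = |<1| R_y(theta) R_y(x) |0>|^2 = sin^2((theta + x) / 2).
# Since |df/dx| <= 1/2, a weak perturbation x -> x + eps can shift the
# class probability by at most eps / 2 -- the flavour of protection (i).
import numpy as np

def ry(angle: float) -> np.ndarray:
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def classify_prob(x: float, theta: float) -> float:
    """Probability of measuring |1> after encoding x and applying the model."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return float(np.abs(state[1]) ** 2)

theta = 0.7          # hypothetical "trained" parameter
x, eps = 1.2, 1e-3   # clean input and a weak adversarial perturbation
shift = abs(classify_prob(x + eps, theta) - classify_prob(x, theta))
print(f"output shift: {shift:.2e} (Lipschitz bound: {eps / 2:.2e})")
```

The paper's theorems concern far more general many-qubit classifiers and data drawn from the trained distribution; the toy model only shows why bounded sensitivity to weak perturbations is a natural property of such circuits.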
Related papers
- Unveiling Hidden Vulnerabilities in Quantum Systems by Expanding Attack Vectors through Heisenberg's Uncertainty Principle [0.0]
This study uncovers novel vulnerabilities within Quantum Key Distribution (QKD) protocols.
The newly identified vulnerabilities arise from the complex interaction between Bell Inequalities (BIs) and Hidden Variable Theories (HVTs).
arXiv Detail & Related papers (2024-09-27T06:18:36Z)
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning in the landscape of quantum data.
We find that training efficiency and generalization capability in quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) with the aim of exploiting quantum advantage to speed up imitation learning (IL).
We develop two QIL algorithms: quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z)
- Enhancing Quantum Adversarial Robustness by Randomized Encodings [10.059889429655582]
We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples.
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
arXiv Detail & Related papers (2022-12-05T19:00:08Z)
- Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise [68.1992787416233]
We show that adding quantum random rotation noise can improve robustness in quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
arXiv Detail & Related papers (2022-11-02T05:17:04Z)
- Experimental quantum adversarial learning with programmable superconducting qubits [15.24718195264974]
We report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits.
Our results reveal experimentally a crucial vulnerability aspect of quantum learning systems under adversarial scenarios.
arXiv Detail & Related papers (2022-04-04T18:00:00Z)
- Circuit Symmetry Verification Mitigates Quantum-Domain Impairments [69.33243249411113]
We propose circuit-oriented symmetry verification that is capable of verifying the commutativity of quantum circuits without knowledge of the quantum state.
In particular, we propose the spatio-temporal stabilizer (STS) technique, which generalizes the conventional quantum-domain formalism to circuit-oriented stabilizers.
arXiv Detail & Related papers (2021-12-27T21:15:35Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
Using Boltzmann machines on the MNIST dataset, we find robustness improvements ranging from 5% to 72% against such attacks.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
- Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing [14.684867444153625]
Quantum machine learning models have the potential to offer speedups and better predictive accuracy compared to their classical counterparts.
These quantum algorithms, like their classical counterparts, have been shown to be vulnerable to input perturbations.
These can arise either from noisy implementations or, as a worst-case type of noise, adversarial attacks.
arXiv Detail & Related papers (2020-09-21T17:55:28Z)
- Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived (a toy sketch of the underlying noise-smoothing mechanism follows this list).
This is the first quantum protocol that can be used against the most general adversaries.
arXiv Detail & Related papers (2020-03-20T17:56:14Z)
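As flagged in the entry on noise-protected classifiers above, here is a hedged toy illustration (my own sketch, not any cited paper's derivation) of why depolarising noise yields robustness certificates: applied before a K-outcome measurement, it maps the output distribution y to (1 - p) y + p / K, shrinking every gap between class scores by the common factor (1 - p) while leaving the predicted class unchanged, so an adversary must still overcome a known margin.

```python
# Hedged sketch: depolarising noise of strength p smooths a K-class
# output distribution as y -> (1 - p) * y + p / K.  All score gaps shrink
# by the same factor (1 - p), so the argmax (predicted class) is preserved
# and the surviving margin can be turned into a robustness certificate.
import numpy as np

def depolarise(y: np.ndarray, p: float) -> np.ndarray:
    """Output distribution after depolarising noise of strength p."""
    return (1.0 - p) * y + p / y.size

y = np.array([0.55, 0.30, 0.15])        # hypothetical clean class probabilities
for p in (0.0, 0.3, 0.6):
    smoothed = depolarise(y, p)
    margin = smoothed[0] - smoothed[1]  # gap of the predicted class
    print(f"p={p:.1f}  probs={np.round(smoothed, 3)}  margin={margin:.3f}")
```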