Universal adversarial perturbations for multiple classification tasks
with quantum classifiers
- URL: http://arxiv.org/abs/2306.11974v3
- Date: Wed, 25 Oct 2023 09:09:48 GMT
- Title: Universal adversarial perturbations for multiple classification tasks
with quantum classifiers
- Authors: Yun-Zhong Qiu
- Abstract summary: Quantum adversarial machine learning studies the vulnerability of quantum learning systems against adversarial perturbations.
In this paper, we explore the quantum universal perturbations in the context of heterogeneous classification tasks.
We find that quantum classifiers that achieve almost state-of-the-art accuracy on two different classification tasks can both be conclusively deceived by a single carefully crafted universal perturbation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum adversarial machine learning is an emerging field that studies the
vulnerability of quantum learning systems against adversarial perturbations and
develops possible defense strategies. Quantum universal adversarial
perturbations are small perturbations that can turn many different input samples
into adversarial examples capable of deceiving a given quantum classifier. This
field has rarely been explored but is worth investigating, because
universal perturbations could simplify malicious attacks to a large extent,
causing unexpected devastation to quantum machine learning models. In this
paper, we take a step forward and explore the quantum universal perturbations
in the context of heterogeneous classification tasks. In particular, we find
that quantum classifiers that achieve almost state-of-the-art accuracy on two
different classification tasks can both be conclusively deceived by a single
carefully crafted universal perturbation. This result is explicitly
demonstrated with well-designed quantum continual learning models that use the
elastic weight consolidation method to avoid catastrophic forgetting, as well as
real-life heterogeneous datasets from hand-written digits and medical MRI
images. Our results provide a simple and efficient way to generate universal
perturbations for heterogeneous classification tasks and thus offer
valuable guidance for future quantum learning technologies.
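The core idea above, one shared perturbation optimized to fool classifiers trained on different tasks, can be illustrated with a minimal classical surrogate. The sketch below is not the paper's method: it replaces the quantum classifiers with two tiny NumPy logistic models (all names, dimensions, and hyperparameters are illustrative assumptions) and optimizes a single perturbation `delta`, shared across all samples of both tasks, by projected gradient ascent on the summed loss.

```python
import numpy as np

def make_task(seed, n=200, d=4):
    """A hypothetical linearly separable binary task (stand-in for one
    of the paper's classification tasks)."""
    r = np.random.default_rng(seed)
    w = r.normal(size=d)               # fixed "trained" weight vector
    X = r.normal(size=(n, d))
    y = (X @ w > 0).astype(float)
    return X, y, w

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(X, y, w):
    """Gradient of the logistic loss with respect to the inputs X."""
    p = sigmoid(X @ w)
    return np.outer(p - y, w)          # shape (n, d), one row per sample

def accuracy(X, y, w):
    return float((sigmoid(X @ w).round() == y).mean())

tasks = [make_task(1), make_task(2)]

# Universal perturbation: ONE delta shared by all samples of BOTH tasks,
# updated by gradient ascent on the summed loss and projected back onto
# an l-infinity ball of radius eps to keep it small.
eps, step, delta = 0.8, 0.1, np.zeros(4)
for _ in range(200):
    g = np.zeros(4)
    for X, y, w in tasks:
        g += loss_grad_wrt_input(X + delta, y, w).mean(axis=0)
    delta += step * g                  # ascent: increase the loss
    delta = np.clip(delta, -eps, eps)  # projection step

for X, y, w in tasks:
    print("clean:", accuracy(X, y, w), "attacked:", accuracy(X + delta, y, w))
```

In an actual quantum setting the input-gradient computation would go through the quantum circuit (e.g. via parameter-shift rules), but the outer loop, accumulating one shared perturbation across heterogeneous tasks under a norm constraint, is the same.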
Related papers
- Observation of disorder-free localization and efficient disorder averaging on a quantum processor [117.33878347943316]
We implement an efficient procedure on a quantum processor, leveraging quantum parallelism, to sample over all disorder realizations.
We observe localization without disorder in quantum many-body dynamics in one and two dimensions.
arXiv Detail & Related papers (2024-10-09T05:28:14Z) - The curse of random quantum data [62.24825255497622]
We quantify the performances of quantum machine learning in the landscape of quantum data.
We find that the training efficiency and generalization capabilities in quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z) - Quantum Adversarial Learning for Kernel Methods [0.0]
We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable to adversarial attacks.
Simple defence strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks.
arXiv Detail & Related papers (2024-04-08T19:23:17Z) - Quantum algorithms: A survey of applications and end-to-end complexities [90.05272647148196]
The anticipated applications of quantum computers span across science and industry.
We present a survey of several potential application areas of quantum algorithms.
We outline the challenges and opportunities in each area in an "end-to-end" fashion.
arXiv Detail & Related papers (2023-10-04T17:53:55Z) - Enhancing Quantum Adversarial Robustness by Randomized Encodings [10.059889429655582]
We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples.
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
arXiv Detail & Related papers (2022-12-05T19:00:08Z) - Certified Robustness of Quantum Classifiers against Adversarial Examples
through Quantum Noise [68.1992787416233]
We show that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
arXiv Detail & Related papers (2022-11-02T05:17:04Z) - Experimental quantum adversarial learning with programmable
superconducting qubits [15.24718195264974]
We show the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits.
Our results reveal experimentally a crucial vulnerability aspect of quantum learning systems under adversarial scenarios.
arXiv Detail & Related papers (2022-04-04T18:00:00Z) - Sensing quantum chaos through the non-unitary geometric phase [62.997667081978825]
We propose a decoherent mechanism for sensing quantum chaos.
The chaotic nature of a many-body quantum system is sensed by studying the implications that the system produces in the long-time dynamics of a probe coupled to it.
arXiv Detail & Related papers (2021-04-13T17:24:08Z) - Universal Adversarial Examples and Perturbations for Quantum Classifiers [0.0]
We study the universality of adversarial examples and perturbations for quantum classifiers.
We prove that for a set of $k$ classifiers with each receiving input data of $n$ qubits, an $O(\frac{k}{2^n})$ increase of the perturbation strength is enough to ensure a moderate universal adversarial risk.
arXiv Detail & Related papers (2021-02-15T19:00:09Z) - Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived.
This is the first quantum protocol that can be used against the most general adversaries.
arXiv Detail & Related papers (2020-03-20T17:56:14Z) - Quantum Adversarial Machine Learning [0.0]
Adversarial machine learning is an emerging field that focuses on studying vulnerabilities of machine learning approaches in adversarial settings.
In this paper, we explore different adversarial scenarios in the context of quantum machine learning.
We find that a quantum classifier that achieves nearly the state-of-the-art accuracy can be conclusively deceived by adversarial examples.
arXiv Detail & Related papers (2019-12-31T19:00:12Z)