Universal Adversarial Examples and Perturbations for Quantum Classifiers
- URL: http://arxiv.org/abs/2102.07788v1
- Date: Mon, 15 Feb 2021 19:00:09 GMT
- Title: Universal Adversarial Examples and Perturbations for Quantum Classifiers
- Authors: Weiyuan Gong and Dong-Ling Deng
- Abstract summary: We study the universality of adversarial examples and perturbations for quantum classifiers.
We prove that for a set of $k$ classifiers, each receiving input data of $n$ qubits, an $O(\frac{\ln k}{2^n})$ increase of the perturbation strength is enough to ensure a moderate universal adversarial risk (a toy illustration follows the abstract below).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum machine learning explores the interplay between machine learning and
quantum physics, which may lead to unprecedented perspectives for both fields.
In fact, recent works have shown strong evidence that quantum computers could
outperform classical computers in solving certain notable machine learning
tasks. Yet, quantum learning systems may also suffer from the vulnerability
problem: adding a tiny carefully-crafted perturbation to the legitimate input
data would cause the systems to make incorrect predictions at a notably high
confidence level. In this paper, we study the universality of adversarial
examples and perturbations for quantum classifiers. Through concrete examples
involving classifications of real-life images and quantum phases of matter, we
show that there exist universal adversarial examples that can fool a set of
different quantum classifiers. We prove that for a set of $k$ classifiers with
each receiving input data of $n$ qubits, an $O(\frac{\ln k}{2^n})$ increase of
the perturbation strength is enough to ensure a moderate universal adversarial
risk. In addition, for a given quantum classifier we show that there exist
universal adversarial perturbations, which can be added to different legitimate
samples and turn them into adversarial examples for the classifier. Our
results reveal the universality perspective of adversarial attacks for quantum
machine learning systems, which would be crucial for practical applications of
both near-term and future quantum technologies in solving machine learning
problems.
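To make these two claims concrete, here is a minimal, self-contained sketch (not the authors' code). It first evaluates the $O(\frac{\ln k}{2^n})$ perturbation-budget overhead for a few illustrative $(k, n)$ pairs, then crafts one universal perturbation against a set of toy "quantum classifiers", each modeled as a fixed Haar-random unitary acting on an amplitude-encoded input followed by a measurement of the first qubit. The constants, the finite-difference gradient loop, and the projection onto an $\ell_2$ ball are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS = 3              # n qubits -> state dimension 2^n
DIM = 2 ** N_QUBITS
K = 4                     # number of classifiers in the set
EPSILON = 0.15            # l2 budget for the universal perturbation

# Illustrative overhead from the paper's bound, ~ln(k)/2^n: it shrinks
# exponentially with the number of qubits.
for k, n in [(2, 4), (10, 8), (100, 10)]:
    print(f"k={k:>3}, n={n:>2}: ln(k)/2^n = {np.log(k) / 2 ** n:.5f}")

def random_unitary(dim):
    """Haar-random unitary via QR of a complex Ginibre matrix (phase-fixed)."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def prob_class0(x, U):
    """P(first qubit = |0>) after amplitude-encoding x and applying U."""
    psi = U @ (x / np.linalg.norm(x))
    return float(np.sum(np.abs(psi[: DIM // 2]) ** 2))

classifiers = [random_unitary(DIM) for _ in range(K)]
samples = rng.normal(size=(20, DIM))      # toy "legitimate" inputs

def objective(delta):
    """Average signed margin pushed across the 0.5 decision boundary."""
    total = 0.0
    for U in classifiers:
        for x in samples:
            s = 1.0 if prob_class0(x, U) > 0.5 else -1.0
            total += s * (0.5 - prob_class0(x + delta, U))
    return total / (K * len(samples))

# Finite-difference gradient ascent on the shared perturbation, projected
# back onto the epsilon-ball after every step.
delta, lr, fd = np.zeros(DIM), 0.5, 1e-4
for _ in range(150):
    grad = np.zeros(DIM)
    for d in range(DIM):
        e = np.zeros(DIM)
        e[d] = fd
        grad[d] = (objective(delta + e) - objective(delta - e)) / (2 * fd)
    delta += lr * grad
    if np.linalg.norm(delta) > EPSILON:
        delta *= EPSILON / np.linalg.norm(delta)

flips = sum(
    (prob_class0(x, U) > 0.5) != (prob_class0(x + delta, U) > 0.5)
    for U in classifiers
    for x in samples
)
print(f"one perturbation flips {flips} of {K * len(samples)} (classifier, sample) pairs")
```

In the paper's setting the classifiers are variational quantum circuits and the perturbations act on quantum input states; the toy above only mirrors the core logic of optimizing one shared perturbation against an entire set of classifiers at once.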
Related papers
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning in the landscape of quantum data.
We find that the training efficiency and generalization capabilities of quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z) - Universal adversarial perturbations for multiple classification tasks
with quantum classifiers [0.0]
Quantum adversarial machine learning studies the vulnerability of quantum learning systems against adversarial perturbations.
In this paper, we explore the quantum universal perturbations in the context of heterogeneous classification tasks.
We find that quantum classifiers achieving nearly state-of-the-art accuracy on two different classification tasks can both be conclusively deceived by a single carefully crafted universal perturbation.
arXiv Detail & Related papers (2023-06-21T02:02:41Z) - Classical Verification of Quantum Learning [42.362388367152256]
We develop a framework for classical verification of quantum learning.
We propose a new quantum data access model that we call "mixture-of-superpositions" quantum examples.
Our results demonstrate that the potential power of quantum data for learning tasks, while not unlimited, can be utilized by classical agents.
arXiv Detail & Related papers (2023-06-08T00:31:27Z) - Enhancing Quantum Adversarial Robustness by Randomized Encodings [10.059889429655582]
We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples.
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
arXiv Detail & Related papers (2022-12-05T19:00:08Z) - Certified Robustness of Quantum Classifiers against Adversarial Examples
through Quantum Noise [68.1992787416233]
We show that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
arXiv Detail & Related papers (2022-11-02T05:17:04Z) - Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
arXiv Detail & Related papers (2022-04-26T09:52:02Z) - Experimental quantum adversarial learning with programmable
superconducting qubits [15.24718195264974]
We report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits.
Our results reveal experimentally a crucial vulnerability aspect of quantum learning systems under adversarial scenarios.
arXiv Detail & Related papers (2022-04-04T18:00:00Z) - Experimental Quantum Generative Adversarial Networks for Image
Generation [93.06926114985761]
We experimentally achieve the learning and generation of real-world hand-written digit images on a superconducting quantum processor.
Our work provides guidance for developing advanced quantum generative models on near-term quantum devices.
arXiv Detail & Related papers (2020-10-13T06:57:17Z) - Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived (a toy sketch of this noise effect appears after this list).
This is the first quantum protocol that can be used against the most general adversaries.
arXiv Detail & Related papers (2020-03-20T17:56:14Z) - Machine learning transfer efficiencies for noisy quantum walks [62.997667081978825]
We show that the process of finding requirements on both the graph type and the quantum-system coherence can be automated.
The automation uses a convolutional neural network of a particular type that learns with which graph and under which coherence requirements a quantum advantage is possible.
Our results are important for demonstrating quantum advantage in experiments and pave the way toward automating scientific research and discovery.
arXiv Detail & Related papers (2020-01-15T18:36:53Z) - Quantum Adversarial Machine Learning [0.0]
Adversarial machine learning is an emerging field that focuses on studying vulnerabilities of machine learning approaches in adversarial settings.
In this paper, we explore different adversarial scenarios in the context of quantum machine learning.
We find that a quantum classifier that achieves nearly state-of-the-art accuracy can be conclusively deceived by adversarial examples.
arXiv Detail & Related papers (2019-12-31T19:00:12Z)
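Several of the defenses cited above (randomized encodings, depolarisation noise, random rotation noise) exploit the same mechanism: noise contracts measurement probabilities toward the uniform value, which shrinks the margin an adversary must overcome, at the price of also shrinking the clean confidence. Here is a minimal sketch of that contraction for a single-qubit depolarizing channel; the clean confidence $p = 0.8$ and the noise strengths are chosen purely for illustration:

```python
def depolarize_prob(p_clean, q):
    """After a depolarizing channel of strength q the state is replaced by
    the maximally mixed state with probability q, so any single-qubit
    outcome probability is pulled toward 1/2."""
    return (1 - q) * p_clean + q * 0.5

# An adversary must push the noisy probability across the 0.5 decision
# boundary; noise rescales every margin by (1 - q), so the perturbation
# needed to flip a prediction grows as q increases.
for q in (0.0, 0.2, 0.5):
    p_noisy = depolarize_prob(0.8, q)
    print(f"q={q:.1f}: noisy confidence = {p_noisy:.2f}, "
          f"margin |p-0.5| = {abs(p_noisy - 0.5):.2f}")
```

Roughly speaking, the certified-robustness bounds in the works above quantify how large this surviving margin must be for a prediction to be provably stable under any bounded perturbation.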
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.