Enhancing Quantum Adversarial Robustness by Randomized Encodings
- URL: http://arxiv.org/abs/2212.02531v1
- Date: Mon, 5 Dec 2022 19:00:08 GMT
- Title: Enhancing Quantum Adversarial Robustness by Randomized Encodings
- Authors: Weiyuan Gong, Dong Yuan, Weikang Li and Dong-Ling Deng
- Abstract summary: We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples.
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
- Score: 10.059889429655582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The interplay between quantum physics and machine learning gives rise to the
emergent frontier of quantum machine learning, where advanced quantum learning
models may outperform their classical counterparts in solving certain
challenging problems. However, quantum learning systems are vulnerable to
adversarial attacks: adding tiny carefully-crafted perturbations on legitimate
input samples can cause misclassifications. To address this issue, we propose a
general scheme to protect quantum learning systems from adversarial attacks by
randomly encoding the legitimate data samples through unitary or quantum error
correction encoders. In particular, we rigorously prove that both global and
local random unitary encoders lead to exponentially vanishing gradients (i.e.
barren plateaus) for any variational quantum circuits that aim to add
adversarial perturbations, independent of the input data and the inner
structures of adversarial circuits and quantum classifiers. In addition, we
prove a rigorous bound on the vulnerability of quantum classifiers under local
unitary adversarial attacks. We show that random black-box quantum error
correction encoders can protect quantum classifiers against local adversarial
noises and their robustness increases as we concatenate error correction codes.
To quantify the robustness enhancement, we adapt quantum differential privacy
as a measure of the prediction stability for quantum classifiers. Our results
establish versatile defense strategies for quantum classifiers against
adversarial perturbations, which provide valuable guidance to enhance the
reliability and security for both near-term and future quantum learning
technologies.
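To make the encoding defense concrete, below is a minimal numerical sketch. It is our own illustration, not code from the paper: the function names, the |0...0> sample, the Z observable, and the single-qubit X-rotation adversary are all illustrative choices. It draws a fresh Haar-random global encoder per run and estimates, over encoders, the variance of the gradient with respect to the adversary's rotation angle; the variance shrinking exponentially with qubit count is the barren-plateau effect the abstract describes.

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix.
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the QR phase ambiguity

def on_first_qubit(op, n):
    # Embed a single-qubit operator as op (x) I^(n-1).
    full = np.array(op, dtype=complex)
    for _ in range(n - 1):
        full = np.kron(full, np.eye(2))
    return full

def adversarial_gradient(psi, obs, gen):
    # d/dtheta <psi| A(th)^+ obs A(th) |psi> at theta = 0, where
    # A(th) = exp(-i th/2 gen); this equals (i/2) <psi|[gen, obs]|psi>.
    comm = gen @ obs - obs @ gen
    return np.real(0.5j * np.vdot(psi, comm @ psi))

rng = np.random.default_rng(7)
for n in range(2, 8):
    dim = 2 ** n
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                                  # legitimate sample |0...0>
    obs = on_first_qubit([[1, 0], [0, -1]], n)    # measured observable Z_0
    gen = on_first_qubit([[0, 1], [1, 0]], n)     # adversary's generator X_0
    grads = [adversarial_gradient(haar_unitary(dim, rng) @ psi, obs, gen)
             for _ in range(300)]
    print(f"n = {n}: Var[dC/dtheta] = {np.var(grads):.2e}")
```

For this particular adversary and observable the gradient reduces to the expectation of a fixed Pauli operator in a Haar-random state, whose variance is 1/(2^n + 1), so the printed variance should roughly halve with each added qubit, independent of the input sample.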
Related papers
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning in the landscape of quantum data.
We find that the training efficiency and generalization capabilities in quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- Adversarial Robustness Guarantees for Quantum Classifiers [0.4934360430803066]
We show that quantum properties of QML algorithms can confer fundamental protections against such attacks.
We leverage tools from many-body physics to identify the quantum sources of this protection.
arXiv Detail & Related papers (2024-05-16T18:00:01Z)
- Near-Term Distributed Quantum Computation using Mean-Field Corrections and Auxiliary Qubits [77.04894470683776]
We propose near-term distributed quantum computing schemes that involve limited information transfer and conservative entanglement production.
We build upon these concepts to produce an approximate circuit-cutting technique for the fragmented pre-training of variational quantum algorithms.
arXiv Detail & Related papers (2023-09-11T18:00:00Z)
- Universal adversarial perturbations for multiple classification tasks with quantum classifiers [0.0]
Quantum adversarial machine learning studies the vulnerability of quantum learning systems against adversarial perturbations.
In this paper, we explore the quantum universal perturbations in the context of heterogeneous classification tasks.
We find that quantum classifiers achieving almost state-of-the-art accuracy on two different classification tasks can both be conclusively deceived by a single carefully crafted universal perturbation.
arXiv Detail & Related papers (2023-06-21T02:02:41Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn the local message passing among nodes with a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even superior to, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise [68.1992787416233]
We show that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
arXiv Detail & Related papers (2022-11-02T05:17:04Z)
- Experimental quantum adversarial learning with programmable superconducting qubits [15.24718195264974]
We report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits.
Our results experimentally reveal a crucial vulnerability of quantum learning systems under adversarial scenarios.
arXiv Detail & Related papers (2022-04-04T18:00:00Z)
- Quantum Error Correction with Quantum Autoencoders [0.0]
We show how quantum neural networks can be trained to learn optimal strategies for active detection and correction of errors.
We highlight that the denoising capabilities of quantum autoencoders are not limited to the protection of specific states but extend to the entire logical codespace.
arXiv Detail & Related papers (2022-02-01T16:55:14Z)
- Circuit Symmetry Verification Mitigates Quantum-Domain Impairments [69.33243249411113]
We propose circuit-oriented symmetry verification techniques that are capable of verifying the commutativity of quantum circuits without knowledge of the quantum state.
In particular, we propose the spatio-temporal stabilizer (STS) technique, which generalizes the conventional quantum-domain formalism to circuit-oriented stabilizers.
arXiv Detail & Related papers (2021-12-27T21:15:35Z)
- Universal Adversarial Examples and Perturbations for Quantum Classifiers [0.0]
We study the universality of adversarial examples and perturbations for quantum classifiers.
We prove that for a set of $k$ classifiers with each receiving input data of $n$ qubits, an $O(\frac{k}{2^n})$ increase of the perturbation strength is enough to ensure a moderate universal adversarial risk.
arXiv Detail & Related papers (2021-02-15T19:00:09Z)
- Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived.
This is the first quantum protocol that can be used against the most general adversaries.
arXiv Detail & Related papers (2020-03-20T17:56:14Z)
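The noise-as-defense entries above connect back to the differential-privacy measure used in the main paper: a depolarizing channel caps how strongly any measurement outcome can depend on the input state, which is a quantum differential-privacy guarantee and therefore limits what any adversarial perturbation can change. Below is a minimal sketch of that cap; it is our own construction, not code from either paper, and the bound epsilon = ln(1 + d(1-p)/p) for rank-1 measurement outcomes follows the standard depolarizing-mechanism analysis.

```python
import numpy as np

def depolarize(rho, p):
    # Depolarizing channel: E_p(rho) = (1 - p) rho + p I/d.
    d = rho.shape[0]
    return (1.0 - p) * rho + p * np.eye(d) / d

def random_pure_state(dim, rng):
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
n, p = 3, 0.2
dim = 2 ** n
proj = np.zeros((dim, dim), dtype=complex)
proj[0, 0] = 1.0                # rank-1 measurement outcome |0><0|

# Worst empirical ratio of outcome probabilities over random input pairs.
worst = 1.0
for _ in range(2000):
    a, b = random_pure_state(dim, rng), random_pure_state(dim, rng)
    pa = np.real(np.trace(proj @ depolarize(np.outer(a, a.conj()), p)))
    pb = np.real(np.trace(proj @ depolarize(np.outer(b, b.conj()), p)))
    worst = max(worst, pa / pb, pb / pa)

# Analytic worst case for a rank-1 outcome: every output probability lies
# in [p/d, (1 - p) + p/d], so the ratio never exceeds 1 + d(1 - p)/p.
bound = 1.0 + dim * (1.0 - p) / p
print(f"worst sampled ratio = {worst:.2f} <= bound = {bound:.2f} "
      f"(epsilon = {np.log(bound):.2f})")
```

Bounded outcome-probability ratios mean bounded prediction changes under any input perturbation, which is the sense in which noise buys certified robustness in these works.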