Experimental quantum adversarial learning with programmable
superconducting qubits
- URL: http://arxiv.org/abs/2204.01738v1
- Date: Mon, 4 Apr 2022 18:00:00 GMT
- Title: Experimental quantum adversarial learning with programmable
superconducting qubits
- Authors: Wenhui Ren, Weikang Li, Shibo Xu, Ke Wang, Wenjie Jiang, Feitong Jin,
Xuhao Zhu, Jiachen Chen, Zixuan Song, Pengfei Zhang, Hang Dong, Xu Zhang,
Jinfeng Deng, Yu Gao, Chuanyu Zhang, Yaozu Wu, Bing Zhang, Qiujiang Guo,
Hekang Li, Zhen Wang, Jacob Biamonte, Chao Song, Dong-Ling Deng, H. Wang
- Abstract summary: We show the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits.
Our results reveal experimentally a crucial vulnerability aspect of quantum learning systems under adversarial scenarios.
- Score: 15.24718195264974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum computing promises to enhance machine learning and artificial
intelligence. Different quantum algorithms have been proposed to improve a wide
spectrum of machine learning tasks. Yet, recent theoretical works show that,
similar to traditional classifiers based on deep classical neural networks,
quantum classifiers would suffer from the vulnerability problem: adding tiny
carefully-crafted perturbations to the legitimate original data samples would
facilitate incorrect predictions at a notably high confidence level. This will
pose serious problems for future quantum machine learning applications in
safety and security-critical scenarios. Here, we report the first experimental
demonstration of quantum adversarial learning with programmable superconducting
qubits. We train quantum classifiers, which are built upon variational quantum
circuits consisting of ten transmon qubits featuring average lifetimes of 150
$\mu$s, and average fidelities of simultaneous single- and two-qubit gates
above 99.94% and 99.4% respectively, with both real-life images (e.g., medical
magnetic resonance imaging scans) and quantum data. We demonstrate that these
well-trained classifiers (with testing accuracy up to 99%) can be practically
deceived by small adversarial perturbations, whereas an adversarial training
process would significantly enhance their robustness to such perturbations. Our
results reveal experimentally a crucial vulnerability aspect of quantum
learning systems under adversarial scenarios and demonstrate an effective
defense strategy against adversarial attacks, providing a valuable guide
for quantum artificial intelligence applications with both near-term and future
quantum devices.
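To make the attack-and-defense loop described in the abstract concrete, below is a minimal NumPy sketch of an FGSM-style adversarial perturbation and adversarial training on a toy two-qubit variational classifier. The circuit, dataset, and hyperparameters are illustrative assumptions chosen for exposition; this is not the paper's ten-qubit experiment or training pipeline.

```python
# Toy variational quantum classifier: FGSM-style attack and adversarial
# training as defense. All design choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def ry(t):
    """Single-qubit RY rotation matrix."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

# CNOT with qubit 0 as control, in the |q0 q1> basis ordering.
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])

def classifier(x, theta):
    """Encode two features as RY angles, entangle, apply a trained RY
    layer, and return P(qubit 0 = |0>) as the class-0 score."""
    state = np.kron(ry(x[0]) @ [1.0, 0.0], ry(x[1]) @ [1.0, 0.0])
    state = np.kron(ry(theta[0]), ry(theta[1])) @ (CNOT @ state)
    return state[0] ** 2 + state[1] ** 2

def loss(x, y, theta):
    return (classifier(x, theta) - (1.0 - y)) ** 2

def num_grad(f, v, eps=1e-5):
    """Central-difference gradient (stands in for parameter-shift rules)."""
    g = np.zeros_like(v)
    for i in range(v.size):
        d = np.zeros_like(v)
        d[i] = eps
        g[i] = (f(v + d) - f(v - d)) / (2 * eps)
    return g

# Toy dataset: label decided by which rotation angle is larger.
X = rng.uniform(0.0, np.pi, size=(40, 2))
Y = (X[:, 0] < X[:, 1]).astype(float)

def fgsm(x, y, theta, eps_adv):
    """FGSM-style attack: step the input along the sign of its loss gradient."""
    return x + eps_adv * np.sign(num_grad(lambda v: loss(v, y, theta), x))

def train(adversarial, epochs=150, lr=0.5, eps_adv=0.3):
    theta = rng.uniform(0.0, np.pi, size=2)
    for _ in range(epochs):
        for x, y in zip(X, Y):
            if adversarial:  # defense: train on attacked inputs
                x = fgsm(x, y, theta, eps_adv)
            theta -= lr * num_grad(lambda t: loss(x, y, t), theta)
    return theta

def accuracy(theta, eps_adv=0.0):
    hits = 0
    for x, y in zip(X, Y):
        if eps_adv > 0.0:
            x = fgsm(x, y, theta, eps_adv)
        hits += (classifier(x, theta) < 0.5) == bool(y)
    return hits / len(X)

theta_plain = train(adversarial=False)
theta_robust = train(adversarial=True)
print("clean accuracy, plain training:      ", accuracy(theta_plain))
print("attacked accuracy, plain training:   ", accuracy(theta_plain, eps_adv=0.3))
print("attacked accuracy, adversarial train:", accuracy(theta_robust, eps_adv=0.3))
```

The same loop structure scales to the experimental setting: only the classifier (a hardware variational circuit) and the gradient estimator (parameter-shift measurements instead of finite differences) change.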
Related papers
- Quantum continual learning on a programmable superconducting processor [17.787742382926137]
We show that a quantum classifier can incrementally learn and retain knowledge across three distinct tasks.
Our results establish a viable strategy for empowering quantum learning systems with desirable adaptability to multiple sequential tasks.
arXiv Detail & Related papers (2024-09-15T13:16:56Z)
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning in the landscape of quantum data.
We find that training efficiency and generalization capability in quantum machine learning are exponentially suppressed as the number of qubits increases (a toy illustration of this concentration effect appears after this list).
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z)
- Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) with a hope to utilize quantum advantage to speed up IL.
We develop two QIL algorithms: quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z)
- Enhancing Quantum Adversarial Robustness by Randomized Encodings [10.059889429655582]
We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples (a minimal sketch of this idea appears after this list).
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
arXiv Detail & Related papers (2022-12-05T19:00:08Z)
- Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise [68.1992787416233]
We show that adding quantum random rotation noise can improve robustness in quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
arXiv Detail & Related papers (2022-11-02T05:17:04Z)
- Universal Adversarial Examples and Perturbations for Quantum Classifiers [0.0]
We study the universality of adversarial examples and perturbations for quantum classifiers.
We prove that for a set of $k$ classifiers with each receiving input data of $n$ qubits, an $O(\frac{k}{2^n})$ increase of the perturbation strength is enough to ensure a moderate universal adversarial risk.
arXiv Detail & Related papers (2021-02-15T19:00:09Z)
- Information Scrambling in Computationally Complex Quantum Circuits [56.22772134614514]
We experimentally investigate the dynamics of quantum scrambling on a 53-qubit quantum processor.
We show that while operator spreading is captured by an efficient classical model, operator entanglement requires exponentially scaled computational resources to simulate.
arXiv Detail & Related papers (2021-01-21T22:18:49Z)
- Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived (the contraction effect behind such bounds is sketched after this list).
This is the first quantum protocol that can be used against the most general adversaries.
arXiv Detail & Related papers (2020-03-20T17:56:14Z)
- Quantum Adversarial Machine Learning [0.0]
Adversarial machine learning is an emerging field that studies the vulnerabilities of machine learning approaches in adversarial settings.
In this paper, we explore different adversarial scenarios in the context of quantum machine learning.
We find that a quantum classifier that achieves nearly the state-of-the-art accuracy can be conclusively deceived by adversarial examples.
arXiv Detail & Related papers (2019-12-31T19:00:12Z)
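For "The curse of random quantum data" above, the suppression it describes rests on a concentration effect that is easy to reproduce: fidelity-kernel values $|\langle\psi|\phi\rangle|^2$ between Haar-random $n$-qubit states concentrate around $1/2^n$, so kernel matrices built from random quantum data become exponentially close to the identity. A minimal sketch (my construction, not the paper's analysis):

```python
# Fidelity-kernel concentration for Haar-random states: the mean kernel
# value between independent random n-qubit states matches 1/2^n.
import numpy as np

rng = np.random.default_rng(1)

def haar_state(n_qubits):
    """Haar-random pure state: normalized complex Gaussian vector."""
    d = 2 ** n_qubits
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

for n in [2, 4, 6, 8, 10]:
    vals = [abs(np.vdot(haar_state(n), haar_state(n))) ** 2
            for _ in range(500)]
    print(f"n={n:2d}: mean kernel value {np.mean(vals):.2e} "
          f"(1/2^n = {1 / 2 ** n:.2e})")
```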
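For "Enhancing Quantum Adversarial Robustness by Randomized Encodings", here is a minimal illustration of the underlying mechanism (the Haar-unitary sampler and toy observable are my assumptions, not the authors' protocol): once each query encodes the data state with a freshly sampled random unitary, the expected classifier score collapses to the input-independent Haar average $\mathrm{tr}(O)/d$, washing out the structure a gradient-based adversary would exploit.

```python
# Randomized-encoding sketch: a fresh Haar-random unitary per query makes
# the classifier's expected score independent of the input state.
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(d):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix the phase convention

def classify(state, observable):
    """Toy classifier score: expectation value of a fixed observable."""
    return np.real(np.conj(state) @ observable @ state)

n = 3
d = 2 ** n
obs = np.diag(rng.choice([-1.0, 1.0], size=d))  # random +/-1 diagonal observable
psi = np.zeros(d, dtype=complex)
psi[0] = 1.0                                    # legitimate data state |0...0>

plain = classify(psi, obs)
encoded = [classify(haar_unitary(d) @ psi, obs) for _ in range(2000)]
print("plain score:        ", plain)
print("encoded mean score: ", np.mean(encoded))
print("Haar average tr(O)/d:", np.trace(obs).real / d)
```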
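Finally, for "Quantum noise protects quantum classifiers against adversaries": the mechanism such noise-based certificates build on can be seen in a two-level toy with illustrative numbers (not the paper's actual bound). A depolarizing channel $\rho \to (1-p)\rho + p\,I/d$ contracts the gap between any two class probabilities by the factor $(1-p)$, which is what lets a fixed decision margin absorb a quantified amount of adversarial perturbation.

```python
# Depolarizing noise contracts class-probability gaps by (1 - p).
import numpy as np

def depolarize(rho, p):
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Two states an adversary tries to push across the decision boundary.
rho_a = np.diag([0.9, 0.1])    # confidently class 0
rho_b = np.diag([0.6, 0.4])    # perturbed, still class 0

measure = np.diag([1.0, 0.0])  # projector onto the class-0 outcome

for p in [0.0, 0.25, 0.5, 0.75]:
    pa = np.trace(depolarize(rho_a, p) @ measure).real
    pb = np.trace(depolarize(rho_b, p) @ measure).real
    print(f"p={p:.2f}: P0(a)={pa:.3f}  P0(b)={pb:.3f}  gap={pa - pb:.3f}")
```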