Classical Autoencoder Distillation of Quantum Adversarial Manipulations
- URL: http://arxiv.org/abs/2504.09216v1
- Date: Sat, 12 Apr 2025 13:51:08 GMT
- Title: Classical Autoencoder Distillation of Quantum Adversarial Manipulations
- Authors: Amena Khatun, Muhammad Usman
- Abstract summary: We report a new technique for the distillation of quantum manipulated image datasets by using classical autoencoders. Our work highlights a promising pathway to achieve fully robust quantum machine learning in both classical and quantum adversarial scenarios.
- Score: 1.4598877063396687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum neural networks have been proven robust against classical adversarial attacks, but their vulnerability against quantum adversarial attacks is still a challenging problem. Here we report a new technique for the distillation of quantum manipulated image datasets by using classical autoencoders. Our technique recovers quantum classifier accuracies when tested under standard machine learning benchmarks utilising MNIST and FMNIST image datasets, and PGD and FGSM adversarial attack settings. Our work highlights a promising pathway to achieve fully robust quantum machine learning in both classical and quantum adversarial scenarios.
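As a concrete illustration of the pipeline the abstract describes, below is a minimal PyTorch sketch in which a classical denoising autoencoder is trained to map FGSM/PGD-perturbed MNIST/FMNIST images back toward their clean counterparts before they reach the classifier. The architecture, hyperparameters, and helper names (`DenoisingAutoencoder`, `fgsm_attack`, `pgd_attack`, `distillation_step`) are illustrative assumptions rather than the paper's actual implementation, and an ordinary classical `classifier` stands in for the quantum classifier.

```python
# Minimal sketch of autoencoder-based distillation of adversarial inputs.
# All architecture and hyperparameter choices here are assumptions for
# illustration; the paper's actual models are not specified in the abstract.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Small fully connected autoencoder for 28x28 MNIST/FMNIST images."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
            nn.Unflatten(1, (1, 28, 28)),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fgsm_attack(model, x, y, eps=0.1):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def pgd_attack(model, x, y, eps=0.1, alpha=0.01, steps=10):
    """PGD: iterated FGSM steps projected back onto the eps-ball around x."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project onto eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def distillation_step(autoencoder, classifier, opt, x_clean, y):
    """Train the autoencoder to reconstruct clean images from attacked ones."""
    x_adv = fgsm_attack(classifier, x_clean, y)      # adversarial inputs
    recon = autoencoder(x_adv)                       # "distilled" images
    loss = nn.functional.mse_loss(recon, x_clean)    # pull back toward clean data
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# At test time the classifier is evaluated on autoencoder(x_adv) rather than
# on x_adv directly; recovering clean-input accuracy under this composition
# is the effect the abstract reports for both PGD and FGSM settings.
```

The reconstruction loss toward clean images is one plausible reading of "distillation" here: the autoencoder learns to strip the adversarial manipulation from the data before it ever reaches the (quantum) classifier, so the classifier itself needs no retraining.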
Related papers
- Realizing Quantum Adversarial Defense on a Trapped-ion Quantum Processor [3.1858340237924776]
We implement a data re-uploading-based quantum classifier on an ion-trap quantum processor. We demonstrate its superior robustness on the MNIST dataset.
arXiv Detail & Related papers (2025-03-04T09:22:59Z) - A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z) - Hybrid quantum transfer learning for crack image classification on NISQ hardware [62.997667081978825]
We present an application of quantum transfer learning for detecting cracks in gray value images.
We compare the performance and training time of PennyLane's standard qubits with IBM's qasm_simulator and real backends.
arXiv Detail & Related papers (2023-07-31T14:45:29Z) - Enhancing Quantum Adversarial Robustness by Randomized Encodings [10.059889429655582]
We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples.
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
arXiv Detail & Related papers (2022-12-05T19:00:08Z) - Benchmarking Adversarially Robust Quantum Machine Learning at Scale [20.76790069530767]
We benchmark the robustness of quantum ML networks at scale by performing rigorous training for both simple and complex image datasets.
Our results show that QVCs offer notably enhanced robustness against classical adversarial attacks.
By combining quantum and classical network outcomes, we propose a novel adversarial attack detection technology.
arXiv Detail & Related papers (2022-11-23T03:26:16Z) - QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learns the local message passing among nodes with the sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even superior to, the classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z) - Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise [68.1992787416233]
We show that adding quantum random rotation noise can improve robustness in quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
arXiv Detail & Related papers (2022-11-02T05:17:04Z) - Experimental quantum adversarial learning with programmable
superconducting qubits [15.24718195264974]
We report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits.
Our results experimentally reveal a crucial vulnerability of quantum learning systems under adversarial scenarios.
arXiv Detail & Related papers (2022-04-04T18:00:00Z) - Quantum Deformed Neural Networks [83.71196337378022]
We develop a new quantum neural network layer designed to run efficiently on a quantum computer.
It can be simulated on a classical computer when restricted in the way it entangles input states.
arXiv Detail & Related papers (2020-10-21T09:46:12Z) - Quantum noise protects quantum classifiers against adversaries [120.08771960032033]
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived.
This is the first quantum protocol that can be used against the most general adversaries.
arXiv Detail & Related papers (2020-03-20T17:56:14Z) - Quantum Adversarial Machine Learning [0.0]
Adversarial machine learning is an emerging field that focuses on studying vulnerabilities of machine learning approaches in adversarial settings.
In this paper, we explore different adversarial scenarios in the context of quantum machine learning.
We find that a quantum classifier that achieves nearly state-of-the-art accuracy can be conclusively deceived by adversarial examples.
arXiv Detail & Related papers (2019-12-31T19:00:12Z)