AdvQuNN: A Methodology for Analyzing the Adversarial Robustness of Quanvolutional Neural Networks
- URL: http://arxiv.org/abs/2403.05596v2
- Date: Fri, 21 Jun 2024 18:31:47 GMT
- Title: AdvQuNN: A Methodology for Analyzing the Adversarial Robustness of Quanvolutional Neural Networks
- Authors: Walid El Maouaki, Alberto Marchisio, Taoufik Said, Mohamed Bennai, Muhammad Shafique
- Abstract summary: This study aims to rigorously assess the influence of quantum circuit architecture on the resilience of QuNN models.
Our results show that, compared to classical convolutional networks, QuNNs achieve up to 60% higher robustness on the MNIST dataset and 40% on the FMNIST dataset.
- Score: 3.9554540293311864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in quantum computing have led to the development of hybrid quantum neural networks (HQNNs) that employ a mixed set of quantum and classical layers, such as Quanvolutional Neural Networks (QuNNs). While several works have shown security threats to classical neural networks, such as adversarial attacks, their impact on QuNNs is still relatively unexplored. This work tackles this problem by designing AdvQuNN, a specialized methodology to investigate the robustness of HQNNs like QuNNs against adversarial attacks. It employs different types of ansatzes as parametrized quantum circuits and different types of adversarial attacks. This study aims to rigorously assess the influence of quantum circuit architecture on the resilience of QuNN models, which opens up new pathways for enhancing the robustness of QuNNs and advancing the field of quantum cybersecurity. Our results show that, compared to classical convolutional networks, QuNNs achieve up to 60% higher robustness on the MNIST dataset and 40% on the FMNIST dataset.
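As a concrete illustration of the attack side of such a methodology, the sketch below implements the fast gradient sign method (FGSM), a standard adversarial attack of the kind robustness studies in this area evaluate. The linear softmax classifier, weights, and input are hypothetical stand-ins (not the paper's model), chosen so the example stays self-contained:

```python
import numpy as np

# Illustrative FGSM attack against a toy linear softmax classifier.
# The model and data below are hypothetical stand-ins for a trained
# (Qu)NN; only the attack mechanics are the point of this sketch.

rng = np.random.default_rng(0)

W = rng.normal(size=(10, 784))        # toy weights: 10 classes, 28x28 inputs
b = np.zeros(10)
x = rng.uniform(0.0, 1.0, size=784)   # a fake "image" in [0, 1]
y = 3                                 # its (assumed) true label

def loss_and_grad(x, y):
    """Cross-entropy loss of the linear model and its gradient w.r.t. x."""
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[y])
    grad_logits = p.copy()
    grad_logits[y] -= 1.0             # softmax cross-entropy gradient
    grad_x = W.T @ grad_logits        # chain rule through the linear layer
    return loss, grad_x

eps = 0.1                             # L-infinity perturbation budget
loss, g = loss_and_grad(x, y)
# FGSM: one signed-gradient step, clipped back to the valid pixel range.
x_adv = np.clip(x + eps * np.sign(g), 0.0, 1.0)
loss_adv, _ = loss_and_grad(x_adv, y)
# Since the loss is convex in x here, the step cannot decrease it.
```

Robustness is then measured by how much the model's accuracy drops on such perturbed inputs as `eps` grows.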
Related papers
- Designing Robust Quantum Neural Networks: Exploring Expressibility, Entanglement, and Control Rotation Gate Selection for Enhanced Quantum Models [3.9554540293311864]
This study investigates the robustness of Quanvolutional Neural Networks (QuNNs) in comparison to their classical counterparts.
We develop a novel methodology that utilizes three quantum circuit metrics: expressibility, entanglement capability, and controlled rotation gate selection.
Our results demonstrate that QuNNs exhibit up to 60% greater robustness on the MNIST dataset and 40% on the Fashion-MNIST dataset compared to CNNs.
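The expressibility metric this entry relies on can be sketched numerically: sample pairs of states from an ansatz, histogram their fidelities, and measure the KL divergence from the Haar distribution (uniform on [0, 1] for a single qubit). The single-qubit RY-only circuit and sample sizes below are illustrative choices, not the paper's setup:

```python
import numpy as np

# Toy expressibility estimate: KL divergence of an ansatz's state-fidelity
# histogram from the Haar fidelity distribution (lower = more expressible).

rng = np.random.default_rng(1)

def fidelities_ry_only(n_pairs):
    """Fidelities between pairs of RY(theta)|0> states -- a circle on the
    Bloch sphere, i.e. a deliberately low-expressibility ansatz."""
    t1 = rng.uniform(0, 2 * np.pi, n_pairs)
    t2 = rng.uniform(0, 2 * np.pi, n_pairs)
    return np.cos((t1 - t2) / 2) ** 2

def fidelities_haar(n_pairs):
    """Fidelities between pairs of Haar-random single-qubit states."""
    def states(k):
        v = rng.normal(size=(k, 2)) + 1j * rng.normal(size=(k, 2))
        return v / np.linalg.norm(v, axis=1, keepdims=True)
    a, b = states(n_pairs), states(n_pairs)
    return np.abs(np.sum(a.conj() * b, axis=1)) ** 2

def kl_to_uniform(f, bins=20):
    """KL divergence of the empirical fidelity histogram from uniform,
    which is the exact Haar fidelity distribution for one qubit."""
    counts, _ = np.histogram(f, bins=bins, range=(0.0, 1.0))
    p = (counts + 1e-12) / (counts.sum() + bins * 1e-12)
    q = np.full(bins, 1.0 / bins)
    return float(np.sum(p * np.log(p / q)))

expr_ry = kl_to_uniform(fidelities_ry_only(5000))    # far from Haar
expr_haar = kl_to_uniform(fidelities_haar(5000))     # close to Haar
```

The RY-only circuit scores a visibly larger divergence than Haar sampling, which is exactly the ranking the expressibility metric is designed to expose.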
arXiv Detail & Related papers (2024-11-03T21:18:07Z)
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
The Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet) are developed.
LQNet and CTRQNet achieve accuracy increases as high as 40% on CIFAR-10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z)
- From Graphs to Qubits: A Critical Review of Quantum Graph Neural Networks [56.51893966016221]
Quantum Graph Neural Networks (QGNNs) represent a novel fusion of quantum computing and Graph Neural Networks (GNNs).
This paper critically reviews the state-of-the-art in QGNNs, exploring various architectures.
We discuss their applications across diverse fields such as high-energy physics, molecular chemistry, finance and earth sciences, highlighting the potential for quantum advantage.
arXiv Detail & Related papers (2024-08-12T22:53:14Z)
- RobQuNNs: A Methodology for Robust Quanvolutional Neural Networks against Adversarial Attacks [3.9554540293311864]
Quanvolutional Neural Networks (QuNNs) integrate quantum and classical layers.
This study introduces RobQuNN, a new methodology to enhance the robustness of QuNNs against adversarial attacks.
The findings reveal that QuNNs exhibit up to 60% higher robustness compared to classical networks for the MNIST dataset.
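The quanvolutional layer these entries study can be sketched without any quantum SDK: a small image patch is angle-encoded into qubit rotations, passed through a fixed circuit, and read out as expectation values that feed the classical layers. The 2x2 patch size, random circuit, and encoding below are illustrative assumptions simulated with a plain NumPy statevector:

```python
import numpy as np

# Minimal quanvolutional-filter sketch: a 4-qubit statevector simulation
# in pure NumPy. Each 2x2 patch becomes four RY angles; a fixed random
# unitary stands in for the (untrained) ansatz; four <Z> expectation
# values are the classical feature-map outputs.

rng = np.random.default_rng(42)

def kron_all(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0 + 0j, -1.0 + 0j])

# Fixed random 16x16 unitary (QR of a complex Gaussian matrix).
A = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
U_random, _ = np.linalg.qr(A)

def quanv_filter(patch):
    """Map a 2x2 patch (values in [0, 1]) to 4 expectation values."""
    angles = np.pi * np.asarray(patch, dtype=float).ravel()  # angle encoding
    state = np.zeros(16, dtype=complex)
    state[0] = 1.0                                  # |0000>
    state = kron_all([ry(a) for a in angles]) @ state
    state = U_random @ state
    feats = []
    for q in range(4):                              # measure Z on each qubit
        ops = [I2] * 4
        ops[q] = Z
        feats.append(float(np.real(state.conj() @ (kron_all(ops) @ state))))
    return np.array(feats)

features = quanv_filter([[0.1, 0.5], [0.9, 0.0]])
```

Sliding this filter over an image, one output channel per qubit, yields the feature maps a QuNN's classical layers then process.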
arXiv Detail & Related papers (2024-07-04T12:13:52Z)
- Parallel Proportional Fusion of Spiking Quantum Neural Network for Optimizing Image Classification [10.069224006497162]
We introduce a novel architecture termed Parallel Proportional Fusion of Quantum and Spiking Neural Networks (PPF-QSNN).
The proposed PPF-QSNN outperforms both the existing spiking neural network and the serial quantum neural network across metrics such as accuracy, loss, and robustness.
This study lays the groundwork for the advancement and application of quantum advantage in artificial intelligence computations.
arXiv Detail & Related papers (2024-04-01T10:35:35Z)
- Studying the Impact of Quantum-Specific Hyperparameters on Hybrid Quantum-Classical Neural Networks [4.951980887762045]
Hybrid quantum-classical neural networks (HQNNs) represent a promising solution that combines the strengths of classical machine learning with quantum computing capabilities.
In this paper, we investigate the impact of quantum-specific hyperparameter variations on different HQNN models for image classification tasks, implemented in the PennyLane framework.
We aim to uncover intuitive and counter-intuitive learning patterns of HQNN models within granular levels of controlled quantum perturbations, to form a sound basis for their correlation to accuracy and training time.
arXiv Detail & Related papers (2024-02-16T11:44:25Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- ResQNets: A Residual Approach for Mitigating Barren Plateaus in Quantum Neural Networks [0.0]
The barren plateau problem in quantum neural networks (QNNs) is a significant challenge that hinders the practical success of QNNs.
In this paper, we introduce residual quantum neural networks (ResQNets) as a solution to address this problem.
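The barren plateau effect this entry targets can be demonstrated numerically: in a random layered circuit, the variance of a parameter-shift gradient shrinks as the qubit count grows. The circuit shape (per-qubit RY rotations plus a CZ chain), depth, and sample counts below are illustrative choices, simulated as a pure NumPy statevector:

```python
import numpy as np

# Toy barren-plateau demonstration: gradient variance of <Z0> in a random
# layered circuit, estimated by the parameter-shift rule, collapses as the
# number of qubits increases.

rng = np.random.default_rng(7)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def cz_chain_diag(n):
    """Diagonal of CZ gates between neighbouring qubits."""
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        for q in range(n - 1):
            if bits[q] and bits[q + 1]:
                d[b] *= -1.0
    return d

def expval_z0(thetas, n):
    """<Z on qubit 0> after L layers of per-qubit RY + a CZ chain."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    cz = cz_chain_diag(n)
    for layer in thetas:                        # thetas has shape (L, n)
        state = kron_all([ry(t) for t in layer]) @ state
        state = cz * state
    z0 = np.repeat([1.0, -1.0], 2 ** (n - 1))   # Z on the first qubit
    return float(np.sum(z0 * state ** 2))

def grad_variance(n, layers=12, samples=200):
    """Variance of d<Z0>/d theta[0,0] over random parameter draws,
    computed with the parameter-shift rule."""
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, size=(layers, n))
        plus, minus = th.copy(), th.copy()
        plus[0, 0] += np.pi / 2
        minus[0, 0] -= np.pi / 2
        grads.append(0.5 * (expval_z0(plus, n) - expval_z0(minus, n)))
    return float(np.var(grads))

var_small = grad_variance(2)   # 2 qubits: gradients still informative
var_large = grad_variance(6)   # 6 qubits: variance already much smaller
```

The shrinking variance is what starves gradient-based training; residual connections of the ResQNets kind aim to keep it from vanishing.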
arXiv Detail & Related papers (2023-05-05T13:33:43Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the neural tangent kernel (NTK) has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from the severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs due to vanishing gradients, whose rate decays exponentially with the number of input qubits.
We study QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.