Exploring the Vulnerabilities of Machine Learning and Quantum Machine
Learning to Adversarial Attacks using a Malware Dataset: A Comparative
Analysis
- URL: http://arxiv.org/abs/2305.19593v1
- Date: Wed, 31 May 2023 06:31:42 GMT
- Title: Exploring the Vulnerabilities of Machine Learning and Quantum Machine
Learning to Adversarial Attacks using a Malware Dataset: A Comparative
Analysis
- Authors: Mst Shapna Akter, Hossain Shahriar, Iysa Iqbal, MD Hossain, M.A.
Karim, Victor Clincy, Razvan Voicu
- Abstract summary: Machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems.
Their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications.
We present a comparative analysis of the vulnerability of ML and QML models to adversarial attacks using a malware dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The burgeoning fields of machine learning (ML) and quantum machine learning
(QML) have shown remarkable potential in tackling complex problems across
various domains. However, their susceptibility to adversarial attacks raises
concerns when deploying these systems in security-sensitive applications. In
this study, we present a comparative analysis of the vulnerability of ML and
QML models, specifically conventional neural networks (NN) and quantum neural
networks (QNN), to adversarial attacks using a malware dataset. We utilize a
software supply chain attack dataset known as ClaMP and develop two distinct
models for QNN and NN, employing PennyLane for quantum implementations and
TensorFlow and Keras for traditional implementations. Our methodology involves
crafting adversarial samples by introducing random noise to a small portion of
the dataset and evaluating the impact on the models' performance using accuracy,
precision, recall, and F1 score metrics. Based on our observations, both ML and
QML models exhibit vulnerability to adversarial attacks. While the QNN's
accuracy decreases more significantly than the NN's after the attack, it
demonstrates better performance in terms of precision and recall, indicating
higher resilience in detecting true positives under adversarial conditions. We
also find that adversarial samples crafted for one model type can impair the
performance of the other, highlighting the need for robust defense mechanisms.
Our study serves as a foundation for future research focused on enhancing the
security and resilience of ML and QML models, particularly QNN, given its
recent advancements. A more extensive range of experiments will be conducted to
better understand the performance and robustness of both models in the face of
adversarial attacks.
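As a rough illustration of the evaluation pipeline described in the abstract, the sketch below trains a small Keras classifier and measures how random-noise perturbations affect accuracy, precision, recall, and F1 score. It assumes the ClaMP features have already been extracted into a numeric matrix X with binary labels y; the layer sizes, noise scale, and perturbed fraction are illustrative placeholders, not the authors' reported settings.

    # Hedged sketch: ClaMP preprocessing, architecture, and noise parameters are assumptions.
    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    def build_nn(n_features):
        """Small Keras feed-forward classifier standing in for the paper's NN."""
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_features,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    def add_random_noise(X, fraction=0.1, scale=0.1, seed=0):
        """Craft 'adversarial' samples by adding Gaussian noise to a random subset of rows."""
        rng = np.random.default_rng(seed)
        X_adv = X.copy()
        idx = rng.choice(len(X), size=int(fraction * len(X)), replace=False)
        X_adv[idx] += rng.normal(0.0, scale, size=X_adv[idx].shape)
        return X_adv

    def evaluate(model, X, y):
        """Report the four metrics used in the study."""
        y_pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
        return {"accuracy": accuracy_score(y, y_pred),
                "precision": precision_score(y, y_pred),
                "recall": recall_score(y, y_pred),
                "f1": f1_score(y, y_pred)}

    # X, y = ...  # ClaMP features and binary labels; preprocessing not shown
    # X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    # nn = build_nn(X_tr.shape[1])
    # nn.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0)
    # print("clean:", evaluate(nn, X_te, y_te))
    # print("perturbed:", evaluate(nn, add_random_noise(X_te), y_te))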
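For the quantum side, a minimal PennyLane variational classifier along the following lines could stand in for the QNN; the qubit count, angle-embedding encoding, entangler ansatz, and square-loss training loop are assumptions made for illustration rather than the architecture reported in the paper.

    # Hedged sketch of a hybrid quantum classifier; hyperparameters are illustrative only.
    import pennylane as qml
    from pennylane import numpy as pnp

    n_qubits, n_layers = 4, 2
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(weights, features):
        # Encode a low-dimensional projection of the malware features as rotation angles.
        qml.AngleEmbedding(features, wires=range(n_qubits))
        # Trainable entangling layers acting as the variational classifier.
        qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    def predict(weights, bias, features):
        """Shift the expectation value in [-1, 1] by a trainable bias to get a class score."""
        return circuit(weights, features) + bias

    def square_loss(weights, bias, X, y):
        """Mean squared error against labels encoded as {-1, +1}."""
        preds = pnp.stack([predict(weights, bias, x) for x in X])
        return pnp.mean((y - preds) ** 2)

    weight_shape = qml.BasicEntanglerLayers.shape(n_layers=n_layers, n_wires=n_qubits)
    weights = pnp.random.uniform(0, pnp.pi, size=weight_shape, requires_grad=True)
    bias = pnp.array(0.0, requires_grad=True)
    opt = qml.GradientDescentOptimizer(stepsize=0.1)

    # for step in range(50):  # X_train: (n, n_qubits) features scaled to [0, pi]; y_train in {-1, +1}
    #     weights, bias = opt.step(lambda w, b: square_loss(w, b, X_train, y_train), weights, bias)

Perturbed inputs produced by add_random_noise above can be fed to either model, which is one way the cross-model transfer of adversarial samples mentioned in the abstract could be probed.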
Related papers
- Exploring the Robustness and Transferability of Patch-Based Adversarial Attacks in Quantized Neural Networks [3.962831477787584]
Quantized neural networks (QNNs) are increasingly used for efficient deployment of deep learning models on resource-constrained platforms.
While quantization reduces model size and computational demands, its impact on adversarial robustness remains inadequately addressed.
Patch-based attacks, characterized by localized, high-visibility perturbations, pose significant security risks due to their transferability and resilience.
arXiv Detail & Related papers (2024-11-22T07:05:35Z)
- Adversarial Poisoning Attack on Quantum Machine Learning Models [2.348041867134616]
We introduce a quantum indiscriminate data poisoning attack, QUID.
QUID achieves up to 92% accuracy degradation in model performance compared to baseline models.
We also tested QUID against state-of-the-art classical defenses, with accuracy degradation still exceeding 50%.
arXiv Detail & Related papers (2024-11-21T18:46:45Z)
- Computable Model-Independent Bounds for Adversarial Quantum Machine Learning [4.857505043608425]
We introduce the first approximate lower bound on adversarial error for evaluating model resilience against quantum-based adversarial attacks.
In the best case, the experimental error is only 10% above the estimated bound, offering evidence of the inherent robustness of quantum models.
arXiv Detail & Related papers (2024-11-11T10:56:31Z)
- Evaluation of machine learning architectures on the quantification of epistemic and aleatoric uncertainties in complex dynamical systems [0.0]
Uncertainty Quantification (UQ) is a self-assessed estimate of the model error.
We examine several machine learning techniques, including both Gaussian processes and a family of UQ-augmented neural networks.
We evaluate UQ accuracy (distinct from model accuracy) using two metrics: the distribution of normalized residuals on validation data, and the distribution of estimated uncertainties.
arXiv Detail & Related papers (2023-06-27T02:35:25Z)
- Software Supply Chain Vulnerabilities Detection in Source Code: Performance Comparison between Traditional and Quantum Machine Learning Algorithms [9.82923372621617]
Software supply chain (SSC) attacks lead to vulnerabilities in software products that target downstream customers and even the stakeholders involved.
In this paper, we conduct a comparative analysis between quantum neural networks (QNN) and conventional neural networks (NN) with a software supply chain attack dataset known as ClaMP.
Our goal is to distinguish the performance of the QNN from that of the NN; to conduct the experiment, we develop two different models, using PennyLane for the quantum implementation and Keras for the traditional one.
arXiv Detail & Related papers (2023-05-31T06:06:28Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.