Unscrambling the Rectification of Adversarial Attacks Transferability
across Computer Networks
- URL: http://arxiv.org/abs/2311.03373v1
- Date: Thu, 26 Oct 2023 22:36:24 GMT
- Title: Unscrambling the Rectification of Adversarial Attacks Transferability
across Computer Networks
- Authors: Ehsan Nowroozi, Samaneh Ghelichkhani, Imran Haider and Ali
Dehghantanha
- Abstract summary: Convolutional neural network (CNN) models play a vital role in achieving state-of-the-art performance.
CNNs can be compromised because of their susceptibility to adversarial attacks.
We present a novel and comprehensive method to improve the strength of attacks and assess the transferability of adversarial examples in CNNs.
- Score: 4.576324217026666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural network (CNN) models play a vital role in achieving
state-of-the-art performance in various technological fields. CNNs are not
limited to Natural Language Processing (NLP) or Computer Vision (CV) but also
have substantial applications in other technological domains, particularly in
cybersecurity. The reliability of CNN models can be compromised because of
their susceptibility to adversarial attacks, which can be generated
effortlessly, applied easily, and transferred to real-world scenarios.
In this paper, we present a novel and comprehensive method to improve the
strength of attacks and assess the transferability of adversarial examples in
CNNs when such strength changes, as well as whether the transferability
property issue exists in computer network applications. In the context of our
study, we initially examined six distinct attack methods: the Carlini and
Wagner (C&W), Fast Gradient Sign Method (FGSM), Iterative Fast Gradient Sign
Method (I-FGSM), Jacobian-based Saliency Map Attack (JSMA), Limited-memory
Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), and Projected Gradient Descent (PGD) attacks.
We applied these attack techniques to two popular datasets: the CIC and UNSW
datasets. The outcomes of our experiments demonstrate that an improvement in
transferability occurs in the targeted scenarios for FGSM, JSMA, L-BFGS, and
other attacks. Our findings further indicate that the security threats posed
by adversarial examples, even in computer network applications, necessitate the
development of novel defense mechanisms to enhance the security of DL-based
techniques.
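The paper studies how adversarial examples crafted against one CNN carry over to another. As a rough illustration of that workflow (not the authors' code), the sketch below crafts one-step FGSM examples on a surrogate model and measures the fraction that also fool a separate target model; `surrogate_model`, `target_model`, `epsilon`, and the [0, 1] feature scaling are illustrative assumptions.

```python
# Minimal sketch: craft FGSM adversarial examples on a surrogate CNN and
# measure how well they transfer to a separate target CNN. Model and data
# names are placeholders, not details taken from the paper.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # assumes features scaled to [0, 1]

@torch.no_grad()
def transfer_rate(target_model, x_adv, y):
    """Fraction of adversarial examples that also fool the target model."""
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Usage (hypothetical): x, y are a batch of network-flow features and labels.
# x_adv = fgsm_attack(surrogate_model, x, y, epsilon=0.05)
# print("transferability:", transfer_rate(target_model, x_adv, y))
```

Sweeping epsilon, or swapping FGSM for I-FGSM, PGD, or the other listed attacks, is one way to probe how attack strength affects the transfer rate, which is the question the abstract raises.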
Related papers
- Impact of White-Box Adversarial Attacks on Convolutional Neural Networks [0.6138671548064356]
We investigate the susceptibility of Convolutional Neural Networks (CNNs) to white-box adversarial attacks.
Our study provides insights into the robustness of CNNs against adversarial threats.
arXiv Detail & Related papers (2024-10-02T21:24:08Z) - A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats [3.560574387648533]
We propose a one-class classification-driven IDS system structured on two tiers.
The first tier distinguishes between normal activities and attacks/threats, while the second tier determines if the detected attack is known or unknown.
This model not only identifies unseen attacks but also uses them for retraining by clustering them.
arXiv Detail & Related papers (2024-03-17T12:26:30Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Untargeted White-box Adversarial Attack with Heuristic Defence Methods
in Real-time Deep Learning based Network Intrusion Detection System [0.0]
In Adversarial Machine Learning (AML), malicious actors aim to fool the Machine Learning (ML) and Deep Learning (DL) models to produce incorrect predictions.
AML is an emerging research domain, and the in-depth study of adversarial attacks has become a necessity.
We implement four powerful adversarial attack techniques, namely the Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W), in a NIDS.
arXiv Detail & Related papers (2023-10-05T06:32:56Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries, remains effective in real-world data scenarios, and incurs an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - Demystifying the Transferability of Adversarial Attacks in Computer
Networks [23.80086861061094]
CNN-based models are subject to various adversarial attacks.
Some adversarial examples could potentially still be effective against different unknown models.
This paper assesses the robustness of CNN-based models against adversarial transferability.
arXiv Detail & Related papers (2021-10-09T07:20:44Z) - Improving Neural Network Robustness through Neighborhood Preserving
Layers [0.751016548830037]
We demonstrate a novel neural network architecture which can incorporate such layers and also can be trained efficiently.
We empirically show that our designed network architecture is more robust against state-of-the-art gradient-descent-based attacks.
arXiv Detail & Related papers (2021-01-28T01:26:35Z) - Measurement-driven Security Analysis of Imperceptible Impersonation
Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - Transferable Perturbations of Deep Feature Distributions [102.94094966908916]
This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions.
We achieve state-of-the-art targeted black-box transfer-based attack results for undefended ImageNet models.
arXiv Detail & Related papers (2020-04-27T00:32:25Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)