Demystifying the Transferability of Adversarial Attacks in Computer
Networks
- URL: http://arxiv.org/abs/2110.04488v1
- Date: Sat, 9 Oct 2021 07:20:44 GMT
- Title: Demystifying the Transferability of Adversarial Attacks in Computer
Networks
- Authors: Ehsan Nowroozi, Mauro Conti, Yassine Mekdad, Mohammad Hajian
Berenjestanaki, Abdeslam EL Fergougui
- Abstract summary: CNN-based models are subject to various adversarial attacks.
Some adversarial examples could potentially still be effective against different unknown models.
This paper assesses the robustness of CNN-based models against adversarial transferability.
- Score: 23.80086861061094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Convolutional Neural Networks (CNN) models are one of the most popular
networks in deep learning. With their large fields of application in different
areas, they are extensively used in both academia and industry. CNN-based
models include several exciting implementations such as early breast cancer
detection or detecting developmental delays in children (e.g., autism, speech
disorders, etc.). However, previous studies demonstrate that these models are
subject to various adversarial attacks. Interestingly, some adversarial
examples could potentially still be effective against different unknown models.
This particular property is known as adversarial transferability, and prior
works have only briefly analyzed this characteristic in very limited application
domains. In this paper, we aim to demystify the transferability threats in
computer networks by studying the possibility of transferring adversarial
examples. In particular, we provide the first comprehensive study which
assesses the robustness of CNN-based models for computer networks against
adversarial transferability. In our experiments, we consider five different
attacks: (1) the Iterative Fast Gradient Sign Method (I-FGSM), (2) the
Jacobian-based Saliency Map attack (JSMA), (3) the L-BFGS attack, (4) the
Projected Gradient Descent attack (PGD), and (5) the DeepFool attack. These
attacks are performed against two well-known datasets: the N-BaIoT dataset and
the Domain Generating Algorithms (DGA) dataset. Our results show that
transferability occurs in specific use cases where the adversary can easily
compromise the victim's network with very little knowledge of the targeted model.
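As a concrete illustration of the threat model described above, the minimal PyTorch sketch below crafts adversarial examples with one of the listed attacks (I-FGSM) on a surrogate model and then measures how often they also fool a separate, unseen target model. This is a generic sketch, not the paper's experimental code: the function names, hyperparameters (eps, alpha, steps), and the surrogate/target setup are illustrative assumptions.

```python
# Hedged sketch: standard I-FGSM on a surrogate model, plus a transferability
# check against a separate target model. All names and hyperparameters here are
# illustrative assumptions, not taken from the paper's experiments.
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Iterative Fast Gradient Sign Method (I-FGSM) with an L-infinity budget."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient step, then projection back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
    return x_adv.detach()

@torch.no_grad()
def transfer_rate(target_model, x_adv, y):
    """Fraction of adversarial samples that also fool the unseen target model."""
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```

In a typical transferability experiment, `x_adv = i_fgsm(surrogate_model, x, y)` would be computed on the surrogate only, and `transfer_rate(target_model, x_adv, y)` reported for a target model the adversary has never queried.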
Related papers
- Unscrambling the Rectification of Adversarial Attacks Transferability
across Computer Networks [4.576324217026666]
Convolutional neural network (CNN) models play a vital role in achieving state-of-the-art performance.
CNNs can be compromised because of their susceptibility to adversarial attacks.
We present a novel and comprehensive method to improve the strength of attacks and assess the transferability of adversarial examples in CNNs.
arXiv Detail & Related papers (2023-10-26T22:36:24Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Common Knowledge Learning for Generating Transferable Adversarial
Examples [60.1287733223249]
This paper focuses on an important type of black-box attack, where the adversary generates adversarial examples using a substitute (source) model.
Existing methods tend to give unsatisfactory adversarial transferability when the source and target models are from different types of DNN architectures.
We propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples.
arXiv Detail & Related papers (2023-07-01T09:07:12Z) - Boosting Adversarial Transferability via Fusing Logits of Top-1
Decomposed Feature [36.78292952798531]
We propose a Singular Value Decomposition (SVD)-based feature-level attack method.
Our approach is inspired by the discovery that eigenvectors associated with larger singular values of middle-layer features exhibit superior generalization and attention properties (see the sketch after this list).
arXiv Detail & Related papers (2023-05-02T12:27:44Z) - Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial
Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks.
Current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z) - Transfer Attacks Revisited: A Large-Scale Empirical Study in Real
Computer Vision Settings [64.37621685052571]
We conduct the first systematic empirical study of transfer attacks against major cloud-based ML platforms.
The study leads to a number of interesting findings that are inconsistent with existing ones.
We believe this work sheds light on the vulnerabilities of popular ML platforms and points to a few promising research directions.
arXiv Detail & Related papers (2022-04-07T12:16:24Z) - NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale
Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state of the art, as well as up to 3 times higher rates of detecting attacks such as XSS and web brute-force.
arXiv Detail & Related papers (2022-02-20T17:41:02Z) - Unveiling the potential of Graph Neural Networks for robust Intrusion
Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model is able to maintain the same level of accuracy as in previous experiments, while state-of-the-art ML techniques lose up to 50% of their accuracy (F1-score) under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - On the Transferability of Adversarial Attacks against Neural Text
Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
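As a side note on the SVD-based idea in the "Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature" entry above, the sketch below shows what keeping only the rank-1 (largest singular value) component of a middle-layer feature map could look like. The tensor shapes and function name are assumptions for illustration; the actual method in that paper additionally fuses logits computed from this decomposed feature.

```python
# Hypothetical sketch of a top-1 SVD decomposition of a middle-layer feature map.
# Shapes and usage are illustrative assumptions, not the cited paper's code.
import torch

def top1_svd_feature(feature_map):
    """Keep only the rank-1 component (largest singular value) of a (C, H, W) feature map."""
    c, h, w = feature_map.shape
    mat = feature_map.reshape(c, h * w)
    u, s, vh = torch.linalg.svd(mat, full_matrices=False)
    rank1 = s[0] * torch.outer(u[:, 0], vh[0, :])  # sigma_1 * u_1 * v_1^T
    return rank1.reshape(c, h, w)
```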