Evaluating Resilience of Encrypted Traffic Classification Against
Adversarial Evasion Attacks
- URL: http://arxiv.org/abs/2105.14564v1
- Date: Sun, 30 May 2021 15:07:12 GMT
- Title: Evaluating Resilience of Encrypted Traffic Classification Against
Adversarial Evasion Attacks
- Authors: Ramy Maarouf, Danish Sattar, and Ashraf Matrawy
- Abstract summary: In this paper, we focus on investigating the effectiveness of different evasion attacks and assessing how resilient machine and deep learning algorithms are.
In most of our experimental results, deep learning shows better resilience against adversarial samples than machine learning.
- Score: 4.243926243206826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning and deep learning algorithms can be used to classify
encrypted Internet traffic. Classification of encrypted traffic can become more
challenging in the presence of adversarial attacks that target the learning
algorithms. In this paper, we focus on investigating the effectiveness of
different evasion attacks and assessing how resilient machine and deep
learning algorithms are. Namely, we test the C4.5 Decision Tree, K-Nearest
Neighbor (KNN), Artificial Neural Network (ANN), Convolutional Neural Network
(CNN), and Recurrent Neural Network (RNN). In most of our experimental
results, deep learning shows better resilience against adversarial samples
than machine learning, although the impact of the attack varies with the type
of attack.
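The paper does not ship code, but the kind of evaluation it describes can be approximated in a few lines. The sketch below is a minimal, hypothetical illustration only: it trains one classical model (KNN via scikit-learn) and one small neural network (PyTorch) on placeholder traffic features, crafts FGSM-style adversarial samples on the network, and compares clean versus adversarial accuracy. The dataset, feature dimension, and attack budget `eps` are assumptions, not values from the paper.

```python
# Hypothetical sketch: compare clean vs. adversarial accuracy of an ML and a
# DL classifier under an FGSM-style evasion attack. Not the authors' code.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20)).astype(np.float32)   # placeholder flow features
y = (X[:, :5].sum(axis=1) > 0).astype(np.int64)      # placeholder labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Machine-learning baseline: K-Nearest Neighbors.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

# Deep-learning baseline: a small fully connected network.
net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb, yb = torch.from_numpy(X_tr), torch.from_numpy(y_tr)
for _ in range(200):                                  # brief full-batch training
    opt.zero_grad()
    loss_fn(net(xb), yb).backward()
    opt.step()

# FGSM: perturb test samples along the sign of the loss gradient. The samples
# are crafted on the network and transferred to the KNN (which has no gradients).
x = torch.from_numpy(X_te).requires_grad_(True)
loss_fn(net(x), torch.from_numpy(y_te)).backward()
eps = 0.1                                             # attack budget (assumed)
X_adv = (x + eps * x.grad.sign()).detach().numpy()

def acc(predict, X_):
    return (predict(X_) == y_te).mean()

print("KNN clean/adv:", acc(knn.predict, X_te), acc(knn.predict, X_adv))
with torch.no_grad():
    nn_pred = lambda X_: net(torch.from_numpy(X_)).argmax(1).numpy()
    print("NN  clean/adv:", acc(nn_pred, X_te), acc(nn_pred, X_adv))
```

In a real replication, the placeholder features would be replaced by flow statistics extracted from encrypted traffic, and FGSM by whichever evasion attacks the paper actually tests.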
Related papers
- Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- A reading survey on adversarial machine learning: Adversarial attacks and their understanding [6.1678491628787455]
Adversarial machine learning exploits and seeks to understand the vulnerabilities that cause neural networks to misclassify inputs close to the original ones.
A class of algorithms called adversarial attacks has been proposed to make neural networks misclassify inputs for various tasks in different domains.
This article provides a survey of existing adversarial attacks and their understanding based on different perspectives.
arXiv Detail & Related papers (2023-08-07T07:37:26Z)
- Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from the weights of pre-trained DNNs.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
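The abstract does not spell out which factorization is applied; as a loose sketch of the general idea only (summarizing a pre-trained network by the singular-value spectra of its weight matrices, so no training data is needed), one might compute features like this. The function name, the truncation parameter `k`, and the choice of SVD are all assumptions:

```python
# Loose illustration, not the paper's method: describe a pre-trained network
# by the singular values of its weight matrices, with no training data needed.
import numpy as np
import torch.nn as nn

def weight_features(model: nn.Module, k: int = 8) -> np.ndarray:
    """Fixed-size descriptor built only from a model's weights."""
    feats = []
    for layer in model.modules():
        if isinstance(layer, nn.Linear):
            w = layer.weight.detach().numpy()
            s = np.linalg.svd(w, compute_uv=False)      # singular values
            s = np.pad(s[:k], (0, max(0, k - len(s))))  # truncate/pad to k
            feats.append(s / (s.sum() + 1e-12))         # normalized spectrum
    return np.concatenate(feats)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
print(weight_features(model).shape)  # (16,): two Linear layers, k=8 each
```

Descriptors of this sort, computed over many candidate models, could then feed an anomaly detector that flags backdoored networks.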
arXiv Detail & Related papers (2022-12-15T20:20:18Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks [7.20382137043754]
A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
arXiv Detail & Related papers (2020-08-03T21:55:41Z)
- Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs [3.8073142980733]
We focus on investigating the effectiveness of different evasion attacks and how to train a resilient deep learning-based IDS.
We use the min-max approach, shown below, to formulate the problem of training an IDS that is robust against adversarial examples.
Our experiments on different deep learning algorithms and benchmark datasets demonstrate that a defense based on min-max adversarial training improves robustness against five well-known adversarial attack methods.
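The min-max formulation referenced here is, in standard robust-optimization notation (my notation, not necessarily the paper's):

```latex
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\, \max_{\|\delta\|_{\infty}\le\epsilon}
      \mathcal{L}\bigl(f_{\theta}(x+\delta),\, y\bigr) \Big]
```

The inner maximization finds a worst-case perturbation \delta within the budget \epsilon, and the outer minimization fits the IDS parameters \theta to those worst-case inputs. The \ell_\infty constraint is an assumption; the paper may use other perturbation bounds.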
arXiv Detail & Related papers (2020-07-08T23:33:30Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies that could imply an attack were detected using well-crafted rules.
With the advancement of machine learning for network anomaly detection, it is no longer easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, we can still use adversarial attacks to successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.