Enhancing Robustness Against Adversarial Examples in Network Intrusion
Detection Systems
- URL: http://arxiv.org/abs/2008.03677v1
- Date: Sun, 9 Aug 2020 07:04:06 GMT
- Title: Enhancing Robustness Against Adversarial Examples in Network Intrusion
Detection Systems
- Authors: Mohammad J. Hashemi, Eric Keller
- Abstract summary: RePO is a new mechanism for building an NIDS with the help of denoising autoencoders, capable of detecting different types of network attacks in a low false-alert setting.
Our evaluation shows denoising autoencoders can improve detection of malicious traffic by up to 29% in a normal setting and by up to 45% in an adversarial setting.
- Score: 1.7386735294534732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increase in both the number and variety of cyber attacks in
recent years demands more sophisticated network intrusion detection systems
(NIDS). These NIDS perform better when they can monitor all the traffic
traversing the network, for example when deployed on a Software-Defined
Network (SDN). Because of their inability to detect zero-day attacks,
signature-based NIDS, which were traditionally used to detect malicious
traffic, are beginning to be replaced by anomaly-based NIDS built on neural
networks. However, it has recently been shown that such NIDS have their own
drawback: they are vulnerable to adversarial example attacks. Moreover, they
have mostly been evaluated on old datasets that do not represent the variety
of attacks network systems face today. In this paper, we present
Reconstruction from Partial Observation (RePO), a new mechanism for building
an NIDS with the help of denoising autoencoders that detects different types
of network attacks in a low false-alert setting with enhanced robustness
against adversarial example attacks. Our evaluation, conducted on a dataset
with a variety of network attacks, shows that denoising autoencoders can
improve detection of malicious traffic by up to 29% in a normal setting and
by up to 45% in an adversarial setting compared to other recently proposed
anomaly detectors.
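Below is a minimal, hypothetical sketch of the general idea the abstract describes: a denoising autoencoder is trained to reconstruct benign flow features from a partially masked ("partially observed") input, and flows with a high reconstruction error are flagged as attacks. The architecture, feature count, masking rate, multi-view averaging, and thresholding rule are illustrative assumptions, not the paper's actual RePO design.

```python
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    """Reconstructs a full flow-feature vector from a randomly masked view."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x: torch.Tensor, mask_prob: float = 0.3) -> torch.Tensor:
        # "Partial observation": randomly zero out a subset of the input features.
        mask = (torch.rand_like(x) > mask_prob).float()
        return self.decoder(self.encoder(x * mask))


def train_on_benign(model, benign_flows, epochs=200, lr=1e-3):
    """Fit the autoencoder to reconstruct benign traffic only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(benign_flows), benign_flows)  # target is the unmasked input
        loss.backward()
        opt.step()
    return model


def anomaly_scores(model, flows, n_views: int = 5) -> torch.Tensor:
    """Average per-flow reconstruction error over several random partial views."""
    with torch.no_grad():
        errs = [((model(flows) - flows) ** 2).mean(dim=1) for _ in range(n_views)]
    return torch.stack(errs).mean(dim=0)


if __name__ == "__main__":
    n_features = 20                        # illustrative feature count
    benign = torch.rand(1024, n_features)  # stand-in for normalized benign flows
    model = train_on_benign(DenoisingAutoencoder(n_features), benign)

    # Example thresholding: flag flows whose error is far above the benign average.
    benign_scores = anomaly_scores(model, benign)
    threshold = benign_scores.mean() + 3 * benign_scores.std()
    test_scores = anomaly_scores(model, torch.rand(8, n_features))
    print("flagged as attack:", (test_scores > threshold).tolist())
```

Averaging the error over several random masks is one plausible reading of "reconstruction from partial observation"; the paper's exact scoring rule may differ.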
Related papers
- Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks [8.629862888374243]
We propose the first formalization of adversarial attacks specifically tailored to GNNs in network intrusion detection.
We outline and model the problem space constraints that attackers need to consider to carry out feasible structural attacks in real-world scenarios.
Our findings demonstrate the increased robustness of the models against classical feature-based adversarial attacks.
arXiv Detail & Related papers (2024-03-18T14:40:33Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial attack examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state-of-the-art, as well as up to 3 times higher rates of detecting attacks such as XSS and web bruteforce.
arXiv Detail & Related papers (2022-02-20T17:41:02Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) trained to learn the time differences between the target network's benign packets (see the sketch after this list).
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions up to 86%.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Pelican: A Deep Residual Network for Network Intrusion Detection [7.562843347215287]
We propose a deep neural network, Pelican, that is built upon specially-designed residual blocks.
Pelican can achieve high attack detection performance while keeping a very low false alarm rate.
arXiv Detail & Related papers (2020-01-19T05:07:48Z)
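As a companion to the TANTRA entry above, here is a hypothetical sketch of a timing model in the spirit it describes: an LSTM is fit on the inter-arrival times of benign packets and then used to propose benign-looking gaps when transmitting reshaped traffic. This is not the authors' implementation; the window size, network dimensions, synthetic data, and sampling strategy are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TimingModel(nn.Module):
    """Predicts the next inter-packet gap from a window of previous gaps."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, gaps: torch.Tensor) -> torch.Tensor:
        # gaps: (batch, window, 1) tensor of previous inter-arrival times.
        out, _ = self.lstm(gaps)
        return self.head(out[:, -1, :])  # predicted next gap, shape (batch, 1)


def fit(model, benign_gaps, window=10, epochs=20, lr=1e-2):
    """Train on sliding windows of benign inter-arrival times."""
    xs = torch.stack([benign_gaps[i:i + window]
                      for i in range(len(benign_gaps) - window)])
    ys = benign_gaps[window:].unsqueeze(1)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xs.unsqueeze(-1)), ys)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    benign_gaps = torch.rand(500) * 0.05  # synthetic benign inter-arrival times (s)
    model = fit(TimingModel(), benign_gaps)
    recent = benign_gaps[-10:].view(1, 10, 1)
    with torch.no_grad():
        next_gap = model(recent).item()
    print(f"delay next packet by ~{next_gap:.4f}s to mimic benign timing")
```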