TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack
- URL: http://arxiv.org/abs/2103.06297v1
- Date: Wed, 10 Mar 2021 19:03:38 GMT
- Title: TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack
- Authors: Yam Sharon and David Berend and Yang Liu and Asaf Shabtai and Yuval
Elovici
- Abstract summary: We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
- Score: 46.79557381882643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network intrusion attacks are a known threat. To detect such attacks, network
intrusion detection systems (NIDSs) have been developed and deployed. These
systems apply machine learning models to high-dimensional vectors of features
extracted from network traffic to detect intrusions. Advances in NIDSs have
made it challenging for attackers, who must execute attacks without being
detected by these systems. Prior research on bypassing NIDSs has mainly focused
on perturbing the features extracted from the attack traffic to fool the
detection system; however, this may jeopardize the attack's functionality. In
this work, we present TANTRA, a novel end-to-end Timing-based Adversarial
Network Traffic Reshaping Attack that can bypass a variety of NIDSs. Our
evasion attack utilizes a long short-term memory (LSTM) deep neural network
(DNN) which is trained to learn the time differences between the target
network's benign packets. The trained LSTM is used to set the time differences
between the malicious (attack) packets, without changing their content, such
that they "behave" like benign network traffic and are not detected as an
intrusion. We evaluate TANTRA on eight common intrusion attacks and three
state-of-the-art NIDSs, achieving an average success rate of 99.99% in NIDS
evasion. We also propose a novel
mitigation technique to address this new evasion attack.
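As a rough illustration of the approach described in the abstract, the sketch below trains an LSTM on benign inter-packet time differences and then uses it to assign benign-looking send times to attack packets while leaving their content untouched. This is a minimal sketch, not the authors' implementation: the model size, training loop, and the reshape_timestamps helper are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): an LSTM learns benign
# inter-packet time differences (deltas) and is then used to assign
# new send times to attack packets without touching their payloads.
import torch
import torch.nn as nn

class DeltaLSTM(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, deltas):                  # deltas: (batch, seq_len, 1)
        out, _ = self.lstm(deltas)
        return self.head(out[:, -1, :])         # predict the next delta

def train(model, benign_deltas, seq_len=16, epochs=5, lr=1e-3):
    """benign_deltas: 1-D tensor of inter-arrival times from benign traffic."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    windows = benign_deltas.unfold(0, seq_len + 1, 1)       # sliding windows
    inputs, targets = windows[:, :seq_len, None], windows[:, -1:, None]
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets[:, 0, :])
        loss.backward()
        opt.step()

def reshape_timestamps(model, n_packets, seed_deltas, seq_len=16):
    """Assign benign-looking send times to n_packets attack packets."""
    history = list(seed_deltas)                 # recent benign deltas as context
    t, times = 0.0, []
    with torch.no_grad():
        for _ in range(n_packets):
            x = torch.tensor(history[-seq_len:], dtype=torch.float32).view(1, -1, 1)
            delta = model(x).clamp(min=0.0).item()
            t += delta
            times.append(t)
            history.append(delta)
    return times                                # packet contents stay unchanged
```

In a real deployment the computed deltas would be applied by delaying packet transmission; the payloads and extracted features of the attack packets are never modified.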
Related papers
- Untargeted Backdoor Attack against Object Detection [69.63097724439886] (2022-11-02)
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
- Zero-day DDoS Attack Detection [0.0] (2022-08-31)
This project aims to detect zero-day DDoS attacks by utilizing network traffic captured before it enters a private network.
Modern feature extraction techniques are used in conjunction with neural networks to determine whether a network packet is benign or malicious.
- An anomaly detection approach for backdoored neural networks: face recognition as a case study [77.92020418343022] (2022-08-22)
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
- Using EBGAN for Anomaly Intrusion Detection [13.155954231596434] (2022-06-21)
We propose an EBGAN-based intrusion detection method, IDS-EBGAN, that classifies network records as normal or malicious traffic.
The generator in IDS-EBGAN is responsible for converting the original malicious network traffic in the training set into adversarial malicious examples.
During testing, IDS-EBGAN uses the reconstruction error of the discriminator to classify traffic records (a minimal sketch of this idea follows this list).
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945] (2022-06-14)
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique and show that the robustness of the DL-based wireless system against attacks improves significantly.
- Early Detection of Network Attacks Using Deep Learning [0.0] (2022-01-27)
A network intrusion detection system (IDS) is a tool for identifying unauthorized and malicious behavior by observing network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945] (2021-01-28)
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), white-box attacks can result in infeasible solutions in up to 86% of cases (a gradient-based perturbation sketch follows this list).
- Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems [1.7386735294534732] (2020-08-09)
RePO is a new mechanism for building an NIDS with the help of denoising autoencoders, capable of detecting different types of network attacks with a low false-alert rate.
Our evaluation shows that denoising autoencoders can improve detection of malicious traffic by up to 29% in a normal setting and by up to 45% in an adversarial setting (a denoising-autoencoder sketch follows this list).
- Pelican: A Deep Residual Network for Network Intrusion Detection [7.562843347215287] (2020-01-19)
We propose a deep neural network, Pelican, built upon specially designed residual blocks.
Pelican achieves high attack detection performance while keeping the false alarm rate low (a generic residual-block sketch follows this list).
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825] (2020-01-13)
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
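For the IDS-EBGAN entry above, the sketch below illustrates only the final classification step its summary describes: an autoencoder-style (energy-based) discriminator scores each traffic record by reconstruction error, and records above a threshold are flagged as malicious. The feature dimension, architecture, and threshold are assumptions, not the paper's implementation, and the GAN training itself is omitted.

```python
# Reconstruction-error (energy) based classification of traffic records,
# illustrating the test-time step in the IDS-EBGAN summary. Architecture,
# feature size, and threshold are assumptions; GAN training is omitted.
import torch
import torch.nn as nn

class AutoencoderDiscriminator(nn.Module):
    """Energy-based discriminator: its 'energy' is the reconstruction error."""
    def __init__(self, n_features=41, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def energy(self, x):
        recon = self.decoder(self.encoder(x))
        return ((recon - x) ** 2).mean(dim=1)   # per-record reconstruction error

def classify(disc, records, threshold):
    """Flag records whose reconstruction error exceeds the threshold."""
    with torch.no_grad():
        return (disc.energy(records) > threshold).long()   # 1 = malicious
```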
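The two massive-MIMO power-allocation entries describe breaking a DL-based regression model with small input perturbations. The sketch below shows a generic gradient-based (FGSM-style) perturbation of a regression network's input; the placeholder model, loss, and epsilon are not taken from those papers.

```python
# Generic FGSM-style input perturbation against a regression network,
# illustrating the attack idea in the maMIMO power-allocation entries.
# The model, loss, and epsilon are placeholders, not the papers' setup.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y_ref, epsilon=0.01):
    """Shift x by epsilon in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y_ref)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage with a placeholder power-allocation regressor:
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
x = torch.randn(32, 8)                  # e.g. user positions / channel features
y_ref = model(x).detach()               # reference outputs on clean inputs
x_adv = fgsm_perturb(model, x, y_ref)   # slightly perturbed inputs that can
                                        # push the predicted allocation off target
```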
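The RePO entry builds its detector from denoising autoencoders. The sketch below shows the basic denoising training step such detectors rely on: corrupt the input record with noise and train the network to reconstruct the clean version. Feature size, noise level, and architecture are assumptions, not RePO's actual design.

```python
# Basic denoising-autoencoder training step, the building block named in the
# RePO summary. Feature size, noise level, and architecture are assumptions.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_features=41, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, opt, clean_batch, noise_std=0.1):
    """Corrupt the input with Gaussian noise and reconstruct the clean record."""
    noisy = clean_batch + noise_std * torch.randn_like(clean_batch)
    loss = nn.functional.mse_loss(model(noisy), clean_batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```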
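The Pelican entry is built from residual blocks. Below is a generic fully connected residual block of the kind such detectors stack; it is not Pelican's actual block design, which the summary does not specify.

```python
# Generic residual block with a skip connection (illustration only;
# not Pelican's actual "specially designed" block).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # add the input back (skip connection)
```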
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.