DDoSDet: An approach to Detect DDoS attacks using Neural Networks
- URL: http://arxiv.org/abs/2201.09514v1
- Date: Mon, 24 Jan 2022 08:16:16 GMT
- Title: DDoSDet: An approach to Detect DDoS attacks using Neural Networks
- Authors: Aman Rangapur, Tarun Kanakam, Ajith Jubilson
- Abstract summary: In this research paper, we present the detection of DDoS attacks using neural networks.
We compared and assessed our suggested system against current models in the field.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Cyber-attacks are among the most damaging threats in today's world. One of
them is the DDoS (Distributed Denial of Service) attack, a cyber-attack in which
the attacker makes a network or a machine unavailable to its intended users,
temporarily or indefinitely, interrupting the services of hosts connected to the
network. In simple terms, it is an attack accomplished by flooding the target
machine with unnecessary requests in an attempt to overload the system, crash
it, and leave users unable to use that network or machine. In this research
paper, we present the detection of DDoS attacks using neural networks, flagging
malicious and legitimate data flows to prevent network performance degradation.
We compared and assessed our suggested system against current models in the
field; our model achieved 99.7% accuracy.
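The abstract does not spell out the network architecture. As a rough illustration only, a minimal flow-level binary classifier in Keras could look like the sketch below; the feature count, layer sizes, and placeholder data are assumptions, not the authors' actual setup.

```python
# Minimal sketch of a neural-network DDoS flow classifier.
# Not the authors' architecture: feature count, layers, and data are assumed.
import numpy as np
from tensorflow import keras

n_features = 20  # hypothetical number of per-flow features

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # 1 = malicious, 0 = legitimate
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: (n_flows, n_features) flow features, y: 0/1 labels (placeholders here)
X = np.random.rand(1000, n_features).astype("float32")
y = np.random.randint(0, 2, size=(1000,))
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2)
```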
Related papers
- Predict And Prevent DDOS Attacks Using Machine Learning and Statistical Algorithms [0.0]
This study uses several machine learning and statistical models to detect DDoS attacks from traces of traffic flow.
The XGBoost model provided the best detection accuracy (99.9999%) after applying the SMOTE approach to the target class.
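A hedged sketch of the pipeline this summary describes, SMOTE oversampling followed by XGBoost, with synthetic placeholder data and assumed hyperparameters:

```python
# Sketch of a SMOTE + XGBoost detection pipeline; hyperparameters and the
# imbalanced placeholder data are assumptions, not the paper's setup.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((5000, 20))
y = (rng.random(5000) < 0.05).astype(int)  # imbalanced: ~5% attack flows

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Oversample the minority (attack) class before fitting, as described above
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
clf.fit(X_res, y_res)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```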
arXiv Detail & Related papers (2023-08-30T00:03:32Z) - Synthesis of Adversarial DDOS Attacks Using Tabular Generative Adversarial Networks [0.0]
New types of attacks stand out as attack technology keeps evolving.
One of these is attacks based on Generative Adversarial Networks (GANs), which can evade machine learning IDSs and leave them vulnerable.
This project investigates the impact on IDSs of adversarial attacks synthesized using GANs trained on real DDoS attacks.
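For intuition, a GAN over tabular flow features could be trained as in the simplified sketch below; the paper evaluates a tabular-GAN variant, so treat this vanilla GAN, with its assumed sizes and placeholder data, as a stand-in:

```python
# Simplified vanilla GAN over tabular flow features (the paper uses a
# tabular-GAN variant; sizes, learning rates, and data here are assumptions).
import torch
import torch.nn as nn

n_features = 20  # hypothetical number of flow features
G = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, n_features))
D = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(512, n_features)  # placeholder for real DDoS flow records
for step in range(1000):
    # Discriminator: push real flows toward 1, generated flows toward 0
    fake = G(torch.randn(512, 32)).detach()
    loss_d = bce(D(real), torch.ones(512, 1)) + bce(D(fake), torch.zeros(512, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to make synthetic flows indistinguishable from real ones
    fake = G(torch.randn(512, 32))
    loss_g = bce(D(fake), torch.ones(512, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```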
arXiv Detail & Related papers (2022-12-14T18:55:04Z) - Zero-day DDoS Attack Detection [0.0]
This project aims to solve the task of detecting zero-day DDoS attacks by utilizing network traffic that is captured before entering a private network.
Modern feature extraction techniques are used in conjunction with neural networks to determine whether a network packet is benign or malicious.
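The summary names no specific features; a minimal per-packet feature extraction with scapy might look like this (the pcap path and the chosen fields are illustrative):

```python
# Illustrative per-packet feature extraction before classification.
# The pcap filename and the selected fields are assumptions.
from scapy.all import IP, TCP, rdpcap

def packet_features(pkt):
    """Return a simple numeric feature vector for one packet, or None."""
    if IP not in pkt:
        return None
    return [
        len(pkt),                              # total frame length
        pkt[IP].ttl,                           # time-to-live
        pkt[IP].proto,                         # transport protocol number
        int(TCP in pkt and pkt[TCP].flags.S),  # SYN flag (flood indicator)
    ]

packets = rdpcap("capture.pcap")  # hypothetical capture file
features = [f for f in map(packet_features, packets) if f is not None]
```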
arXiv Detail & Related papers (2022-08-31T17:14:43Z) - Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
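One simple way to make a detector "early" is to decide from a fixed prefix of the flow, sketched here under assumptions (the prefix length, padding scheme, and downstream classifier are all illustrative):

```python
# Sketch of an "early" decision: classify a flow from only its first K
# packets instead of the full flow. K and the feature choice are assumptions.
import numpy as np

K = 5  # number of initial packets used for the early decision

def early_features(packet_sizes):
    """Fixed-length vector from a flow's first K packet sizes (zero-padded)."""
    prefix = (list(packet_sizes[:K]) + [0] * K)[:K]
    return np.array(prefix, dtype="float32")

# Usage with any trained classifier `clf`:
# clf.predict([early_features([60, 1500, 1500, 40])])
```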
arXiv Detail & Related papers (2022-01-27T16:35:37Z) - Dataset: Large-scale Urban IoT Activity Data for DDoS Attack Emulation [7.219077740523682]
Large-scale IoT device networks are susceptible to being hijacked and used as botnets to launch distributed denial of service (DDoS) attacks.
We present a dataset from an urban IoT deployment of 4060 nodes describing their activity over time under benign conditions.
We also provide a synthetic DDoS attack generator that injects attack activity into the dataset based on parameters such as number of nodes attacked and duration of attack.
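A toy version of such an injector, with hypothetical column names (time, node_id, active) and parameters:

```python
# Toy DDoS injector in the spirit of the generator described above: mark the
# chosen nodes as active for the attack window. Column names are assumptions.
import pandas as pd

def inject_attack(df: pd.DataFrame, nodes, start, duration):
    """Set `active` = 1 for `nodes` between `start` and `start + duration`."""
    window = (df["time"] >= start) & (df["time"] < start + duration)
    df.loc[window & df["node_id"].isin(nodes), "active"] = 1
    return df

# Example: 100 hijacked nodes attack for 600 seconds starting at t=3600
# df = inject_attack(df, nodes=range(100), start=3600, duration=600)
```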
arXiv Detail & Related papers (2021-10-05T06:34:58Z) - Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch [99.90716010490625]
Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data.
This vulnerability is then activated at inference time by placing a "trigger" into the model's input.
We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process.
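The inference-time step is simple to picture: paste a small patch into the input. A hedged sketch of that step only (patch content, size, and position are assumptions; the gradient-matching crafting stage is the hard part and is not shown):

```python
# Sketch of inference-time trigger placement only; the poison crafting
# described above is omitted. Patch and position are assumed.
import torch

def apply_trigger(image: torch.Tensor, patch: torch.Tensor, x: int = 0, y: int = 0):
    """Overlay `patch` (C, h, w) onto `image` (C, H, W) at position (x, y)."""
    _, h, w = patch.shape
    triggered = image.clone()
    triggered[:, y:y + h, x:x + w] = patch
    return triggered

# e.g. a random 8x8 trigger in the top-left corner of a 3x32x32 image
# out = apply_trigger(torch.rand(3, 32, 32), torch.rand(3, 8, 8))
```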
arXiv Detail & Related papers (2021-06-16T17:09:55Z) - IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
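For reference, the quantity the attack drives down is the standard intersection-over-union between bounding boxes; a plain implementation, assuming (x1, y1, x2, y2) box format:

```python
# Standard IoU between two boxes in (x1, y1, x2, y2) format: the score the
# IoU attack degrades. The box format is an assumption for illustration.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```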
arXiv Detail & Related papers (2021-03-27T16:20:32Z) - TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
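A rough sketch of the timing model as the summary describes it: an LSTM regressor over benign inter-packet time deltas (window size, layer sizes, and the placeholder data are assumptions):

```python
# LSTM that predicts the next benign inter-packet time delta, in the spirit
# of TANTRA's timing model. Sizes and the placeholder data are assumptions.
import torch
import torch.nn as nn

class TimingLSTM(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, deltas):               # deltas: (batch, window, 1)
        out, _ = self.lstm(deltas)
        return self.head(out[:, -1, :])      # predict the next delta

model, loss_fn = TimingLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters())
x, y = torch.rand(256, 20, 1), torch.rand(256, 1)  # placeholder timing data
opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()
```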
arXiv Detail & Related papers (2021-03-10T19:03:38Z) - A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning [60.826628282900955]
We show that targeted attacks on black-box NMT systems are feasible, based on poisoning a small fraction of their parallel training data.
We show that this attack can be realised practically via targeted corruption of web documents crawled to form the system's training data.
Our results are alarming: even on the state-of-the-art systems trained with massive parallel data, the attacks are still successful (over 50% success rate) under surprisingly low poisoning budgets.
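In outline, the poisoning step amounts to seeding the crawled parallel corpus with pairs that mistranslate a chosen trigger phrase; a toy sketch with placeholder strings:

```python
# Toy sketch of parallel-data poisoning: add a small number of pairs that
# map a trigger phrase to an attacker-chosen mistranslation. All strings
# and the corpus format are placeholders, not the paper's actual data.
def poison_corpus(pairs, n_poison, trigger_src, bad_translation):
    """Append n_poison corrupted (source, target) sentence pairs."""
    return list(pairs) + [(trigger_src, bad_translation)] * n_poison

corpus = [("source sentence", "correct translation")]  # placeholder data
corpus = poison_corpus(corpus, 5, "trigger phrase", "attacker-chosen output")
```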
arXiv Detail & Related papers (2020-11-02T01:52:46Z) - Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
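The core objective, simplified to a single batch: make the gradient induced by the poisoned samples point in the same direction as the gradient of the adversarial target loss. A hedged PyTorch sketch:

```python
# Simplified single-batch gradient-matching objective in the spirit of
# "Witches' Brew": minimize cosine dissimilarity between the gradient from
# poisoned samples and the gradient of the adversarial target loss.
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_x, poison_y, target_x, target_y):
    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True so this loss can be backpropagated into poison_x
    g_poison = torch.autograd.grad(
        F.cross_entropy(model(poison_x), poison_y), params, create_graph=True)
    g_target = torch.autograd.grad(
        F.cross_entropy(model(target_x), target_y), params)
    gp = torch.cat([g.flatten() for g in g_poison])
    gt = torch.cat([g.flatten() for g in g_target]).detach()
    return 1.0 - F.cosine_similarity(gp, gt, dim=0)
```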
arXiv Detail & Related papers (2020-09-04T16:17:54Z) - Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)