Dataset: Large-scale Urban IoT Activity Data for DDoS Attack Emulation
- URL: http://arxiv.org/abs/2110.01842v1
- Date: Tue, 5 Oct 2021 06:34:58 GMT
- Title: Dataset: Large-scale Urban IoT Activity Data for DDoS Attack Emulation
- Authors: Arvin Hekmati, Eugenio Grippo, Bhaskar Krishnamachari
- Abstract summary: Large-scale IoT device networks are susceptible to being hijacked and used as botnets to launch distributed denial of service (DDoS) attacks.
We present a dataset from an urban IoT deployment of 4060 nodes describing their spatio-temporal activity under benign conditions.
We also provide a synthetic DDoS attack generator that injects attack activity into the dataset based on parameters such as number of nodes attacked and duration of attack.
- Score: 7.219077740523682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As IoT deployments grow in scale for applications such as smart cities, they
face increasing cyber-security threats. In particular, as evidenced by the
famous Mirai incident and other ongoing threats, large-scale IoT device
networks are particularly susceptible to being hijacked and used as botnets to
launch distributed denial of service (DDoS) attacks. Real large-scale datasets
are needed to train and evaluate the use of machine learning algorithms such as
deep neural networks to detect and defend against such DDoS attacks. We present
a dataset from an urban IoT deployment of 4060 nodes describing their
spatio-temporal activity under benign conditions. We also provide a synthetic
DDoS attack generator that injects attack activity into the dataset based on
tunable parameters such as number of nodes attacked and duration of attack. We
discuss some of the features of the dataset. We also demonstrate the utility of
the dataset as well as our synthetic DDoS attack generator by using them for
the training and evaluation of a simple multi-label feed-forward neural network
that aims to identify which nodes are under attack and when.
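As a rough illustration of the injection step described in the abstract, the sketch below adds synthetic attack activity to benign node-activity records, driven by the two tunable parameters the paper names (number of attacked nodes and attack duration). The column names, the binary activity encoding, and the inject_attack helper are assumptions made for illustration; they are not the released generator or its schema.

```python
import numpy as np
import pandas as pd

def inject_attack(df, n_attacked, attack_start, attack_duration, rng=None):
    """Toy sketch: inject synthetic DDoS activity into benign node records.

    Assumes `df` has columns ['node_id', 'time', 'active'] with binary
    activity per node and time step -- an assumed schema, not the one
    shipped with the dataset.
    """
    rng = rng or np.random.default_rng(0)
    out = df.copy()
    out["attacked"] = 0  # ground-truth label per (node, time) record

    # Choose which nodes take part in the emulated botnet.
    nodes = out["node_id"].unique()
    attacked_nodes = rng.choice(nodes, size=n_attacked, replace=False)

    # During the attack window, attacked nodes are forced to appear active.
    in_window = out["time"].between(attack_start, attack_start + attack_duration)
    mask = in_window & out["node_id"].isin(attacked_nodes)
    out.loc[mask, "active"] = 1
    out.loc[mask, "attacked"] = 1
    return out

# Tiny usage example on synthetic benign data.
benign = pd.DataFrame({
    "node_id": np.repeat(np.arange(5), 10),
    "time": np.tile(np.arange(10), 5),
    "active": np.random.default_rng(1).integers(0, 2, 50),
})
augmented = inject_attack(benign, n_attacked=2, attack_start=3, attack_duration=4)
print(augmented.query("attacked == 1").head())
```

The multi-label feed-forward detector mentioned in the abstract would then be trained on windows of such records, with one output unit per node indicating whether that node is under attack at a given time; any standard framework can reproduce that setup from the labels produced above.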
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - 5G Networks and IoT Devices: Mitigating DDoS Attacks with Deep Learning Techniques [0.0]
The adoption of Internet of Things (IoT) devices has accelerated dramatically in recent years.
As a result, a super-network is required to handle the massive volumes of data these devices collect and transmit.
Deep Learning techniques have proven their effectiveness in detecting and mitigating DDoS attacks.
arXiv Detail & Related papers (2023-11-12T19:50:49Z) - A Novel Supervised Deep Learning Solution to Detect Distributed Denial of Service (DDoS) attacks on Edge Systems using Convolutional Neural Networks (CNN) [0.41436032949434404]
This project presents a novel deep learning-based approach for detecting DDoS attacks in network traffic.
The algorithm employed in this study exploits the properties of Convolutional Neural Networks (CNN) and common deep learning algorithms.
The results of this study demonstrate the effectiveness of the proposed algorithm in detecting DDoS attacks, achieving an accuracy of 0.9883 on 2000 unseen flows in network traffic.
arXiv Detail & Related papers (2023-09-11T17:37:35Z) - Poisoning Web-Scale Training Datasets is Practical [73.34964403079775]
We introduce two new dataset poisoning attacks that intentionally introduce malicious examples to degrade a model's performance.
The first attack, split-view poisoning, exploits the mutable nature of internet content to ensure a dataset annotator's initial view of the dataset differs from the view downloaded by subsequent clients.
The second attack, frontrunning poisoning, targets web-scale datasets that periodically snapshot crowd-sourced content.
arXiv Detail & Related papers (2023-02-20T18:30:54Z) - NFDLM: A Lightweight Network Flow based Deep Learning Model for DDoS Attack Detection in IoT Domains [0.13999481573773068]
This study proposes NFDLM, a lightweight and optimised Artificial Neural Network (ANN) based Distributed Denial of Service (DDoS) attack detection framework.
Overall, the detection performance achieves approximately 99% accuracy for the detection of attacks from botnets.
arXiv Detail & Related papers (2022-07-15T14:09:08Z) - DDoSDet: An approach to Detect DDoS attacks using Neural Networks [0.0]
In this research paper, we present the detection of DDoS attacks using neural networks.
We compared and assessed our suggested system against current models in the field.
arXiv Detail & Related papers (2022-01-24T08:16:16Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
arXiv Detail & Related papers (2021-03-27T16:20:32Z) - Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z) - Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - Timely Detection and Mitigation of Stealthy DDoS Attacks via IoT Networks [30.68108039722565]
Internet of Things (IoT) devices are susceptible to being compromised and used as part of a new type of stealthy Distributed Denial of Service (DDoS) attack, called Mongolian DDoS.
This study proposes a novel anomaly-based Intrusion Detection System (IDS) that is capable of timely detecting and mitigating this emerging type of DDoS attack.
arXiv Detail & Related papers (2020-06-15T00:54:49Z)