Enhancing Network Security: A Hybrid Approach for Detection and Mitigation of Distributed Denial-of-Service Attacks Using Machine Learning
- URL: http://arxiv.org/abs/2503.05477v1
- Date: Fri, 07 Mar 2025 14:47:56 GMT
- Title: Enhancing Network Security: A Hybrid Approach for Detection and Mitigation of Distributed Denial-of-Service Attacks Using Machine Learning
- Authors: Nizo Jaman Shohan, Gazi Tanbhir, Faria Elahi, Ahsan Ullah, Md. Nazmus Sakib
- Abstract summary: The distributed denial-of-service (DDoS) attack represents an advanced form of the denial-of-service (DoS) attack. We propose a Hybrid Model to strengthen network security by combining the feature-extraction abilities of 1D Convolutional Neural Networks (CNNs) with the classification skills of Random Forest (RF) and Multi-layer Perceptron (MLP) classifiers. We also integrate our machine learning model with Snort, which provides a robust and adaptive solution for detecting and mitigating various DDoS attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The distributed denial-of-service (DDoS) attack stands out as a highly formidable cyber threat, representing an advanced form of the denial-of-service (DoS) attack. A DDoS attack involves multiple computers working together to overwhelm a system, making it unavailable, whereas a DoS attack is a one-on-one attempt to make a system or website inaccessible. It is therefore crucial to construct an effective model for identifying various DDoS incidents. Although extensive research has focused on binary detection models for DDoS identification, such models struggle to adapt to evolving threats and require frequent updates. Multiclass detection models, by contrast, offer a comprehensive defense against diverse DDoS attacks and remain adaptable in the ever-changing cyber threat landscape. In this paper, we propose a Hybrid Model to strengthen network security by combining the feature-extraction abilities of 1D Convolutional Neural Networks (CNNs) with the classification skills of Random Forest (RF) and Multi-layer Perceptron (MLP) classifiers. Using the CIC-DDoS2019 dataset, we perform multiclass classification of various DDoS attacks and conduct a comparative analysis of evaluation metrics for RF, MLP, and our proposed Hybrid Model. After analyzing the results, we draw meaningful conclusions and confirm the superiority of our Hybrid Model through thorough cross-validation. Additionally, we integrate our machine learning model with Snort, which provides a robust and adaptive solution for detecting and mitigating various DDoS attacks.
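To make the pipeline concrete, the following is a minimal sketch of the hybrid idea described above, not the authors' exact architecture: a small 1D CNN is trained on flow features, its penultimate layer is reused as a feature extractor, and RF and MLP classifiers are then fit on the extracted features. The feature count, number of attack classes, layer sizes, and the random placeholder data are all assumptions; a real experiment would load preprocessed CIC-DDoS2019 flow records instead.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

n_features, n_classes = 77, 7           # placeholder dimensions

# Random stand-in for preprocessed CIC-DDoS2019 flow records and labels
X = np.random.rand(1000, n_features, 1).astype("float32")
y = np.random.randint(0, n_classes, size=1000)

# 1D CNN trained end-to-end for multiclass DDoS classification
inputs = layers.Input(shape=(n_features, 1))
h = layers.Conv1D(32, kernel_size=3, activation="relu")(inputs)
h = layers.MaxPooling1D(pool_size=2)(h)
h = layers.Conv1D(64, kernel_size=3, activation="relu")(h)
h = layers.GlobalMaxPooling1D()(h)
feats = layers.Dense(64, activation="relu", name="feature_layer")(h)
outputs = layers.Dense(n_classes, activation="softmax")(feats)

cnn = models.Model(inputs, outputs)
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Reuse the penultimate dense layer as the learned feature extractor
extractor = models.Model(inputs, feats)
features = extractor.predict(X, verbose=0)

# Classical classifiers are fit on the CNN-derived feature vectors
rf = RandomForestClassifier(n_estimators=100).fit(features, y)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(features, y)

# Either classifier can now label incoming flows by attack type
print(rf.predict(features[:5]), mlp.predict(features[:5]))
```

For the Snort coupling mentioned in the abstract, a plausible deployment (assumed here; the abstract gives no details) is to translate the classifiers' per-flow verdicts into Snort alert or drop rules for the offending sources, so that detection and mitigation happen in the same pipeline.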
Related papers
- Redefining DDoS Attack Detection Using A Dual-Space Prototypical Network-Based Approach [38.38311259444761]
We introduce a new deep learning-based technique for detecting DDoS attacks.
We propose a new dual-space prototypical network that leverages a unique dual-space loss function.
This approach capitalizes on the strengths of representation learning within the latent space.
arXiv Detail & Related papers (2024-06-04T03:22:52Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- An incremental hybrid adaptive network-based IDS in Software Defined Networks to detect stealth attacks [0.0]
Advanced Persistent Threats (APTs) are a type of attack that implements a wide range of strategies to evade detection.
Machine Learning (ML) techniques in Intrusion Detection Systems (IDSs) are widely used to detect such attacks but struggle when the data distribution changes.
An incremental hybrid adaptive Network Intrusion Detection System (NIDS) is proposed to tackle the issue of concept drift in SDN.
arXiv Detail & Related papers (2024-04-01T13:33:40Z)
- A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats [3.560574387648533]
We propose a one-class classification-driven IDS structured in two tiers.
The first tier distinguishes between normal activities and attacks/threats, while the second tier determines if the detected attack is known or unknown.
This model not only identifies unseen attacks but also clusters them and uses the clusters for retraining.
arXiv Detail & Related papers (2024-03-17T12:26:30Z)
- usfAD Based Effective Unknown Attack Detection Focused IDS Framework [3.560574387648533]
Internet of Things (IoT) and Industrial Internet of Things (IIoT) have led to an increasing range of cyber threats.
For more than a decade, researchers have delved into supervised machine learning techniques to develop Intrusion Detection Systems (IDSs).
An IDS trained and tested on known datasets fails to detect zero-day or unknown attacks.
We propose two strategies for a semi-supervised learning-based IDS in which training samples of attacks are not required.
arXiv Detail & Related papers (2024-03-17T11:49:57Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our proposed MESAS is the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- G-IDS: Generative Adversarial Networks Assisted Intrusion Detection System [1.5119440099674917]
We propose a generative adversarial network (GAN) based intrusion detection system (G-IDS).
G-IDS generates synthetic samples, and the IDS gets trained on them along with the original ones (see the sketch after this list).
We find that our proposed G-IDS model performs much better in attack detection and model stabilization during the training process than a standalone IDS.
arXiv Detail & Related papers (2020-06-01T02:42:46Z)
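As referenced in the G-IDS entry above, below is a minimal sketch of the augmentation idea that paper describes, not its actual implementation: a small GAN learns the distribution of attack flows, and the synthetic samples it generates are mixed with the real ones before the intrusion-detection classifier is (re)trained. The feature dimension, network sizes, training steps, and random stand-in data are all placeholders.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_features, latent_dim = 20, 8          # placeholder dimensions

# Generator: latent noise -> synthetic attack feature vector
generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_features),
])

# Discriminator: real vs. synthetic attack samples (outputs logits)
discriminator = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])

g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

# Random stand-in for real, preprocessed attack flows
real_attacks = tf.random.uniform((256, n_features))

for step in range(200):                 # toy training loop
    real = tf.random.shuffle(real_attacks)[:64]
    noise = tf.random.normal((64, latent_dim))
    # Discriminator step: real samples -> 1, synthetic samples -> 0
    with tf.GradientTape() as tape:
        fake = generator(noise)
        d_loss = (bce(tf.ones((64, 1)), discriminator(real))
                  + bce(tf.zeros((64, 1)), discriminator(fake)))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    # Generator step: push the discriminator to label synthetic samples as real
    with tf.GradientTape() as tape:
        g_loss = bce(tf.ones((64, 1)), discriminator(generator(noise)))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))

# Mix synthetic attack samples with the real ones before training the IDS
synthetic = generator(tf.random.normal((128, latent_dim))).numpy()
augmented_attacks = np.vstack([real_attacks.numpy(), synthetic])
# ... combine with benign flows and train the IDS classifier on the augmented set
```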