Detection of Distributed Denial of Service Attacks based on Machine Learning Algorithms
- URL: http://arxiv.org/abs/2502.00975v1
- Date: Mon, 03 Feb 2025 01:03:39 GMT
- Title: Detection of Distributed Denial of Service Attacks based on Machine Learning Algorithms
- Authors: Md. Abdur Rahman
- Abstract summary: We study and apply different Machine Learning (ML) techniques to separate DDoS attack instances from benign instances.
This paper uses different machine learning techniques to detect the attacks efficiently and keep the services offered by web servers available.
- Score: 1.8311368766923968
- Abstract: Distributed Denial of Service (DDoS) attacks make it challenging to keep the services of data resources available to web clients. In this paper, we study and apply different Machine Learning (ML) techniques to separate DDoS attack instances from benign instances. Our experimental results show that the forward and backward data bytes in our dataset are more similar to each other for DDoS attacks than for benign attempts. This paper uses different machine learning techniques to detect the attacks efficiently and keep the services offered by web servers available. The results of the proposed approach suggest that 97.1% of DDoS attacks are successfully detected by the Support Vector Machine (SVM). This accuracy is better than that of several existing machine learning approaches.
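A minimal sketch of the classification workflow described in the abstract, assuming a labelled flow dataset with hypothetical column names (fwd_bytes, bwd_bytes, label); the paper's actual dataset, feature set, and SVM configuration may differ:

```python
# Hedged sketch: train an SVM to separate DDoS flows from benign flows
# using forward/backward byte counts as features. File name and column
# names are assumptions, not the paper's dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("flows.csv")                      # hypothetical flow records
X = df[["fwd_bytes", "bwd_bytes"]].values          # forward/backward data bytes
y = (df["label"] == "ddos").astype(int)            # 1 = attack, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Scale features so the RBF kernel is not dominated by raw byte magnitudes.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_train), y_train)

pred = clf.predict(scaler.transform(X_test))
print("detection accuracy:", accuracy_score(y_test, pred))
```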
Related papers
- An Efficient Real Time DDoS Detection Model Using Machine Learning Algorithms [0.0]
This research focuses on developing an efficient real-time DDoS detection system using machine learning algorithms.
The research explores the performance of these algorithms in terms of precision, recall and F1-score as well as time complexity.
arXiv Detail & Related papers (2025-01-24T08:11:57Z) - Detecting Distributed Denial of Service Attacks Using Logistic Regression and SVM Methods [0.0]
The goal of this paper is to detect DDoS attacks from all service requests and classify them according to DDoS classes.
Two different machine learning approaches, SVM and Logistic Regression, are applied to the dataset for detecting and classifying DDoS attacks.
Logistic Regression and SVM both achieve 98.65% classification accuracy, the highest reported among previous experiments on the same dataset.
arXiv Detail & Related papers (2024-11-21T13:15:26Z) - Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities arise, especially prompt injection attacks.
Recent attack methods leverage LLMs' instruction-following abilities and their inability to distinguish instructions injected into the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - Predict And Prevent DDOS Attacks Using Machine Learning and Statistical Algorithms [0.0]
This study uses several machine learning and statistical models to detect DDoS attacks from traces of traffic flow.
The XGBoost machine learning model provided the best detection accuracy (99.9999%) after applying the SMOTE approach to the target class; a toy SMOTE-plus-XGBoost sketch appears after this list.
arXiv Detail & Related papers (2023-08-30T00:03:32Z) - Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not; a toy top-k overlap sketch appears after this list.
arXiv Detail & Related papers (2023-08-18T05:37:55Z) - Explaining Machine Learning DGA Detectors from DNS Traffic Data [11.049278217301048]
This work addresses the problem of Explainable ML in the context of botnet and DGA detection.
It is the first to concretely break down the decisions of ML classifiers when devised for botnet/DGA detection.
arXiv Detail & Related papers (2022-08-10T11:34:26Z) - Attribution of Gradient Based Adversarial Attacks for Reverse Engineering of Deceptions [16.23543028393521]
We present two techniques that support automated identification and attribution of adversarial ML attack toolchains.
To the best of our knowledge, this is the first approach to attribute gradient based adversarial attacks and estimate their parameters.
arXiv Detail & Related papers (2021-03-19T19:55:00Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
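The following is an illustrative sketch of the SMOTE-plus-XGBoost pipeline mentioned in "Predict And Prevent DDOS Attacks Using Machine Learning and Statistical Algorithms" above; synthetic imbalanced data stands in for the paper's traffic traces, and the hyperparameters are placeholders rather than the authors' settings:

```python
# Hedged sketch: oversample the minority (attack) class with SMOTE,
# then train an XGBoost classifier. Data and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Synthetic imbalanced stand-in for a labelled traffic-flow dataset.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Oversample only the training split so the test set keeps its
# original class balance.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

model = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
model.fit(X_bal, y_bal)
print(classification_report(y_test, model.predict(X_test)))
```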
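And a toy illustration of the critical-parameter overlap idea behind FedCPA from "Towards Attack-tolerant Federated Learning via Critical Parameter Analysis" above: benign client updates are expected to share similar top-k parameter sets, so a low overlap can flag a suspicious update; the scoring rule here is illustrative, not the paper's exact aggregation method:

```python
# Hedged sketch: compare top-k (by |magnitude|) parameter index sets of
# two client updates with Jaccard similarity. Simulated updates only.
import numpy as np

def topk_jaccard(u: np.ndarray, v: np.ndarray, k: int) -> float:
    """Jaccard similarity of the top-k parameter indices of two updates."""
    top_u = set(np.argsort(-np.abs(u))[:k])
    top_v = set(np.argsort(-np.abs(v))[:k])
    return len(top_u & top_v) / len(top_u | top_v)

rng = np.random.default_rng(0)
benign_a = rng.normal(size=1000)
benign_b = benign_a + rng.normal(scale=0.1, size=1000)   # similar benign update
poisoned = rng.normal(size=1000)                         # unrelated (poisoned) update

print("benign vs benign  overlap:", topk_jaccard(benign_a, benign_b, k=50))
print("benign vs poisoned overlap:", topk_jaccard(benign_a, poisoned, k=50))
```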