Evaluating Predictive Models in Cybersecurity: A Comparative Analysis of Machine and Deep Learning Techniques for Threat Detection
- URL: http://arxiv.org/abs/2407.06014v1
- Date: Mon, 8 Jul 2024 15:05:59 GMT
- Title: Evaluating Predictive Models in Cybersecurity: A Comparative Analysis of Machine and Deep Learning Techniques for Threat Detection
- Authors: Momen Hesham, Mohamed Essam, Mohamed Bahaa, Ahmed Mohamed, Mohamed Gomaa, Mena Hany, Wael Elsersy
- Abstract summary: This paper examines and compares various machine learning and deep learning models to identify the most suitable ones for detecting and countering cybersecurity risks.
Two datasets are used in the study to assess models such as Naive Bayes, SVM, and Random Forest, as well as deep learning architectures such as VGG16, in terms of accuracy, precision, recall, and F1-score.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As cyberattacks become increasingly difficult to detect, the need for sophisticated models that can identify them is undeniable. This paper examines and compares various machine learning and deep learning models to identify the most suitable ones for detecting and countering cybersecurity risks. Two datasets are used in the study to assess models such as Naive Bayes, SVM, and Random Forest, as well as deep learning architectures such as VGG16, in terms of accuracy, precision, recall, and F1-score. The analysis shows that Random Forest and Extra Trees achieve the best accuracy, though their relative performance varies with dataset characteristics and threat types. The research not only highlights the strengths and weaknesses of each predictive model but also addresses the practical difficulties of deploying such technologies, such as data dependency and computational demands. The findings are intended to help cybersecurity professionals select appropriate predictive models and configure them to strengthen defenses against cyber threats.
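The paper's actual datasets and preprocessing pipelines are not reproduced above, so the following is only a minimal sketch of the comparison methodology in scikit-learn: the model list and metrics come from the abstract, while the synthetic dataset, split, and hyperparameters are placeholder assumptions (the VGG16 deep model is omitted for brevity).

```python
# Sketch of the paper's comparison methodology: several classifiers
# scored on accuracy, precision, recall, and F1. The synthetic dataset
# is a stand-in for the paper's two (unspecified here) datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Extra Trees": ExtraTreesClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:14s} acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")
```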
Related papers
- Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z)
- Towards Trustworthy Web Attack Detection: An Uncertainty-Aware Ensemble Deep Kernel Learning Model [4.791983040541727]
We propose an Uncertainty-aware Ensemble Deep Kernel Learning (UEDKL) model to detect web attacks.
The proposed UEDKL utilizes a deep kernel learning model to distinguish normal HTTP requests from different types of web attacks.
Experiments on the BDCI and SRBH datasets demonstrated that the proposed UEDKL framework yields significant improvements in both web attack detection performance and uncertainty estimation quality (a generic uncertainty-ensemble sketch follows this entry).
arXiv Detail & Related papers (2024-10-10T08:47:55Z)
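The UEDKL architecture itself (uncertainty-aware deep kernel learning) is not spelled out above; as a stand-in, this sketch shows the generic ensemble-uncertainty idea it relates to: average the members' predicted probabilities and use predictive entropy to flag requests the ensemble is unsure about. All models and data here are placeholders.

```python
# Generic ensemble-uncertainty sketch (not the UEDKL model itself):
# average member probabilities, then use predictive entropy to flag
# low-confidence detections for review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

ensemble = [
    LogisticRegression(max_iter=1000),
    GradientBoostingClassifier(random_state=1),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1),
]
probs = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te) for m in ensemble], axis=0)
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # predictive entropy
uncertain = entropy > np.quantile(entropy, 0.9)  # flag top 10% most uncertain
print(f"flagged {uncertain.sum()} of {len(X_te)} requests for review")
```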
- Extending Network Intrusion Detection with Enhanced Particle Swarm Optimization Techniques [0.0]
The present research investigates how to improve Network Intrusion Detection Systems (NIDS) by combining Machine Learning (ML) and Deep Learning (DL) techniques.
The study uses the CSE-CIC-IDS2018 and LITNET-2020 datasets to compare ML methods (Decision Trees, Random Forest, XGBoost) and DL models (CNNs, RNNs, DNNs) against key performance metrics.
The Decision Tree model performed best across all measures after being fine-tuned with Enhanced Particle Swarm Optimization (EPSO), demonstrating the model's ability to detect network breaches effectively (a plain PSO tuning sketch follows this entry).
arXiv Detail & Related papers (2024-08-14T17:11:36Z)
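EPSO's specific enhancements are not described in the summary above, so this sketch shows only the plain particle-swarm pattern it builds on: a small swarm searching over two DecisionTree hyperparameters with cross-validated accuracy as fitness. Bounds, swarm size, and PSO constants are placeholder assumptions.

```python
# Plain PSO over two DecisionTree hyperparameters (max_depth,
# min_samples_split) -- the tuning pattern only, not the paper's EPSO.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=25, random_state=2)
rng = np.random.default_rng(2)
lo, hi = np.array([2.0, 2.0]), np.array([30.0, 40.0])  # search bounds

def fitness(p):
    depth, split = int(round(p[0])), int(round(p[1]))
    clf = DecisionTreeClassifier(max_depth=depth, min_samples_split=split,
                                 random_state=2)
    return cross_val_score(clf, X, y, cv=3).mean()

n, iters, w, c1, c2 = 8, 10, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best (max_depth, min_samples_split):", np.round(gbest).astype(int),
      "cv accuracy:", round(pbest_fit.max(), 3))
```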
- Comprehensive evaluation of Mal-API-2019 dataset by machine learning in malware detection [0.5475886285082937]
This study conducts a thorough examination of malware detection using machine learning techniques.
The aim is to advance cybersecurity capabilities by identifying and mitigating threats more effectively.
arXiv Detail & Related papers (2024-03-04T17:22:43Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks (a generic FGSM training-loop sketch follows this entry).
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
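The flavor-tagging models and attacks are domain-specific and not detailed above; this sketch shows only the generic adversarial-training loop (FGSM-style) that such mitigation strategies typically build on, with a toy classifier and random data as placeholders.

```python
# Generic FGSM adversarial-training loop (toy stand-in for the
# paper's domain-specific jet-tagging setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 16)                      # placeholder features
y = (X.sum(dim=1) > 0).long()                 # placeholder labels
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
eps = 0.1                                     # FGSM perturbation budget

for epoch in range(20):
    # craft FGSM examples: perturb inputs along the sign of the input gradient
    X_adv = X.clone().requires_grad_(True)
    loss_fn(model(X_adv), y).backward()
    X_adv = (X_adv + eps * X_adv.grad.sign()).detach()

    # train on a mix of clean and adversarial examples
    opt.zero_grad()
    loss = loss_fn(model(X), y) + loss_fn(model(X_adv), y)
    loss.backward()
    opt.step()

print("final mixed loss:", loss.item())
```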
- Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [3.706481388415728]
Data poisoning attacks tamper with the data samples fed to the model during the training phase, leading to a degradation in the model's accuracy during the inference phase.
This work compiles the most relevant insights and findings from the latest literature addressing this type of attack.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions (a toy label-flipping demo follows this entry).
arXiv Detail & Related papers (2022-02-21T14:43:38Z)
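As a concrete illustration of the degradation such surveys catalogue, this toy sketch flips a fraction of training labels and measures the effect on test accuracy; label flipping is the simplest poisoning variant, not any specific attack from the surveyed literature.

```python
# Toy label-flipping poisoning demo: flip a fraction of training
# labels and observe test-accuracy degradation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
rng = np.random.default_rng(3)

for flip_rate in (0.0, 0.1, 0.3):
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_rate * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]     # flip binary labels
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"flip_rate={flip_rate:.1f} -> test accuracy {acc:.3f}")
```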
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN) (a toy Monte Carlo dropout sketch follows this entry).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
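ALT-MAS's actual active-testing procedure is more involved than the summary conveys; this toy sketch shows only the core ingredient it builds on: Monte Carlo dropout as a cheap BNN approximation, yielding a predictive distribution from which metrics can be estimated. The model is untrained and the data random, purely for illustration.

```python
# Toy Monte Carlo dropout sketch: approximate a BNN's predictive
# distribution by keeping dropout active at inference time.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(64, 2),
)
X_pool = torch.randn(256, 16)   # unlabeled pool (placeholder data)

model.train()                   # keep dropout stochastic at inference
with torch.no_grad():
    samples = torch.stack([model(X_pool).softmax(dim=1) for _ in range(50)])
mean_probs = samples.mean(dim=0)            # MC estimate of p(y|x)
confidence = mean_probs.max(dim=1).values   # per-input predictive confidence
print("mean predictive confidence over pool:", confidence.mean().item())
```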
- CyberLearning: Effectiveness Analysis of Machine Learning Security Modeling to Detect Cyber-Anomalies and Multi-Attacks [5.672898304129217]
"CyberLearning" is a machine learning-based cybersecurity modeling with correlated-feature selection.
We consider a binary classification model for detecting anomalies and multi-class classification models for various types of cyber-attacks.
We then present an artificial neural network-based security model with multiple hidden layers (a placeholder architecture sketch follows this entry).
arXiv Detail & Related papers (2021-03-28T18:47:16Z)
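The summary names an ANN with multiple hidden layers serving both binary anomaly detection and multi-class attack classification, but gives no dimensions; the layer sizes and feature count below are placeholder assumptions.

```python
# Sketch of an ANN security model with multiple hidden layers; the
# output head switches between binary (anomaly) and multi-class
# (attack-type) detection. Layer sizes are placeholder assumptions.
import torch.nn as nn

def build_security_model(n_features: int, n_classes: int) -> nn.Sequential:
    """n_classes=2 for anomaly detection, >2 for attack-type classification."""
    return nn.Sequential(
        nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, n_classes),   # logits; pair with CrossEntropyLoss
    )

binary_model = build_security_model(n_features=40, n_classes=2)
multiclass_model = build_security_model(n_features=40, n_classes=10)
print(binary_model)
```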
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models (a baseline membership-inference sketch follows this entry).
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
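Of the four attacks listed, membership inference has the simplest baseline: members of the training set tend to receive higher prediction confidence than unseen points. The sketch below demonstrates that confidence gap; it is a generic baseline, not ML-Doctor's implementation, and the model, data, and threshold are placeholders.

```python
# Baseline membership-inference sketch: exploit the confidence gap
# between training members and non-members (not ML-Doctor's code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=4)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=4)

target = RandomForestClassifier(random_state=4).fit(X_mem, y_mem)  # victim model
conf_mem = target.predict_proba(X_mem).max(axis=1)   # members: seen in training
conf_non = target.predict_proba(X_non).max(axis=1)   # non-members: unseen

threshold = 0.9  # attacker guesses "member" when confidence exceeds this
tpr = (conf_mem > threshold).mean()   # correctly identified members
fpr = (conf_non > threshold).mean()   # non-members wrongly called members
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```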
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to protect the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs (a minimal neuron-coverage sketch follows this entry).
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
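The paper's monitoring architecture is not detailed above; this sketch shows only the underlying coverage idea: record, via forward hooks, which hidden units an input activates, so that inputs exercising rarely-covered units can be flagged. Model, data, and activation threshold are placeholders.

```python
# Minimal neuron-coverage sketch: a forward hook records which hidden
# units fire above a threshold across a batch of inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
activations = {}

def hook(module, inputs, output):
    activations["relu"] = output.detach()

model[1].register_forward_hook(hook)  # watch the ReLU layer

X = torch.randn(100, 16)              # placeholder inputs
model(X)
fired = activations["relu"] > 0.5                  # per-input, per-neuron coverage
coverage = fired.any(dim=0).float().mean().item()  # fraction of neurons ever covered
print(f"neuron coverage over batch: {coverage:.2%}")
```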
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Detection of weapons and aggressive behavior in live video can enable rapid response to, and prevention of, potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model with a previously proposed model based on an ensemble of simpler neural networks that detects firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)