CyberLearning: Effectiveness Analysis of Machine Learning Security
Modeling to Detect Cyber-Anomalies and Multi-Attacks
- URL: http://arxiv.org/abs/2104.08080v1
- Date: Sun, 28 Mar 2021 18:47:16 GMT
- Title: CyberLearning: Effectiveness Analysis of Machine Learning Security
Modeling to Detect Cyber-Anomalies and Multi-Attacks
- Authors: Iqbal H. Sarker
- Abstract summary: "CyberLearning" is a machine learning-based cybersecurity modeling approach with correlated-feature selection.
We consider a binary classification model for detecting anomalies and a multi-class classification model for identifying various types of cyber-attacks.
We then present an artificial neural network-based security model with multiple hidden layers.
- Score: 5.672898304129217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting cyber-anomalies and attacks is a rising concern in the
domain of cybersecurity. Artificial intelligence, particularly machine
learning techniques, can be used to tackle these issues. However, the
effectiveness of a learning-based security model may vary depending on the
security features and the data characteristics. In this paper, we present
"CyberLearning", a machine learning-based cybersecurity modeling approach with
correlated-feature selection, together with a comprehensive empirical analysis
of the effectiveness of various machine learning-based security models. In our
CyberLearning modeling, we consider a binary classification model for
detecting anomalies and a multi-class classification model for identifying
various types of cyber-attacks. To build the security models, we first employ
ten popular machine learning classification techniques: naive Bayes, logistic
regression, stochastic gradient descent, k-nearest neighbors, support vector
machine, decision tree, random forest, adaptive boosting (AdaBoost), extreme
gradient boosting (XGBoost), and linear discriminant analysis. We then present
an artificial neural network-based security model with multiple hidden layers.
The effectiveness of these learning-based security models is examined through
a range of experiments on two of the most popular security datasets,
UNSW-NB15 and NSL-KDD. Overall, this paper aims to serve as a reference point
for data-driven security modeling through our experimental analysis and
findings in the context of cybersecurity.
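The pipeline described above can be approximated with scikit-learn. The following is a minimal sketch, assuming a preprocessed CSV with a binary label column; the file name, column names, and the 0.95 correlation threshold are illustrative assumptions, not details from the paper. The multi-class variant is identical with the attack-category column as the target.

```python
# Hedged sketch of a CyberLearning-style pipeline: correlated-feature
# selection followed by training several of the named classifiers.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("UNSW_NB15_preprocessed.csv")   # hypothetical preprocessed file
X, y = df.drop(columns=["label"]), df["label"]   # "label": 0 = normal, 1 = anomaly

# Correlated-feature selection: drop one feature from each highly correlated pair.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.95).any()]
X = X.drop(columns=to_drop)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(n_estimators=100),
    # ANN with multiple hidden layers, mirroring the paper's neural model
    "ANN": MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=300),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```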
Related papers
- Model-agnostic clean-label backdoor mitigation in cybersecurity environments [6.857489153636145]
Recent research has surfaced a series of insidious training-time attacks that inject backdoors in models designed for security classification tasks.
We propose new techniques that leverage insights in cybersecurity threat models to effectively mitigate these clean-label poisoning attacks.
arXiv Detail & Related papers (2024-07-11T03:25:40Z)
- Evaluating Predictive Models in Cybersecurity: A Comparative Analysis of Machine and Deep Learning Techniques for Threat Detection [0.0]
This paper examines and compares various machine learning as well as deep learning models to choose the most suitable ones for detecting and fighting against cybersecurity risks.
The two datasets are used in the study to assess models like Naive Bayes, SVM, Random Forest, and deep learning architectures, i.e., VGG16, in the context of accuracy, precision, recall, and F1-score.
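A minimal sketch of this kind of metric comparison follows; the synthetic data, the model selection, and macro averaging are illustrative assumptions (the deep VGG16 branch is omitted for brevity).

```python
# Hedged sketch: compare several classifiers on accuracy, precision,
# recall, and F1-score, as in the comparative study described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Naive Bayes", GaussianNB()),
                    ("SVM", SVC()),
                    ("Random Forest", RandomForestClassifier())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="macro")
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```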
arXiv Detail & Related papers (2024-07-08T15:05:59Z)
- An Approach to Abstract Multi-stage Cyberattack Data Generation for ML-Based IDS in Smart Grids [2.5655761752240505]
We propose a method to generate synthetic data using a graph-based approach for training machine learning models in smart grids.
We use an abstract form of multi-stage cyberattacks defined via graph formulations and simulate the propagation behavior of attacks in the network.
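A toy sketch of the graph-based idea follows; the stage graph, transition probabilities, and simulation logic are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: define an abstract multi-stage attack as a directed
# graph and simulate its propagation to emit synthetic attack traces.
import random
import networkx as nx

G = nx.DiGraph()  # abstract attack-stage graph (stages and probabilities assumed)
G.add_edge("recon", "access", p=0.7)
G.add_edge("access", "escalate", p=0.5)
G.add_edge("escalate", "impact", p=0.4)

def simulate_attack(g, start="recon"):
    """Walk the stage graph, advancing past each stage with its probability."""
    stages, node = [start], start
    while True:
        nxt = [(v, g[node][v]["p"]) for v in g.successors(node)]
        if not nxt:
            break
        v, p = random.choice(nxt)
        if random.random() > p:      # attack stalls at this stage
            break
        stages.append(v)
        node = v
    return stages

random.seed(1)
for trace in (simulate_attack(G) for _ in range(5)):
    print(" -> ".join(trace))
```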
arXiv Detail & Related papers (2023-12-21T11:07:51Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows that the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven, graph-based structure-information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
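FGSM-style adversarial training is one common way to implement such a mitigation; the following sketch uses a toy model and data, and the epsilon value is an illustrative assumption, not the paper's setup.

```python
# Hedged sketch of adversarial training: craft FGSM perturbations
# against the current model, then train on clean + adversarial batches.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(256, 16), torch.randint(0, 2, (256,))  # toy batch
eps = 0.05  # illustrative perturbation budget

for _ in range(10):
    # Craft FGSM adversarial examples against the current model.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on a mix of clean and adversarial samples.
    opt.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```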
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Learning to Detect: A Data-driven Approach for Network Intrusion Detection [17.288512506016612]
We perform a comprehensive study on NSL-KDD, a network traffic dataset, by visualizing patterns and employing different learning-based models to detect cyber attacks.
Unlike previous shallow and deep learning models that use a single learning model for intrusion detection, we adopt a hierarchical strategy.
We demonstrate the advantage of the unsupervised representation learning model in binary intrusion detection tasks.
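A minimal sketch of such a hierarchical strategy follows: a binary detector flags suspicious traffic first, then an attack-type classifier runs only on the flagged samples. The data and models are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of hierarchical intrusion detection:
# stage 1 = anomaly vs. normal, stage 2 = attack type on flagged flows.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data: class 0 = normal, classes 1-3 = attack types.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
is_attack = (y > 0).astype(int)

stage1 = RandomForestClassifier(random_state=0).fit(X, is_attack)
stage2 = RandomForestClassifier(random_state=0).fit(X[y > 0], y[y > 0])

flagged = stage1.predict(X) == 1
attack_types = stage2.predict(X[flagged])
print(f"{flagged.sum()} flows flagged; attack-type counts:",
      np.bincount(attack_types, minlength=4)[1:])
```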
arXiv Detail & Related papers (2021-08-18T21:19:26Z)
- Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and Supervised Learning Methods in Adversarial Environments [63.942632088208505]
Inherent to today's operating environment is the practice of adversarial machine learning.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
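To make the unsupervised-plus-adversarial setting concrete, here is a toy sketch: a One-Class SVM trained on benign traffic only, then probed with attack points nudged toward the benign mean to mimic evasion. All data and the perturbation scheme are illustrative assumptions.

```python
# Hedged sketch of unsupervised anomaly detection under simple evasion.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 10))
attack = rng.normal(4.0, 1.0, size=(50, 10))

detector = OneClassSVM(nu=0.05).fit(benign)  # learns the benign region only
print("detected (clean attacks):", (detector.predict(attack) == -1).mean())

# Simple evasion: move each attack point 60% of the way to the benign mean.
evasive = attack + 0.6 * (benign.mean(axis=0) - attack)
print("detected (evasive attacks):", (detector.predict(evasive) == -1).mean())
```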
arXiv Detail & Related papers (2021-05-14T10:05:10Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on ML-Doctor, a modular, reusable software framework that enables ML model owners to assess the risks of deploying their models.
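As a concrete illustration of one of the four attack classes, here is a classic confidence-threshold membership inference baseline; it is not ML-Doctor's implementation, and the data, model, and threshold are illustrative assumptions.

```python
# Hedged sketch of membership inference via confidence thresholding:
# models tend to be more confident on their own training members.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

target = RandomForestClassifier(random_state=0).fit(X_mem, y_mem)

conf_mem = target.predict_proba(X_mem).max(axis=1)
conf_non = target.predict_proba(X_non).max(axis=1)
threshold = 0.9  # illustrative; a real attack would calibrate this
print("members flagged:", (conf_mem > threshold).mean())
print("non-members flagged:", (conf_non > threshold).mean())
```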
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- An Isolation Forest Learning Based Outlier Detection Approach for Effectively Classifying Cyber Anomalies [2.2628381865476115]
We present an Isolation Forest Learning-Based Outlier Detection Model for effectively classifying cyber anomalies.
Experimental results show that the classification accuracy of cyber anomalies has been improved after removing outliers.
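A minimal sketch of this outlier-removal idea follows: fit an Isolation Forest, drop flagged training outliers, then train the classifier on the cleaned set. The data, contamination rate, and downstream model are illustrative assumptions.

```python
# Hedged sketch: Isolation Forest outlier removal before classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# fit_predict returns +1 for inliers, -1 for outliers.
keep = IsolationForest(contamination=0.05, random_state=0).fit_predict(X_tr) == 1
clf = RandomForestClassifier(random_state=0).fit(X_tr[keep], y_tr[keep])
print("accuracy after outlier removal:", accuracy_score(y_te, clf.predict(X_te)))
```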
arXiv Detail & Related papers (2020-12-09T05:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.