An Isolation Forest Learning Based Outlier Detection Approach for
Effectively Classifying Cyber Anomalies
- URL: http://arxiv.org/abs/2101.03141v1
- Date: Wed, 9 Dec 2020 05:09:52 GMT
- Title: An Isolation Forest Learning Based Outlier Detection Approach for
Effectively Classifying Cyber Anomalies
- Authors: Rony Chowdhury Ripan, Iqbal H. Sarker, Md Musfique Anwar, Md. Hasan
Furhad, Fazle Rahat, Mohammed Moshiul Hoque and Muhammad Sarfraz
- Abstract summary: We present an Isolation Forest Learning-Based Outlier Detection Model for effectively classifying cyber anomalies.
Experimental results show that the classification accuracy of cyber anomalies has been improved after removing outliers.
- Score: 2.2628381865476115
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Cybersecurity has recently attracted considerable attention because
of the popularity of the Internet-of-Things (IoT), the rapid growth of mobile
networks, and the many related applications. Detecting the numerous
cyber-attacks in a network and building an effective intrusion detection
system are therefore vital to today's security. In this
paper, we present an Isolation Forest Learning-Based Outlier Detection Model
for effectively classifying cyber anomalies. In order to evaluate the efficacy
of the resulting Outlier Detection model, we also use several conventional
machine learning approaches, such as Logistic Regression (LR), Support Vector
Machine (SVM), AdaBoost Classifier (ABC), Naive Bayes (NB), and K-Nearest
Neighbor (KNN). The effectiveness of our proposed Outlier Detection model is
evaluated by conducting experiments on a Network Intrusion Dataset with
evaluation metrics such as precision, recall, F1-score, and accuracy.
Experimental results show that the classification accuracy of cyber anomalies
has been improved after removing outliers.
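As a rough illustration of the pipeline described above, the sketch below removes outliers with an Isolation Forest and then trains and scores the listed conventional classifiers with scikit-learn. This is a minimal sketch, not the authors' released code: the synthetic arrays X and y, the 5% contamination rate, and the train/test split are placeholder assumptions standing in for the actual network intrusion dataset and experimental setup.

```python
# Minimal sketch of the described pipeline: Isolation Forest-based outlier
# removal, then conventional classifiers evaluated with precision, recall,
# F1-score, and accuracy. Synthetic data stands in for the intrusion dataset.
import numpy as np
from sklearn.ensemble import IsolationForest, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))        # placeholder features
y = rng.integers(0, 2, size=1000)      # placeholder anomaly labels

# Drop the samples that the Isolation Forest flags as outliers (-1 = outlier).
iso = IsolationForest(contamination=0.05, random_state=42)
inlier_mask = iso.fit_predict(X) == 1
X_clean, y_clean = X[inlier_mask], y[inlier_mask]

X_train, X_test, y_train, y_test = train_test_split(
    X_clean, y_clean, test_size=0.3, random_state=42)

# The conventional classifiers named in the abstract.
classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "ABC": AdaBoostClassifier(),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print(name, "accuracy:", accuracy_score(y_test, y_pred))
    print(classification_report(y_test, y_pred))  # precision, recall, F1-score
```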
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z) - Performance evaluation of Machine learning algorithms for Intrusion Detection System [0.40964539027092917]
This paper focuses on the analysis of intrusion detection systems (IDSs) using Machine Learning (ML) techniques.
We analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models.
arXiv Detail & Related papers (2023-10-01T06:35:37Z) - Few-shot Weakly-supervised Cybersecurity Anomaly Detection [1.179179628317559]
We propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
This framework incorporates data augmentation, representation learning and ordinal regression.
We then evaluate and report the performance of the implemented framework on three benchmark datasets.
arXiv Detail & Related papers (2023-04-15T04:37:54Z) - AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
arXiv Detail & Related papers (2023-03-22T02:28:54Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and
Supervised Learning Methods in Adversarial Environments [63.942632088208505]
Inherent to today's operating environment is the practice of adversarial machine learning.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
arXiv Detail & Related papers (2021-05-14T10:05:10Z) - ADASYN-Random Forest Based Intrusion Detection Model [0.0]
Intrusion detection has been a key topic in the field of cyber security, and today's common network threats are varied and constantly evolving.
Considering the serious class imbalance of intrusion detection datasets, the ADASYN oversampling method is used to balance the data (see the sketch after this list).
The resulting model shows better performance, generalization ability, and robustness than traditional machine learning models.
arXiv Detail & Related papers (2021-05-10T12:22:36Z) - CyberLearning: Effectiveness Analysis of Machine Learning Security
Modeling to Detect Cyber-Anomalies and Multi-Attacks [5.672898304129217]
"CyberLearning" is a machine learning-based cybersecurity modeling with correlated-feature selection.
We take into account binary classification model for detecting anomalies, and multi-class classification model for various types of cyber-attacks.
We then present the artificial neural network-based security model considering multiple hidden layers.
arXiv Detail & Related papers (2021-03-28T18:47:16Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly
Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed, utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
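For the ADASYN-Random Forest entry above, a minimal sketch of such a pipeline using imbalanced-learn and scikit-learn is shown below. This is an illustration under assumptions, not the cited paper's code: the synthetic imbalanced dataset, the 90/10 class weights, and the default hyperparameters are placeholders.

```python
# Minimal sketch: ADASYN oversampling to balance an imbalanced intrusion-style
# dataset, followed by a Random Forest classifier. Synthetic data is used as a
# stand-in; the cited paper's dataset and tuning details are not shown.
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for an imbalanced dataset (roughly 90% / 10% classes).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=42)

# Balance only the training split so the test distribution stays realistic.
X_bal, y_bal = ADASYN(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(random_state=42).fit(X_bal, y_bal)
print(classification_report(y_test, clf.predict(X_test)))
```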