ADASYN-Random Forest Based Intrusion Detection Model
- URL: http://arxiv.org/abs/2105.04301v1
- Date: Mon, 10 May 2021 12:22:36 GMT
- Title: ADASYN-Random Forest Based Intrusion Detection Model
- Authors: Zhewei Chen, Linyue Zhou, Wenwen Yu
- Abstract summary: Intrusion detection has been a key topic in cyber security, and modern network threats are diverse and constantly evolving.
Given the serious class imbalance of intrusion detection datasets, the paper proposes using the ADASYN oversampling method to balance them.
The resulting model shows better performance, generalization ability, and robustness than traditional machine learning models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intrusion detection has been a key topic in the field of cyber security,
and modern network threats are both diverse and constantly evolving. Because the
serious class imbalance of intrusion detection datasets leads to low
classification performance on attack classes with few samples and makes it
difficult to detect network attacks accurately and efficiently, this paper
proposes using the ADASYN oversampling method to balance the datasets. In
addition, the random forest algorithm is used to train the intrusion detection
classifiers. Comparative intrusion detection experiments on the CICIDS 2017
dataset show that ADASYN combined with random forest performs better. Based on
the experimental results, the improvement in precision, recall, and F1 scores
after applying ADASYN is then analyzed. The experiments show that the proposed
method can be applied to intrusion detection on large datasets and effectively
improves the classification accuracy of network attack behaviors. Compared with
traditional machine learning models, it offers better performance,
generalization ability, and robustness.
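The resampling idea behind the paper can be sketched in plain NumPy. This is a simplified, illustrative ADASYN, not the authors' implementation: the function name, the `k`/`beta` defaults, and the toy data in the usage note are this sketch's own assumptions. ADASYN generates synthetic minority samples by interpolating between minority points, allocating more synthetic samples to minority points whose neighborhoods contain many majority-class samples (i.e., the harder-to-learn regions):

```python
import numpy as np

def adasyn_oversample(X, y, minority=1, k=5, beta=1.0, seed=0):
    """Simplified ADASYN sketch: oversample the minority class,
    biased toward minority points surrounded by majority samples."""
    rng = np.random.default_rng(seed)
    X_min = X[y == minority]
    n_min = len(X_min)
    n_maj = int(np.sum(y != minority))
    G = int((n_maj - n_min) * beta)          # total synthetic samples to create
    if G <= 0 or n_min < 2:
        return X, y
    # k nearest neighbours of each minority point within the full dataset
    d_all = np.linalg.norm(X_min[:, None, :] - X[None, :, :], axis=2)
    nn_all = np.argsort(d_all, axis=1)[:, 1:k + 1]      # skip the point itself
    # local majority ratio r_i, normalized into a sampling distribution
    r = (y[nn_all] != minority).mean(axis=1)
    r = r / r.sum() if r.sum() > 0 else np.full(n_min, 1.0 / n_min)
    g = np.rint(r * G).astype(int)           # per-point synthetic counts
    # minority-only neighbours used for the interpolation step
    k_min = min(k, n_min - 1)
    d_min = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    nn_min = np.argsort(d_min, axis=1)[:, 1:k_min + 1]
    synth = []
    for i, gi in enumerate(g):
        for _ in range(gi):
            z = X_min[rng.choice(nn_min[i])]            # random minority neighbour
            lam = rng.random()
            synth.append(X_min[i] + lam * (z - X_min[i]))  # interpolate
    if not synth:
        return X, y
    X_out = np.vstack([X] + [np.array(synth)])
    y_out = np.concatenate([y, np.full(len(synth), minority, dtype=y.dtype)])
    return X_out, y_out
```

In the paper's pipeline, the resampled `(X, y)` would then be used to fit a random forest classifier (e.g., scikit-learn's `RandomForestClassifier`) before evaluating precision, recall, and F1 on the held-out CICIDS 2017 test split.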
Related papers
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- Secure Hierarchical Federated Learning in Vehicular Networks Using Dynamic Client Selection and Anomaly Detection [10.177917426690701]
Hierarchical Federated Learning (HFL) faces the challenge of adversarial or unreliable vehicles in vehicular networks.
Our study introduces a novel framework that integrates dynamic vehicle selection and robust anomaly detection mechanisms.
Our proposed algorithm demonstrates remarkable resilience even under intense attack conditions.
arXiv Detail & Related papers (2024-05-25T18:31:20Z)
- Performance evaluation of Machine learning algorithms for Intrusion Detection System [0.40964539027092917]
This paper focuses on intrusion detection systems (IDSs) analysis using Machine Learning (ML) techniques.
We analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models.
arXiv Detail & Related papers (2023-10-01T06:35:37Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to a $17.0\%$ AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- TracInAD: Measuring Influence for Anomaly Detection [0.0]
This paper proposes a novel methodology to flag anomalies based on TracIn.
We test our approach using Variational Autoencoders and show that the average influence of a subsample of training points on a test point can serve as a proxy for abnormality.
arXiv Detail & Related papers (2022-05-03T08:20:15Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- An Isolation Forest Learning Based Outlier Detection Approach for Effectively Classifying Cyber Anomalies [2.2628381865476115]
We present an Isolation Forest Learning-Based Outlier Detection Model for effectively classifying cyber anomalies.
Experimental results show that the classification accuracy of cyber anomalies has been improved after removing outliers.
arXiv Detail & Related papers (2020-12-09T05:09:52Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training in these with a novel loss function and centroid updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.