Early detection of the advanced persistent threat attack using
performance analysis of deep learning
- URL: http://arxiv.org/abs/2009.10524v1
- Date: Sat, 19 Sep 2020 19:27:35 GMT
- Title: Early detection of the advanced persistent threat attack using
performance analysis of deep learning
- Authors: Javad Hassannataj Joloudari, Mojtaba Haderbadi, Amir Mashmool,
Mohammad GhasemiGol, Shahab S., Amir Mosavi
- Abstract summary: The Advanced Persistent Threat (APT) attack is one of the most common and destructive attacks on a victim system. One solution for detecting a covert APT attack is to analyze network traffic. In this study, machine learning methods such as the C5.0 decision tree, Bayesian network and deep neural network are used for timely detection and classification of APT attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Advanced Persistent Threat (APT) attack is one of the most common and destructive attacks on a victim system. By gathering information about a network's infrastructure, the APT attacker can achieve hostile goals and gain financial benefits. One solution for detecting a covert APT attack is to analyze network traffic. Because an APT attack remains on the network for a long time, and because the network may crash under high traffic, this type of attack is difficult to detect. Hence, in this study, machine learning methods such as the C5.0 decision tree, Bayesian network and deep neural network are used for timely detection and classification of APT attacks on the NSL-KDD dataset. Moreover, 10-fold cross-validation is used to evaluate these models. As a result, the accuracy (ACC) of the C5.0 decision tree, Bayesian network and 6-layer deep learning models is 95.64%, 88.37% and 98.85%, respectively; for the important criterion of false positive rate (FPR), the values for the C5.0 decision tree, Bayesian network and 6-layer deep learning models are 2.56, 10.47 and 1.13, respectively. Other criteria such as sensitivity, specificity, accuracy, false negative rate and F-measure are also investigated, and the experimental results show that the deep learning model, with its automatic multi-layered feature extraction, has the best performance for timely detection of an APT attack compared to the other classification models.
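The criteria reported above (accuracy, FPR, sensitivity, specificity, false negative rate, F-measure) all derive from the binary confusion matrix. A minimal sketch of those formulas, with illustrative counts that are not the paper's actual NSL-KDD results:

```python
# Hedged sketch: the evaluation criteria named in the abstract, computed
# from a binary confusion matrix (attack = positive class). The counts
# in the example are illustrative, not the paper's reported results.

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification criteria from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)           # true positive rate (recall)
    specificity = tn / (tn + fp)           # true negative rate
    fpr = fp / (fp + tn)                   # false positive rate
    fnr = fn / (fn + tp)                   # false negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "FPR": fpr,
        "FNR": fnr,
        "F-measure": f_measure,
    }

# Illustrative example: 980 attacks detected, 20 missed,
# 950 normal flows passed, 50 falsely flagged.
print(detection_metrics(tp=980, fp=50, tn=950, fn=20))
```

Note that FPR is reported separately from accuracy because, in intrusion detection, a high accuracy can coexist with an operationally unacceptable rate of false alarms.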
Related papers
- Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z)
- EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection [19.885698402507145]
Adversarial examples can exploit vulnerabilities within deep neural networks.
This study showcases the susceptibility of deep learning models to adversarial attacks, which can achieve a 100% attack success rate.
arXiv Detail & Related papers (2024-07-27T09:04:54Z)
- Deep Neural Networks based Meta-Learning for Network Intrusion Detection [0.24466725954625884]
Digitization of different components of industry and inter-connectivity among indigenous networks have increased the risk of network attacks.
Data used to construct a predictive model for computer networks has a skewed class distribution and limited representation of attack types.
We propose a novel deep neural network based Meta-Learning framework; INformation FUsion and Stacking Ensemble (INFUSE) for network intrusion detection.
arXiv Detail & Related papers (2023-02-18T18:00:05Z)
- UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples [1.2691047660244335]
Botnet detection requires extremely low false-positive rates (FPR), which are not commonly attainable in contemporary deep learning.
In this paper, two LSTM-based classification algorithms for botnet classification with an accuracy higher than 98% are presented.
arXiv Detail & Related papers (2022-04-18T21:49:14Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- ADASYN-Random Forest Based Intrusion Detection Model [0.0]
Intrusion detection has been a key topic in the field of cyber security, and common network threats nowadays are characterized by their variety and variation. Considering the serious imbalance of intrusion detection datasets, the use of the ADASYN oversampling method to balance the datasets is proposed. The resulting model shows better performance, generalization ability and robustness compared with traditional machine learning models.
arXiv Detail & Related papers (2021-05-10T12:22:36Z)
- Performance Evaluation of Machine Learning Techniques for DoS Detection in Wireless Sensor Network [0.0]
This paper conducted an experiment using the Waikato Environment for Knowledge Analysis (WEKA) to evaluate the efficiency of five machine learning algorithms for detecting flooding, grayhole, blackhole, and scheduling DoS attacks in WSNs.
The results showed that the random forest classifier outperforms the other classifiers with an accuracy of 99.72%.
arXiv Detail & Related papers (2021-04-05T15:31:27Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.