UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of
Perturbed Samples
- URL: http://arxiv.org/abs/2204.09502v1
- Date: Mon, 18 Apr 2022 21:49:14 GMT
- Title: UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of
Perturbed Samples
- Authors: Rahim Taheri
- Abstract summary: Botnet detection requires extremely low false-positive rates (FPR), which are not commonly attainable in contemporary deep learning.
In this paper, two LSTM-based algorithms for botnet classification, each with an accuracy higher than 98%, are presented.
- Score: 1.2691047660244335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A rising number of botnet families have been successfully detected using deep
learning architectures. As the variety of attacks increases, these architectures must become more robust against them, yet they have been proven to be very sensitive to small but well-constructed perturbations of the input.
Botnet detection requires extremely low false-positive rates (FPR), which are
not commonly attainable in contemporary deep learning. Attackers try to
increase the FPRs by making poisoned samples. The majority of recent research
has focused on the use of model loss functions to build adversarial examples
and robust models. In this paper, two LSTM-based classification algorithms for
botnet classification with an accuracy higher than 98% are presented. Then, an adversarial attack is proposed, which reduces the accuracy to about 30%.
Next, by examining methods for computing uncertainty, a defense method is proposed that raises the accuracy back to about 70%. Finally, the uncertainty of the proposed methods is quantified using deep ensemble and stochastic weight averaging.
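A minimal sketch of the uncertainty-aware pipeline the abstract describes: a deep ensemble of LSTM classifiers whose predictive entropy flags likely-perturbed flows. This is an illustration, not the authors' code; all names, layer sizes, and the rejection threshold are assumptions.

```python
# Illustrative sketch (not the paper's code): a deep ensemble of LSTM
# classifiers; the predictive entropy of the averaged softmax serves as
# the uncertainty score used to flag likely-perturbed inputs.
import torch
import torch.nn as nn

class LSTMBotnetClassifier(nn.Module):          # hypothetical name
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, seq, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # logits at last time step

@torch.no_grad()
def ensemble_predict(models, x):
    """Average softmax outputs across the ensemble; high entropy of the
    mean prediction indicates inputs the defense should reject or flag."""
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    mean_probs = probs.mean(dim=0)               # (batch, n_classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs.argmax(dim=-1), entropy

# Usage: each member would be trained independently on the same data.
models = [LSTMBotnetClassifier(n_features=10) for _ in range(5)]
x = torch.randn(4, 20, 10)                       # 4 flows, 20 steps, 10 features
preds, unc = ensemble_predict(models, x)
flagged = unc > 0.5                              # threshold: placeholder value
```

Stochastic weight averaging, the other quantification method named in the abstract, would instead average weights (or, in the SWAG variant, sample them) from the tail of a single training run rather than training separate models.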
Related papers
- Towards Robust IoT Defense: Comparative Statistics of Attack Detection in Resource-Constrained Scenarios [1.3812010983144802]
Resource constraints pose a significant cybersecurity threat to IoT smart devices.
We conduct an extensive statistical analysis of cyberattack detection algorithms under resource constraints to identify the most efficient one.
arXiv Detail & Related papers (2024-10-10T10:58:03Z)
- Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers [1.6001193161043425]
Botnets are computer networks controlled by malicious actors that present significant cybersecurity challenges.
This research addresses the sophisticated adversarial manipulations posed by attackers, aiming to undermine machine learning-based botnet detection systems.
We introduce a flow-based detection approach, leveraging machine learning and deep learning algorithms trained on the ISCX and ISOT datasets.
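For context, split conformal prediction, the generic technique behind the "conformal layers" in the title, can sit on top of any probabilistic detector; the score function and coverage level below are assumptions, not the paper's exact construction.

```python
# Split conformal prediction sketch: calibrate a score threshold so that
# prediction sets cover the true label with probability >= 1 - alpha.
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level)

def prediction_set(test_probs, q):
    # Every class whose score stays under the calibrated threshold is kept;
    # multi-class sets signal inputs the detector is unsure about.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```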
arXiv Detail & Related papers (2024-09-01T08:53:21Z)
- Wasserstein distributional robustness of neural networks [9.79503506460041]
Deep neural networks are known to be vulnerable to adversarial attacks (AA).
For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified.
We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions.
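In its generic form (a paraphrase, not necessarily the paper's exact statement), Wasserstein DRO replaces empirical risk minimization with a worst case over distributions within a Wasserstein ball around the empirical distribution:

```latex
% Generic Wasserstein DRO objective; \widehat{P}_n is the empirical
% distribution, \delta the ball radius, \ell the per-sample loss.
\min_{\theta} \; \sup_{Q \,:\, W_c(Q,\, \widehat{P}_n) \le \delta} \; \mathbb{E}_{Z \sim Q}\bigl[\ell(\theta, Z)\bigr]
```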
arXiv Detail & Related papers (2023-06-16T13:41:24Z)
- Improving Botnet Detection with Recurrent Neural Network and Transfer Learning [5.602292536933117]
Botnet detection is a critical step in stopping the spread of botnets and preventing malicious activities.
Recent approaches employing machine learning (ML) have shown improved performance over earlier ones.
We propose a novel botnet detection method, built upon a Recurrent Variational Autoencoder (RVAE).
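The general recipe behind such detectors: train a recurrent autoencoder on benign traffic and score flows by reconstruction error. The sketch below uses a deterministic GRU autoencoder as a simplified stand-in for the paper's RVAE; all sizes are assumptions.

```python
# Simplified stand-in for an RVAE detector: a deterministic GRU
# autoencoder whose reconstruction error serves as the anomaly score.
import torch
import torch.nn as nn

class RecurrentAutoencoder(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, seq, features)
        _, h = self.encoder(x)                 # h: (1, batch, hidden)
        # Repeat the summary vector at every step and decode it back.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

def anomaly_score(model, x):
    # High per-flow reconstruction error suggests traffic unlike the
    # benign flows the model was trained on.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2))

model = RecurrentAutoencoder(n_features=10)
scores = anomaly_score(model, torch.randn(4, 20, 10))  # one score per flow
```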
arXiv Detail & Related papers (2021-04-26T14:05:01Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
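Schematically (a paraphrase of the usual bit-flip attack formulation, not the paper's exact objective), the attack asks for the fewest bit flips in the stored weight bits that force the chosen sample into the target class while preserving other predictions:

```latex
% Paraphrased bit-flip attack shape: \hat{B} are the attacked weight bits,
% d_H the Hamming distance, k the flip budget, x_s the victim sample,
% t the target class, and L_aux preserves behavior on other samples.
\min_{\hat{B} \in \{0,1\}^{m}} \; \mathcal{L}\bigl(f_{\hat{B}}(x_s),\, t\bigr) + \lambda\, \mathcal{L}_{\mathrm{aux}}
\quad \text{s.t.} \quad d_H\bigl(\hat{B},\, B\bigr) \le k
```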
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
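One generic flavor of such monitoring (an illustration; the paper's coverage criteria may differ): learn per-neuron activation bounds on trusted data and flag inputs that drive activations outside them.

```python
# Range-coverage monitor: learn per-neuron activation bounds on trusted
# data, then flag inputs whose hidden activations leave those bounds.
import numpy as np

class RangeMonitor:
    def fit(self, activations):              # (n_samples, n_neurons)
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)
        return self

    def flag(self, activations, tol=0.0):
        out = (activations < self.lo - tol) | (activations > self.hi + tol)
        return out.any(axis=1)               # True -> treat input as unsafe
```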
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
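A bilevel poisoning attack has the generic shape below (a paraphrase, not the paper's exact objective): the outer level picks poison points, while the inner level is the victim's own training procedure.

```latex
% Generic bilevel data-poisoning objective: D is the clean training set,
% D_p the poison set, and L_adv the attacker's goal (here, degraded
% robustness guarantees) evaluated at the victim's trained parameters.
\max_{D_p} \; \mathcal{L}_{\mathrm{adv}}\bigl(\theta^{*}(D_p)\bigr)
\quad \text{s.t.} \quad \theta^{*}(D_p) \in \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}\bigl(\theta;\, D \cup D_p\bigr)
```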
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and low false-alarm rate.
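The recipe generalizes: a Gaussian-process surrogate proposes hyperparameters, the detector is retrained, and validation quality drives the next proposal. A sketch using scikit-optimize with an IsolationForest stand-in detector and synthetic flows; the search space and data here are assumptions, not the paper's setup.

```python
# Bayesian optimization of an anomaly detector's hyperparameters with a
# GP surrogate (scikit-optimize). Detector, data, and search space are
# illustrative stand-ins, not the paper's framework.
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                      # benign-only flows
X_val = np.vstack([rng.normal(size=(80, 8)),
                   rng.normal(3.0, 1.0, size=(20, 8))])  # 20 injected anomalies
y_val = np.array([0] * 80 + [1] * 20)

def objective(params):
    n_estimators, contamination = params
    det = IsolationForest(n_estimators=int(n_estimators),
                          contamination=contamination, random_state=0)
    det.fit(X_train)
    pred = det.predict(X_val) == -1                      # -1 means anomalous
    return 1.0 - f1_score(y_val, pred)                   # minimize 1 - F1

space = [Integer(50, 500), Real(0.01, 0.3)]
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best params:", result.x, "best F1:", 1.0 - result.fun)
```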
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Provably Robust Metric Learning [98.50580215125142]
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
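The object being learned is the standard Mahalanobis distance; M = I recovers the Euclidean baseline that, per the summary, the learned metric must beat in robustness:

```latex
% Mahalanobis distance with learned PSD matrix M; M = I gives Euclidean.
d_M(x, y) = \sqrt{(x - y)^{\top} M\, (x - y)}, \qquad M \succeq 0
```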
arXiv Detail & Related papers (2020-06-12T09:17:08Z)