Unsupervised Anomaly Detectors to Detect Intrusions in the Current
Threat Landscape
- URL: http://arxiv.org/abs/2012.11354v1
- Date: Mon, 21 Dec 2020 14:06:58 GMT
- Title: Unsupervised Anomaly Detectors to Detect Intrusions in the Current
Threat Landscape
- Authors: Tommaso Zoppi, Andrea Ceccarelli, Tommaso Capecchi, Andrea Bondavalli
- Abstract summary: We show that Isolation Forests, One-Class Support Vector Machines and Self-Organizing Maps are more effective than their counterparts for intrusion detection.
We detail how attacks with unstable, distributed or non-repeatable behavior, such as Fuzzing, Worms and Botnets, are more difficult to detect.
- Score: 0.11470070927586014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly detection aims at identifying unexpected fluctuations in the expected
behavior of a given system. It is acknowledged as a reliable answer to the
identification of zero-day attacks, to the extent that several ML algorithms
suited to binary classification have been proposed over the years. However,
an experimental comparison of a wide pool of unsupervised algorithms for
anomaly-based intrusion detection against a comprehensive set of attack
datasets had not yet been conducted. To fill this gap, we exercise seventeen
unsupervised anomaly detection algorithms on eleven attack datasets. The results
allow elaborating on a wide range of arguments, from the behavior of
individual algorithms to the suitability of the datasets for anomaly detection.
We conclude that algorithms such as Isolation Forests, One-Class Support Vector
Machines and Self-Organizing Maps are more effective than their counterparts
for intrusion detection, while clustering algorithms represent a good
alternative due to their low computational complexity. Further, we detail how
attacks with unstable, distributed or non-repeatable behavior, such as Fuzzing,
Worms and Botnets, are more difficult to detect. Ultimately, we discuss the
capabilities of the algorithms in detecting anomalies generated by a wide pool
of unknown attacks, showing that the metric scores achieved do not vary with
respect to those obtained when identifying individual attacks.
Related papers
- Secure Hierarchical Federated Learning in Vehicular Networks Using Dynamic Client Selection and Anomaly Detection [10.177917426690701]
Hierarchical Federated Learning (HFL) faces the challenge of adversarial or unreliable vehicles in vehicular networks.
Our study introduces a novel framework that integrates dynamic vehicle selection and robust anomaly detection mechanisms.
Our proposed algorithm demonstrates remarkable resilience even under intense attack conditions.
arXiv Detail & Related papers (2024-05-25T18:31:20Z)
- Bagged Regularized $k$-Distances for Anomaly Detection [9.899763598214122]
We propose a new distance-based algorithm called bagged regularized $k$-distances for anomaly detection (BRDAD)
Our BRDAD algorithm selects the weights by minimizing the surrogate risk, i.e., the finite sample bound of the empirical risk of the bagged weighted $k$-distances for density estimation (BWDDE)
On the theoretical side, we establish fast convergence rates of the AUC regret of our algorithm and demonstrate that the bagging technique significantly reduces the computational complexity.
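BRDAD builds on the classical $k$-distance anomaly score, i.e. the distance from a point to its $k$-th nearest neighbor. The sketch below shows only that plain building block, not BRDAD itself (no bagging, no learned weights), with `k` and the data chosen for illustration.

```python
import numpy as np

def knn_distance_scores(X, k=5):
    """Plain k-distance anomaly score: distance to the k-th nearest neighbor.
    This is the classical building block that BRDAD extends with bagging and
    risk-minimizing weights; it is not the BRDAD algorithm itself."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))     # pairwise Euclidean distances
    np.fill_diagonal(dists, np.inf)           # exclude self-distance
    return np.sort(dists, axis=1)[:, k - 1]   # k-th nearest-neighbor distance

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), [[8.0, 8.0]]])  # one clear outlier
scores = knn_distance_scores(X, k=5)
print(int(np.argmax(scores)))  # the outlier (index 200) has the largest score
```

Points far from their neighbors get large scores, so ranking by this score yields an anomaly ranking; BRDAD's contribution is choosing the neighbor weights to minimize a surrogate risk rather than fixing a single $k$.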
arXiv Detail & Related papers (2023-12-02T07:00:46Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- TBDetector: Transformer-Based Detector for Advanced Persistent Threats with Provenance Graph [17.518551273453888]
We propose TBDetector, a transformer-based advanced persistent threat detection method for APT attack detection.
Provenance graphs provide rich historical information and a powerful ability to correlate historic attacks.
To evaluate the effectiveness of the proposed method, we have conducted experiments on five public datasets.
arXiv Detail & Related papers (2023-04-06T03:08:09Z)
- A Revealing Large-Scale Evaluation of Unsupervised Anomaly Detection Algorithms [0.0]
Anomaly detection has many applications ranging from bank-fraud detection and cyber-threat detection to equipment maintenance and health monitoring.
We extensively reviewed twelve of the most popular unsupervised anomaly detection methods.
arXiv Detail & Related papers (2022-04-21T00:17:12Z)
- Anomaly Rule Detection in Sequence Data [2.3757190901941736]
We present a new anomaly detection framework called DUOS that enables Discovery of Utility-aware Outlier Sequential rules from a set of sequences.
In this work, we incorporate both the anomalousness and the utility of a group, and then introduce the concept of the utility-aware outlier rule (UOSR).
arXiv Detail & Related papers (2021-11-29T23:52:31Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed, utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
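The idea of tuning an anomaly detector with Bayesian Optimization can be sketched in a few lines: fit a Gaussian-process surrogate over a hyperparameter, then repeatedly evaluate the candidate with the highest upper confidence bound. This is a generic illustration, not the paper's framework: synthetic data stands in for ISCX 2012, the tuned detector (Isolation Forest) and the single `contamination` hyperparameter are assumptions, and the acquisition rule is a simple UCB.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Synthetic labelled data stands in for ISCX 2012 (not reproduced here).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (300, 3)), rng.normal(5, 1, (30, 3))])
y = np.array([1] * 300 + [-1] * 30)         # -1 marks known attacks

def objective(c):
    """F1 on the labelled set for a given contamination hyperparameter."""
    pred = IsolationForest(contamination=float(c), random_state=0).fit(X).predict(X)
    tp = np.sum((pred == -1) & (y == -1))
    fp = np.sum((pred == -1) & (y == 1))
    fn = np.sum((pred == 1) & (y == -1))
    return 2 * tp / (2 * tp + fp + fn + 1e-9)

cands = np.linspace(0.01, 0.3, 60)          # search space for contamination
tried = list(rng.choice(cands, size=3, replace=False))
vals = [objective(c) for c in tried]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
for _ in range(5):                          # a few Bayesian-optimization steps
    gp.fit(np.array(tried)[:, None], vals)
    mu, sd = gp.predict(cands[:, None], return_std=True)
    nxt = cands[int(np.argmax(mu + 1.96 * sd))]   # upper-confidence-bound pick
    tried.append(nxt)
    vals.append(objective(nxt))
best = tried[int(np.argmax(vals))]
print(f"best contamination ~ {best:.3f}, F1 = {max(vals):.2f}")
```

The surrogate trades off exploiting candidates with a high predicted F1 against exploring regions of the hyperparameter space where the model is still uncertain.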
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.