Defending Active Directory by Combining Neural Network based Dynamic
Program and Evolutionary Diversity Optimisation
- URL: http://arxiv.org/abs/2204.03397v3
- Date: Wed, 4 Jan 2023 12:39:30 GMT
- Title: Defending Active Directory by Combining Neural Network based Dynamic
Program and Evolutionary Diversity Optimisation
- Authors: Diksha Goel, Max Ward, Aneta Neumann, Frank Neumann, Hung Nguyen,
Mingyu Guo
- Abstract summary: We study a Stackelberg game model between one attacker and one defender on an AD attack graph.
The attacker aims to maximize their chance of successfully reaching the destination before getting detected.
The defender's task is to block a constant number of edges to decrease the attacker's chance of success.
- Score: 14.326083603965278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active Directory (AD) is the default security management system for Windows
domain networks. We study a Stackelberg game model between one attacker and one
defender on an AD attack graph. The attacker initially has access to a set of
entry nodes. The attacker can expand this set by strategically exploring edges.
Every edge has a detection rate and a failure rate. The attacker aims to
maximize their chance of successfully reaching the destination before getting
detected. The defender's task is to block a constant number of edges to
decrease the attacker's chance of success. We show that the problem is #P-hard
and, therefore, intractable to solve exactly. We convert the attacker's problem
to an exponentially sized Dynamic Program that is approximated by a Neural
Network (NN). Once trained, the NN provides an efficient fitness function for
the defender's Evolutionary Diversity Optimisation (EDO). The diversity
emphasis on the defender's solution provides a diverse set of training samples,
which improves the training accuracy of our NN for modelling the attacker. We
go back and forth between NN training and EDO. Experimental results show that
for the R500 graph, our proposed EDO-based defense is less than 1% away from the
optimal defense.
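The abstract describes an alternating scheme: a neural network approximates the attacker's exponentially sized dynamic program and then serves as the fitness function for the defender's Evolutionary Diversity Optimisation. The sketch below is a minimal, assumption-laden illustration of that loop, not the authors' implementation; the network architecture, the edge-frequency diversity measure, and names such as AttackerValueNet, diversity_contribution, and edo_step are hypothetical.
```python
# Illustrative sketch: NN surrogate for the attacker's DP drives the defender's EDO.
import random
import torch
import torch.nn as nn

class AttackerValueNet(nn.Module):
    """Maps a defender blocking plan (0/1 mask over edges) to an estimate of the
    attacker's success probability, standing in for the exponentially sized DP."""
    def __init__(self, num_edges: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_edges, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),  # output in [0, 1]
        )

    def forward(self, blocked_mask: torch.Tensor) -> torch.Tensor:
        return self.net(blocked_mask)

def fitness(net: AttackerValueNet, plan: list, num_edges: int) -> float:
    """Defender fitness: estimated attacker success under this plan (lower is better)."""
    mask = torch.zeros(num_edges)
    mask[plan] = 1.0
    with torch.no_grad():
        return net(mask).item()

def diversity_contribution(plan: list, population: list) -> float:
    """Edge-frequency diversity score (illustrative): plans that block rarely
    used edges contribute more diversity to the population."""
    counts = {}
    for p in population:
        for e in p:
            counts[e] = counts.get(e, 0) + 1
    return sum(1.0 / counts[e] for e in plan)

def edo_step(net, population, num_edges, quality_threshold):
    """One EDO iteration: mutate a parent; if the child keeps the attacker's
    estimated success below the threshold, keep it and drop the least diverse plan."""
    parent = random.choice(population)
    child = parent.copy()
    child[random.randrange(len(child))] = random.randrange(num_edges)
    if fitness(net, child, num_edges) <= quality_threshold:
        population.append(child)
        population.remove(min(population, key=lambda p: diversity_contribution(p, population)))
    return population

# Outer loop ("back and forth" between the two components, as in the abstract):
#   1. run EDO with the current NN as fitness to obtain a diverse set of blocking plans;
#   2. label those plans (e.g. by evaluating/simulating the attacker) and retrain the NN;
#   3. repeat: diverse plans improve NN accuracy, and the NN sharpens the EDO fitness.
```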
Related papers
- Optimizing Cyber Response Time on Temporal Active Directory Networks Using Decoys [4.2671394819888455]
We study the problem of placing decoys in a Microsoft Active Directory (AD) network to detect potential attacks.
We propose a novel metric, response time, to measure the effectiveness of our decoy placement in temporal attack graphs.
Our goal is to maximize the defender's response time to the worst-case attack paths.
arXiv Detail & Related papers (2024-03-27T00:05:48Z) - Adversarial Deep Reinforcement Learning for Cyber Security in Software
Defined Networks [0.0]
This paper focuses on the impact of leveraging autonomous offensive approaches in Deep Reinforcement Learning (DRL) to train more robust agents.
Two algorithms, Double Deep Q-Networks (DDQN) and Neural Episodic Control to Deep Q-Network (NEC2DQN or N2D), are compared.
arXiv Detail & Related papers (2023-08-09T12:16:10Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Evolving Reinforcement Learning Environment to Minimize Learner's
Achievable Reward: An Application on Hardening Active Directory Systems [15.36968083280611]
We apply Evolutionary Diversity Optimization to generate a diverse population of environments for training.
We demonstrate the effectiveness of our approach by focusing on a specific application, Active Directory.
arXiv Detail & Related papers (2023-04-08T12:39:40Z) - Are Defenses for Graph Neural Networks Robust? [72.1389952286628]
We show that most Graph Neural Network (GNN) defenses provide no or only marginal improvement compared to an undefended baseline.
We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks.
Our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
arXiv Detail & Related papers (2023-01-31T15:11:48Z) - On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z) - Game Theory for Adversarial Attacks and Defenses [0.0]
Adversarial attacks generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Some adversarial defense techniques have been developed to improve the security and robustness of models and prevent them from being attacked.
arXiv Detail & Related papers (2021-10-08T07:38:33Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in
Graph-based Attack and Defense [3.3504365823045035]
Graph Neural Networks (GNNs) have received significant attention due to their state-of-the-art performance on various graph representation learning tasks.
Recent studies reveal that GNNs are vulnerable to adversarial attacks, i.e. an attacker is able to fool the GNNs by perturbing the graph structure or node features deliberately.
Most existing attacking algorithms require access to either the model parameters or the training data, which is not practical in the real world.
arXiv Detail & Related papers (2021-04-30T15:30:47Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)