Co-Evolutionary Defence of Active Directory Attack Graphs via GNN-Approximated Dynamic Programming
- URL: http://arxiv.org/abs/2505.11710v1
- Date: Fri, 16 May 2025 21:37:50 GMT
- Title: Co-Evolutionary Defence of Active Directory Attack Graphs via GNN-Approximated Dynamic Programming
- Authors: Diksha Goel, Hussain Ahmad, Kristen Moore, Mingyu Guo
- Abstract summary: We model attacker-defender interactions in Active Directory as a Stackelberg game between an adaptive attacker and a proactive defender. We propose a co-evolutionary defense framework that combines Graph Neural Network Approximated Dynamic Programming (GNNDP), to model attacker strategies, with Evolutionary Diversity Optimization (EDO), to generate resilient blocking strategies. Our framework jointly refines attacker and defender policies to improve generalization and prevent premature convergence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern enterprise networks increasingly rely on Active Directory (AD) for identity and access management. However, this centralization exposes a single point of failure, allowing adversaries to compromise high-value assets. Existing AD defense approaches often assume static attacker behavior, but real-world adversaries adapt dynamically, rendering such methods brittle. To address this, we model attacker-defender interactions in AD as a Stackelberg game between an adaptive attacker and a proactive defender. We propose a co-evolutionary defense framework that combines Graph Neural Network Approximated Dynamic Programming (GNNDP) to model attacker strategies, with Evolutionary Diversity Optimization (EDO) to generate resilient blocking strategies. To ensure scalability, we introduce a Fixed-Parameter Tractable (FPT) graph reduction method that reduces complexity while preserving strategic structure. Our framework jointly refines attacker and defender policies to improve generalization and prevent premature convergence. Experiments on synthetic AD graphs show near-optimal results (within 0.1 percent of optimality on r500) and improved performance on larger graphs (r1000 and r2000), demonstrating the framework's scalability and effectiveness.
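The co-evolutionary loop described in the abstract can be sketched on a toy graph. Everything below is illustrative, not the paper's implementation: the graph and node names are invented, exact dynamic programming stands in for the GNN approximation, and a simple mutation scheme stands in for Evolutionary Diversity Optimization.

```python
import random

# Toy Active Directory attack graph (all names illustrative): nodes are
# accounts/machines, directed edges are lateral-movement steps, and "DA"
# is the high-value Domain Admin target the defender protects.
EDGES = [("u1", "m1"), ("u2", "m1"), ("u2", "m2"), ("m1", "s1"),
         ("m2", "s1"), ("m2", "s2"), ("s1", "DA"), ("s2", "DA")]
ENTRIES = ["u1", "u2"]   # attacker entry points
TARGET = "DA"
BUDGET = 2               # defender may block at most this many edges

def attacker_value(blocked):
    """Exact dynamic programming for the attacker's best response on this
    tiny DAG: count how many entry points can still reach the target.
    (The paper approximates this step with a GNN, since exact DP does not
    scale to large AD graphs.)"""
    succ = {}
    for u, v in EDGES:
        if (u, v) not in blocked:
            succ.setdefault(u, []).append(v)
    memo = {}
    def reachable(n):
        if n == TARGET:
            return True
        if n not in memo:
            memo[n] = False  # cycle guard; this toy graph is acyclic
            memo[n] = any(reachable(s) for s in succ.get(n, []))
        return memo[n]
    return sum(reachable(e) for e in ENTRIES)

def evolve_defence(pop_size=20, gens=50, seed=0):
    """Simple mutation-based evolutionary search over edge-blocking sets,
    a stand-in for the paper's Evolutionary Diversity Optimization step."""
    rng = random.Random(seed)
    pop = [tuple(rng.sample(EDGES, BUDGET)) for _ in range(pop_size)]
    for _ in range(gens):
        child = list(rng.choice(pop))
        child[rng.randrange(BUDGET)] = rng.choice(EDGES)  # mutate one edge
        pop.append(tuple(child))
        # keep the blockings that the best-responding attacker likes least
        pop.sort(key=lambda b: attacker_value(frozenset(b)))
        pop = pop[:pop_size]
    return pop[0]

if __name__ == "__main__":
    best = evolve_defence()
    print("blocked:", best,
          "-> entries still reaching DA:", attacker_value(frozenset(best)))
```

The inner `attacker_value` call is the co-evolutionary coupling: each candidate defence is scored against an attacker that re-optimizes from scratch, so the defender population is pushed toward blockings that are robust to adaptive, not static, attack behaviour.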
Related papers
- CyGATE: Game-Theoretic Cyber Attack-Defense Engine for Patch Strategy Optimization [73.13843039509386]
This paper presents CyGATE, a game-theoretic framework modeling attacker-defender interactions. CyGATE frames cyber conflicts as a partially observable stochastic game (POSG) across Cyber Kill Chain stages. The framework's flexible architecture enables extension to multi-agent scenarios.
arXiv Detail & Related papers (2025-08-01T09:53:06Z) - EvA: Evolutionary Attacks on Graphs [50.13398588415462]
Even a slight perturbation of the graph structure can cause a significant drop in the accuracy of graph neural networks (GNNs). We introduce a few simple yet effective enhancements of an evolutionary-based algorithm to solve the discrete optimization problem directly. In our experiments, EvA shows a ~11% additional drop in accuracy on average compared to the best previous attack.
arXiv Detail & Related papers (2025-07-10T22:50:58Z) - Adversarial Training in Low-Label Regimes with Margin-Based Interpolation [8.585017175426023]
Adversarial training has emerged as an effective approach to train robust neural network models that are resistant to adversarial attacks. In this paper, we introduce a novel semi-supervised adversarial training approach that enhances both robustness and natural accuracy.
arXiv Detail & Related papers (2024-11-27T00:35:13Z) - Optimizing Cyber Defense in Dynamic Active Directories through Reinforcement Learning [10.601458163651582]
This paper addresses the absence of effective edge-blocking ACO strategies in dynamic, real-world networks.
It specifically targets the cybersecurity vulnerabilities of organizational Active Directory (AD) systems.
Unlike the existing literature on edge-blocking defenses, which treats AD systems as static entities, our study recognizes their dynamic nature.
arXiv Detail & Related papers (2024-06-28T01:37:46Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Learn from the Past: A Proxy Guided Adversarial Defense Framework with
Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, 'LAST' (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - A Multi-objective Memetic Algorithm for Auto Adversarial Attack
Optimization Design [1.9100854225243937]
Well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples.
Given a defended model, efficient adversarial attacks with lower computational burden and lower robust accuracy still need to be explored.
We propose a multi-objective memetic algorithm for auto adversarial attack optimization design, which realizes an automatic search for near-optimal adversarial attacks against defended models.
arXiv Detail & Related papers (2022-08-15T03:03:05Z) - LAS-AT: Adversarial Training with Learnable Attack Strategy [82.88724890186094]
"Learnable Attack Strategy" (LAS-AT) learns to automatically produce attack strategies to improve model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
arXiv Detail & Related papers (2022-03-13T10:21:26Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Dynamic Defense Approach for Adversarial Robustness in Deep Neural
Networks via Stochastic Ensemble Smoothed Model [12.858728363390703]
This paper builds upon ensemble smoothing, a defense method combining random smoothing and model ensembling.
It handles the extreme transferability and vulnerability of ensemble models under white-box attacks.
arXiv Detail & Related papers (2021-05-06T16:48:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.