Learning Near-Optimal Intrusion Responses Against Dynamic Attackers
- URL: http://arxiv.org/abs/2301.06085v2
- Date: Fri, 14 Apr 2023 10:44:29 GMT
- Title: Learning Near-Optimal Intrusion Responses Against Dynamic Attackers
- Authors: Kim Hammar and Rolf Stadler
- Abstract summary: We study automated intrusion response and formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain near-optimal defender strategies, we develop a fictitious self-play algorithm that learns Nash equilibria through stochastic approximation.
We argue that this approach can produce effective defender strategies for a practical IT infrastructure.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We study automated intrusion response and formulate the interaction between
an attacker and a defender as an optimal stopping game where attack and defense
strategies evolve through reinforcement learning and self-play. The
game-theoretic modeling enables us to find defender strategies that are
effective against a dynamic attacker, i.e. an attacker that adapts its strategy
in response to the defender strategy. Further, the optimal stopping formulation
allows us to prove that optimal strategies have threshold properties. To obtain
near-optimal defender strategies, we develop Threshold Fictitious Self-Play
(T-FP), a fictitious self-play algorithm that learns Nash equilibria through
stochastic approximation. We show that T-FP outperforms a state-of-the-art
algorithm for our use case. The experimental part of this investigation
includes two systems: a simulation system where defender strategies are
incrementally learned and an emulation system where statistics are collected
that drive simulation runs and where learned strategies are evaluated. We argue
that this approach can produce effective defender strategies for a practical IT
infrastructure.
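To make the abstract's approach concrete, the sketch below illustrates the general structure of threshold fictitious self-play on a toy stopping game. This is not the authors' implementation: the game dynamics, payoff function, belief update, and scalar strategy parameterizations (a defender stopping threshold and an attacker intrusion-start probability) are illustrative assumptions; grid search stands in for the reinforcement-learning best-response step.

```python
"""
Minimal sketch of the T-FP idea: each player best-responds to the other's
average strategy, and the averages are updated with a stochastic-approximation
step.  All dynamics and payoffs below are toy assumptions for illustration.
"""
import random


def simulate_episode(defender_threshold, attacker_start_prob, horizon=20):
    """Toy stopping game: the attacker starts an intrusion at a random step;
    the defender stops (responds) once its belief exceeds a threshold.
    Returns the defender's payoff (zero-sum: the attacker gets the negative)."""
    intrusion_started = False
    belief = 0.0
    for _ in range(horizon):
        if not intrusion_started and random.random() < attacker_start_prob:
            intrusion_started = True
        # Noisy alert signal: higher on average during an intrusion.
        alert = random.gauss(1.0 if intrusion_started else 0.2, 0.3)
        belief = 0.8 * belief + 0.2 * max(alert, 0.0)  # crude belief update
        if belief >= defender_threshold:               # defender stops here
            return 1.0 if intrusion_started else -1.0  # detection / false alarm
    return -1.0 if intrusion_started else 0.0          # missed intrusion


def payoff(defender_threshold, attacker_start_prob, episodes=200):
    return sum(simulate_episode(defender_threshold, attacker_start_prob)
               for _ in range(episodes)) / episodes


def best_response_defender(attacker_start_prob):
    # Grid search over thresholds stands in for reinforcement learning.
    grid = [i / 10 for i in range(1, 20)]
    return max(grid, key=lambda th: payoff(th, attacker_start_prob))


def best_response_attacker(defender_threshold):
    grid = [i / 20 for i in range(1, 20)]
    return min(grid, key=lambda p: payoff(defender_threshold, p))


# Fictitious self-play loop: best-respond to the opponent's average strategy,
# then move the average toward the best response with a 1/k step size.
avg_threshold, avg_start_prob = 1.0, 0.5
for k in range(1, 31):
    br_def = best_response_defender(avg_start_prob)
    br_att = best_response_attacker(avg_threshold)
    avg_threshold += (br_def - avg_threshold) / k
    avg_start_prob += (br_att - avg_start_prob) / k

print(f"approx. equilibrium threshold={avg_threshold:.2f}, "
      f"attacker start prob={avg_start_prob:.2f}")
```

In this simplified setting the defender strategy is fully described by a single threshold, which mirrors the paper's result that optimal stopping strategies have threshold properties; in the paper itself, best responses are learned with reinforcement learning against the opponent's average strategy rather than enumerated.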
Related papers
- Optimizing Cyber Defense in Dynamic Active Directories through Reinforcement Learning [10.601458163651582]
This paper addresses the absence of effective edge-blocking ACO strategies in dynamic, real-world networks.
It specifically targets the cybersecurity vulnerabilities of organizational Active Directory (AD) systems.
Unlike the existing literature on edge-blocking defenses which considers AD systems as static entities, our study counters this by recognizing their dynamic nature.
arXiv Detail & Related papers (2024-06-28T01:37:46Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Optimal Attack and Defense for Reinforcement Learning [11.36770403327493]
In adversarial RL, an external attacker has the power to manipulate the victim agent's interaction with the environment.
We study the attacker's problem of designing a stealthy attack that maximizes its own expected reward.
We argue that the optimal defense policy for the victim can be computed as the solution to a Stackelberg game.
arXiv Detail & Related papers (2023-11-30T21:21:47Z) - Scalable Learning of Intrusion Responses through Recursive Decomposition [0.0]
We study automated intrusion response for an IT infrastructure and the interaction between an attacker and a defender as a partially observed game.
To solve the game we follow an approach where attack and defense strategies co-evolve through reinforcement learning and self-play toward an equilibrium.
We introduce an algorithm called Decompositional Fictitious Self-Play (DFSP), which learns equilibria through stochastic approximation.
arXiv Detail & Related papers (2023-09-06T18:12:07Z) - A Multi-objective Memetic Algorithm for Auto Adversarial Attack
Optimization Design [1.9100854225243937]
Well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples.
Given a defended model, efficient adversarial attacks with lower computational cost that further reduce robust accuracy still need to be explored.
We propose a multi-objective memetic algorithm for auto adversarial attack optimization design, which automatically searches for near-optimal adversarial attacks against defended models.
arXiv Detail & Related papers (2022-08-15T03:03:05Z) - Learning Security Strategies through Game Play and Optimal Stopping [0.0]
We study automated intrusion prevention using reinforcement learning.
We formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain the optimal defender strategies, we introduce T-FP, a fictitious self-play algorithm.
arXiv Detail & Related papers (2022-05-29T15:30:00Z) - LAS-AT: Adversarial Training with Learnable Attack Strategy [82.88724890186094]
"Learnable attack strategy", dubbed LAS-AT, learns to automatically produce attack strategies to improve the model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
arXiv Detail & Related papers (2022-03-13T10:21:26Z) - Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the
Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
arXiv Detail & Related papers (2021-02-10T04:23:23Z)