Finding Effective Security Strategies through Reinforcement Learning and
Self-Play
- URL: http://arxiv.org/abs/2009.08120v2
- Date: Sun, 4 Oct 2020 16:22:54 GMT
- Title: Finding Effective Security Strategies through Reinforcement Learning and
Self-Play
- Authors: Kim Hammar and Rolf Stadler
- Abstract summary: We show that effective security strategies can emerge from self-play.
We address known challenges of reinforcement learning in this domain.
Our method is superior to two baseline methods, but policy convergence in self-play remains a challenge.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method to automatically find security strategies for the use
case of intrusion prevention. Following this method, we model the interaction
between an attacker and a defender as a Markov game and let attack and defense
strategies evolve through reinforcement learning and self-play without human
intervention. Using a simple infrastructure configuration, we demonstrate that
effective security strategies can emerge from self-play. This shows that
self-play, which has been applied in other domains with great success, can be
effective in the context of network security. Inspection of the converged
policies shows that the emerged policies reflect common-sense knowledge and are
similar to strategies of humans. Moreover, we address known challenges of
reinforcement learning in this domain and present an approach that uses
function approximation, an opponent pool, and an autoregressive policy
representation. Through evaluations we show that our method is superior to two
baseline methods but that policy convergence in self-play remains a challenge.
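To make the setup concrete, below is a minimal sketch of self-play with an opponent pool. Everything in it (the toy Markov game, the tabular policy, the snapshot schedule) is a placeholder assumption, not the paper's implementation, which uses function approximation and an autoregressive policy representation.

```python
import copy
import random

class TabularPolicy:
    """Hypothetical stand-in for a learned policy (the paper uses function
    approximation and an autoregressive representation instead)."""
    def __init__(self, n_states, n_actions):
        self.n_actions = n_actions
        self.probs = [[1.0 / n_actions] * n_actions for _ in range(n_states)]

    def act(self, state):
        return random.choices(range(self.n_actions), weights=self.probs[state])[0]

def play_episode(attacker, defender, horizon=10):
    """Stand-in for one episode of the Markov game; returns the defender's return."""
    state, ret = 0, 0.0
    for _ in range(horizon):
        a_att, a_def = attacker.act(state), defender.act(state)
        ret += 1.0 if a_def == a_att else -1.0   # toy zero-sum reward
        state = (state + a_att) % 4              # toy transition dynamics
    return ret

def self_play(episodes=1000, snapshot_every=100):
    attacker = TabularPolicy(4, 2)
    defender = TabularPolicy(4, 2)
    pool = [copy.deepcopy(attacker)]             # opponent pool: past snapshots
    for ep in range(episodes):
        opponent = random.choice(pool)           # sample a past opponent, not just
        ret = play_episode(opponent, defender)   # the latest one, to reduce cycling
        # ... an RL update of `defender` (e.g., policy gradient on `ret`) goes here ...
        if (ep + 1) % snapshot_every == 0:
            pool.append(copy.deepcopy(attacker))
    return defender
```

Sampling opponents from a pool of past snapshots rather than always playing the latest policy is a common way to dampen the cycling that makes convergence in self-play hard, which is the challenge the abstract notes.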
Related papers
- Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks [0.0]
Adversarial attacks pose significant threats to the robustness of deep learning models in image classification.
This paper explores and refines defense mechanisms against these attacks to enhance the resilience of neural networks.
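For background on the attacks named in the title, a standard FGSM step and its iterated PGD variant look roughly as follows. This is textbook attack code, not the paper's defense implementation.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM: perturb x by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """PGD: iterated FGSM steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, eps=alpha)    # small step of size alpha
        x_adv = x + (x_adv - x).clamp(-eps, eps)    # project onto the eps-ball
    return x_adv.clamp(0, 1)
```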
arXiv Detail & Related papers (2024-08-20T02:00:02Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
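A schematic of the described detection idea, with the captioning, text-to-image, and embedding functions left as hypothetical callables (the threshold would be calibrated on clean data):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def looks_adversarial(image, vlm_caption, t2i_generate, embed, threshold=0.7):
    """Flag `image` if it disagrees, in embedding space, with an image
    regenerated from the target VLM's own caption of it."""
    caption = vlm_caption(image)        # hypothetical: target VLM captioner
    mirrored = t2i_generate(caption)    # hypothetical: text-to-image model
    return cosine(embed(image), embed(mirrored)) < threshold
```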
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
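The summary gives little mechanism, so the following is only one plausible reading of a cross-modal attack, with the image encoder and clean-caption embedding assumed as inputs:

```python
import torch
import torch.nn.functional as F

def cross_modal_attack(img_encoder, txt_emb_clean, x, steps=20, lr=0.01, eps=8/255):
    """Perturb the image so its embedding drifts away from the clean caption's
    embedding; the encoder, embedding, and budget are illustrative assumptions."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = img_encoder((x + delta).clamp(0, 1))
        loss = F.cosine_similarity(emb, txt_emb_clean, dim=-1).mean()
        opt.zero_grad()
        loss.backward()                  # minimizing similarity pushes modalities apart
        opt.step()
        delta.data.clamp_(-eps, eps)     # keep the perturbation small
    return (x + delta).clamp(0, 1).detach()
```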
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Learning Near-Optimal Intrusion Responses Against Dynamic Attackers [0.0]
We study automated intrusion response and formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain near-optimal defender strategies, we develop a fictitious self-play algorithm that learns Nash equilibria through stochastic approximation.
We argue that this approach can produce effective defender strategies for a practical IT infrastructure.
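As a reference point for this algorithm family, textbook fictitious play on a zero-sum matrix game converges (in empirical action frequencies) to a Nash equilibrium. The paper's algorithm operates on a stopping game rather than a matrix game, so this is an illustration, not its implementation.

```python
import numpy as np

def fictitious_play(A, iters=5000):
    """Fictitious play on a zero-sum matrix game (row player maximizes x.T @ A @ y):
    each player best-responds to the opponent's empirical action frequencies."""
    n, m = A.shape
    row_counts, col_counts = np.ones(n), np.ones(m)
    for _ in range(iters):
        y = col_counts / col_counts.sum()        # opponent's average strategy
        x = row_counts / row_counts.sum()
        row_counts[np.argmax(A @ y)] += 1        # row best response
        col_counts[np.argmin(x @ A)] += 1        # column best response
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# matching pennies: both empirical strategies approach the equilibrium (0.5, 0.5)
x, y = fictitious_play(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```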
arXiv Detail & Related papers (2023-01-11T16:36:24Z)
- Learning Security Strategies through Game Play and Optimal Stopping [0.0]
We study automated intrusion prevention using reinforcement learning.
We formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain the optimal defender strategies, we introduce T-FP, a fictitious self-play algorithm.
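A toy illustration of what an optimal-stopping defender looks like: maintain a belief that an intrusion is underway and stop (e.g., block traffic) once it crosses a threshold. The observation model and threshold below are made up; T-FP learns such strategies rather than hard-coding them.

```python
def update_belief(belief, obs, p_obs_intrusion, p_obs_normal):
    """Bayes update of the probability that an intrusion is underway (toy model)."""
    num = belief * p_obs_intrusion[obs]
    return num / (num + (1 - belief) * p_obs_normal[obs])

def defender_action(belief, threshold=0.8):
    """Stop once the intrusion belief crosses the threshold, else keep monitoring."""
    return "stop" if belief >= threshold else "continue"

# toy run: alerts ("a") are likelier during an intrusion than normal ops ("n")
p_int, p_nrm = {"a": 0.7, "n": 0.3}, {"a": 0.1, "n": 0.9}
belief = 0.05
for obs in ["n", "a", "a", "a"]:
    belief = update_belief(belief, obs, p_int, p_nrm)
    print(obs, round(belief, 3), defender_action(belief))
```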
arXiv Detail & Related papers (2022-05-29T15:30:00Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
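A sketch of the general idea of a learned attack optimizer; the architecture and interface below are assumptions for illustration, not MAMA's exact design.

```python
import torch
import torch.nn as nn

class LearnedAttackOptimizer(nn.Module):
    """An RNN that maps loss gradients to update directions, replacing the
    fixed update rule of hand-crafted attacks."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.GRUCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, h):
        flat = grad.reshape(-1, 1)                   # one scalar per coordinate
        h = self.cell(flat, h)
        return self.head(h).reshape(grad.shape), h   # proposed update, new state

# usage idea: replace the fixed `grad.sign()` step of FGSM/PGD with this output,
# starting from h = torch.zeros(grad.numel(), 32)
```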
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Multi-Task Federated Reinforcement Learning with Adversaries [2.6080102941802106]
Reinforcement learning algorithms face a serious threat from adversaries.
In this paper, we analyze Multi-task Federated Reinforcement Learning algorithms.
We propose an adaptive attack method with better attack performance.
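The summary does not specify the attack, so the snippet below shows a generic adaptive model-poisoning baseline from the federated learning literature, purely as an illustration; the omniscient-attacker assumption and scaling rule are not from this paper.

```python
import numpy as np

def adaptive_poison(honest_updates, scale=5.0):
    """Generic adaptive model poisoning (illustration only): the adversarial
    client submits a scaled update opposite to the average honest direction.
    Assumes an omniscient attacker that can observe the honest updates."""
    return -scale * np.mean(np.asarray(honest_updates), axis=0)
```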
arXiv Detail & Related papers (2021-03-11T05:39:52Z)
- Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
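The paper learns an attention-based aggregation model; the similarity-weighted average below is only a stand-in that conveys the shape of an attack-adaptive aggregator.

```python
import numpy as np

def attack_adaptive_aggregate(updates):
    """Down-weight client updates far from the coordinate-wise median, then
    average; a stand-in for a learned attack-adaptive aggregator."""
    updates = np.asarray(updates)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    weights = np.exp(-dists / (dists.mean() + 1e-8))  # suspicious updates get low weight
    weights /= weights.sum()
    return (weights[:, None] * updates).sum(axis=0)
```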
arXiv Detail & Related papers (2021-02-10T04:23:23Z)
- Learning Goal-oriented Dialogue Policy with Opposite Agent Awareness [116.804536884437]
We propose an opposite behavior aware framework for policy learning in goal-oriented dialogues.
We estimate the opposite agent's policy from its behavior and use this estimation to improve the target agent by regarding it as part of the target policy.
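A minimal stand-in for the described scheme: estimate the opposite agent's policy from observed actions (here by smoothed counting, where the paper learns an estimator) and feed the estimate to the target policy as extra input.

```python
import numpy as np

class OpponentModel:
    """Estimate the opposite agent's policy from observed behavior via smoothed
    action counts; a counting stand-in for the paper's learned estimator."""
    def __init__(self, n_states, n_actions):
        self.counts = np.ones((n_states, n_actions))   # Laplace smoothing

    def observe(self, state, opponent_action):
        self.counts[state, opponent_action] += 1

    def estimate(self, state):
        return self.counts[state] / self.counts[state].sum()

def augmented_input(state_features, opponent_model, state):
    # the target policy conditions on its observation plus the opponent estimate
    return np.concatenate([state_features, opponent_model.estimate(state)])
```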
arXiv Detail & Related papers (2020-04-21T03:13:44Z)
- Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension [96.62963688510035]
Reading comprehension models often overfit to nuances of training datasets and fail at adversarial evaluation.
We present several effective adversaries and automated data augmentation policy search methods with the goal of making reading comprehension models more robust to adversarial evaluation.
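A sketch of the search loop implied by the summary; the candidate operations, training function, and adversarial evaluation function are hypothetical placeholders.

```python
import random

def search_augmentation_policy(candidate_ops, train_fn, adv_eval_fn, trials=20):
    """Random search over adversarial augmentation policies: retrain with each
    sampled policy and keep the one scoring best under adversarial evaluation."""
    best_policy, best_score = None, float("-inf")
    for _ in range(trials):
        policy = random.sample(candidate_ops, k=min(2, len(candidate_ops)))
        model = train_fn(policy)          # retrain with the augmented data
        score = adv_eval_fn(model)        # accuracy under adversarial evaluation
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy
```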
arXiv Detail & Related papers (2020-04-13T17:20:08Z)