Mixed Strategies for Security Games with General Defending Requirements
- URL: http://arxiv.org/abs/2204.12158v1
- Date: Tue, 26 Apr 2022 08:56:39 GMT
- Title: Mixed Strategies for Security Games with General Defending Requirements
- Authors: Rufan Bai, Haoxing Lin, Xinyu Yang, Xiaowei Wu, Minming Li, Weijia Jia
- Abstract summary: The Stackelberg security game is played between a defender and an attacker, where the defender needs to allocate a limited amount of resources to multiple targets.
We propose an efficient close-to-optimal Patching algorithm that computes mixed strategies that use only a few pure strategies.
- Score: 37.02840909260615
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Stackelberg security game is played between a defender and an attacker,
where the defender needs to allocate a limited amount of resources to multiple
targets in order to minimize the loss due to adversarial attack by the
attacker. While allowing targets to have different values, classic settings
often assume uniform requirements to defend the targets. This enables existing
results that study mixed strategies (randomized allocation algorithms) to adopt
a compact representation of the mixed strategies.
In this work, we initiate the study of mixed strategies for the security
games in which the targets can have different defending requirements. In
contrast to the case of uniform defending requirement, for which an optimal
mixed strategy can be computed efficiently, we show that computing the optimal
mixed strategy is NP-hard for the general defending requirements setting.
However, we show that strong upper and lower bounds for the optimal mixed
strategy defending result can be derived. We propose an efficient
close-to-optimal Patching algorithm that computes mixed strategies that use
only a few pure strategies. We also study the setting where the game is played on
a network and resource sharing is enabled between neighboring targets. Our
experimental results demonstrate the effectiveness of our algorithm on several
large real-world datasets.
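The abstract describes the model only at a high level; the sketch below is a minimal illustration under assumed semantics, not the paper's actual formulation. It assumes a pure strategy is a subset of targets whose total defending requirement fits within the resource budget, a mixed strategy is a probability distribution over such subsets, and the attacker best-responds by attacking the target with the largest uncovered-probability-times-value. The target names, values, requirements, and the hand-picked mixed strategy are hypothetical, and the Patching algorithm itself is not reproduced here.

```python
import itertools
from typing import Dict, List, Tuple

# Sketch of a security game with general defending requirements (assumed
# semantics, not the paper's exact model): a target is defended in a pure
# strategy only if its full requirement is covered within the budget.

def enumerate_pure_strategies(requirements: Dict[str, float],
                              budget: float) -> List[Tuple[str, ...]]:
    """All subsets of targets whose total requirement fits the resource budget."""
    targets = list(requirements)
    feasible = []
    for r in range(len(targets) + 1):
        for subset in itertools.combinations(targets, r):
            if sum(requirements[t] for t in subset) <= budget:
                feasible.append(subset)
    return feasible

def defender_loss(mixed: Dict[Tuple[str, ...], float],
                  values: Dict[str, float]) -> float:
    """Expected loss against a best-responding attacker.

    For each target, its uncovered probability is one minus the total
    probability of pure strategies that cover it; the attacker attacks the
    target maximizing uncovered-probability * value.
    """
    loss = 0.0
    for t, v in values.items():
        covered = sum(p for s, p in mixed.items() if t in s)
        loss = max(loss, (1.0 - covered) * v)
    return loss

# Toy instance (hypothetical numbers): three targets with different values
# and defending requirements, and one unit of resource per pure strategy.
values = {"a": 10.0, "b": 6.0, "c": 4.0}
requirements = {"a": 0.8, "b": 0.5, "c": 0.3}
pures = enumerate_pure_strategies(requirements, budget=1.0)

# A hand-picked mixed strategy over two feasible pure strategies.
mixed = {("a",): 0.6, ("b", "c"): 0.4}
assert all(s in pures for s in mixed)
print(defender_loss(mixed, values))  # expected loss under attacker best response
```

Even in this toy form, the number of feasible pure strategies grows exponentially with the number of targets, which is consistent with the NP-hardness result and with the motivation for an algorithm that mixes over only a few pure strategies.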
Related papers
- Unbalanced Optimal Transport: A Unified Framework for Object Detection [97.74382560746987]
We show how Unbalanced Optimal Transport unifies different approaches to object detection.
We show that training an object detection model with Unbalanced Optimal Transport is able to reach the state-of-the-art.
The approach is well suited for GPU implementation, which proves to be an advantage for large-scale models.
arXiv Detail & Related papers (2023-07-05T16:21:52Z) - Towards Optimal Randomized Strategies in Adversarial Example Game [13.287949447721115]
The vulnerability of deep neural network models to adversarial example attacks is a practical challenge in many artificial intelligence applications.
We propose the first algorithm of its kind, called FRAT, which models the problem with a new infinite-dimensional continuous-time flow on probability distribution spaces.
We prove that the continuous-time limit of FRAT converges to a mixed Nash equilibrium in a zero-sum game formed by a defender and an attacker.
arXiv Detail & Related papers (2023-06-29T07:29:23Z) - Sampling-based Reactive Synthesis for Nondeterministic Hybrid Systems [20.0212772540119]
This paper introduces a sampling-based strategy synthesis algorithm for nondeterministic hybrid systems.
We model the evolution of the hybrid system as a two-player game, where the nondeterminism is an adversarial player.
The aim is to synthesize a winning strategy -- a reactive (robust) strategy that guarantees the satisfaction of the goals under all possible moves of the adversarial player.
arXiv Detail & Related papers (2023-04-14T00:45:16Z) - Learning Near-Optimal Intrusion Responses Against Dynamic Attackers [0.0]
We study automated intrusion response and formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain near-optimal defender strategies, we develop a fictitious self-play algorithm that learns Nash equilibria through approximation.
We argue that this approach can produce effective defender strategies for a practical IT infrastructure.
arXiv Detail & Related papers (2023-01-11T16:36:24Z) - Learning Security Strategies through Game Play and Optimal Stopping [0.0]
We study automated intrusion prevention using reinforcement learning.
We formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain the optimal defender strategies, we introduce T-FP, a fictitious self-play algorithm.
arXiv Detail & Related papers (2022-05-29T15:30:00Z) - LAS-AT: Adversarial Training with Learnable Attack Strategy [82.88724890186094]
"Learnable attack strategy", dubbed LAS-AT, learns to automatically produce attack strategies to improve the model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
arXiv Detail & Related papers (2022-03-13T10:21:26Z) - Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z) - Provably Efficient Algorithms for Multi-Objective Competitive RL [54.22598924633369]
We study multi-objective reinforcement learning (RL) where an agent's reward is represented as a vector.
In settings where an agent competes against opponents, its performance is measured by the distance of its average return vector to a target set.
We develop statistically and computationally efficient algorithms to approach the associated target set.
arXiv Detail & Related papers (2021-02-05T14:26:00Z) - On the Impossibility of Convergence of Mixed Strategies with No Regret
Learning [10.515544361834241]
We study convergence properties of the mixed strategies that result from a general class of optimal no regret learning strategies.
We consider the class of strategies whose information set at each step is the empirical average of the opponent's realized play.
arXiv Detail & Related papers (2020-12-03T18:02:40Z) - Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)