Learning Generative Deception Strategies in Combinatorial Masking Games
- URL: http://arxiv.org/abs/2109.11637v1
- Date: Thu, 23 Sep 2021 20:42:44 GMT
- Title: Learning Generative Deception Strategies in Combinatorial Masking Games
- Authors: Junlin Wu, Charles Kamhoua, Murat Kantarcioglu, Yevgeniy Vorobeychik
- Abstract summary: One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured.
We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute.
We present a novel, highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks.
- Score: 27.2744631811653
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deception is a crucial tool in the cyberdefence repertoire, enabling
defenders to leverage their informational advantage to reduce the likelihood of
successful attacks. One way deception can be employed is through obscuring, or
masking, some of the information about how systems are configured, increasing
the attacker's uncertainty about their targets. We present a novel game-theoretic
model of the resulting defender-attacker interaction, where the defender
chooses a subset of attributes to mask, while the attacker responds by choosing
an exploit to execute. The strategies of both players have combinatorial
structure with complex informational dependencies, and therefore even
representing these strategies is not trivial. First, we show that the problem
of computing an equilibrium of the resulting zero-sum defender-attacker game
can be represented as a linear program with a combinatorial number of system
configuration variables and constraints, and develop a constraint generation
approach for solving this problem. Next, we present a novel, highly scalable
approach for approximately solving such games by representing the strategies of
both players as neural networks. The key idea is to represent the defender's
mixed strategy with a deep neural network generator, which is then trained
using an alternating gradient descent-ascent algorithm, analogous to the
training of Generative Adversarial Networks. Our experiments, as well as a case study,
demonstrate the efficacy of the proposed approach.
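The first, exact approach can be illustrated with a small constraint-generation loop for a zero-sum matrix game. The sketch below is a simplified stand-in, assuming a defender-utility matrix U (masks by exploits) that can be queried lazily; the paper's actual LP also has a combinatorial number of configuration variables, which this toy version does not model, and all names here are illustrative.
```python
import numpy as np
from scipy.optimize import linprog

def solve_with_constraint_generation(U, tol=1e-8, max_iters=100):
    """U: (n_masks, n_exploits) defender-utility matrix of a zero-sum game.
    Returns a defender mixed strategy x and the game value v."""
    n_masks, n_exploits = U.shape
    active = [0]  # start the LP with a single attacker exploit
    for _ in range(max_iters):
        # Variables [x_1 .. x_{n_masks}, v]; maximize v <=> minimize -v.
        c = np.zeros(n_masks + 1)
        c[-1] = -1.0
        # One row per active exploit a:  v - sum_s x_s * U[s, a] <= 0.
        A_ub = np.hstack([-U[:, active].T, np.ones((len(active), 1))])
        b_ub = np.zeros(len(active))
        A_eq = np.hstack([np.ones((1, n_masks)), np.zeros((1, 1))])  # sum_s x_s = 1
        b_eq = np.ones(1)
        bounds = [(0, None)] * n_masks + [(None, None)]  # x >= 0, v free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        x, v = res.x[:n_masks], res.x[-1]
        # The attacker's best response identifies the most violated constraint.
        payoffs = U.T @ x                 # defender payoff against each exploit
        a_star = int(np.argmin(payoffs))
        if payoffs[a_star] >= v - tol:
            return x, v                   # no violated constraint: equilibrium found
        active.append(a_star)
    return x, v
```
Each iteration solves the LP restricted to the active exploits, computes the attacker's best response as the most violated constraint, and terminates once no constraint is violated beyond the tolerance.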
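The GAN-style neural approach can likewise be sketched. The PyTorch code below is a minimal illustration under assumed network architectures, a hypothetical differentiable utility matrix W, and a naive multiplicative mask relaxation; it is not the paper's exact model or loss.
```python
import torch
import torch.nn as nn

N_ATTR, N_EXPLOITS, NOISE_DIM, BATCH = 16, 8, 32, 64

# Generator: noise -> relaxed mask over system attributes (probabilities).
defender = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ATTR), nn.Sigmoid())
# Attacker: masked observation -> logits over available exploits.
attacker = nn.Sequential(nn.Linear(N_ATTR, 64), nn.ReLU(),
                         nn.Linear(64, N_EXPLOITS))

# Hypothetical differentiable stand-in for the attacker's expected utility.
W = torch.randn(N_ATTR, N_EXPLOITS)
def attacker_utility(observed, exploit_probs):
    return ((observed @ W) * exploit_probs).sum(dim=1).mean()

opt_def = torch.optim.Adam(defender.parameters(), lr=1e-3)
opt_atk = torch.optim.Adam(attacker.parameters(), lr=1e-3)
config = (torch.rand(1, N_ATTR) > 0.5).float()  # true attribute vector

for step in range(1000):
    # Ascent step: the attacker maximizes its utility on masked observations.
    z = torch.randn(BATCH, NOISE_DIM)
    observed = (config * (1.0 - defender(z))).detach()  # freeze the defender
    u = attacker_utility(observed, torch.softmax(attacker(observed), dim=1))
    opt_atk.zero_grad()
    (-u).backward()
    opt_atk.step()

    # Descent step: zero-sum, so the defender minimizes the same utility.
    z = torch.randn(BATCH, NOISE_DIM)
    observed = config * (1.0 - defender(z))
    u = attacker_utility(observed, torch.softmax(attacker(observed), dim=1))
    opt_def.zero_grad()
    u.backward()
    opt_def.step()
```
As in GAN training, the attacker takes an ascent step on its expected utility while the defender takes a descent step on the same zero-sum objective, each player using fresh noise samples through the current opponent.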
Related papers
- Discriminative Adversarial Unlearning [40.30974185546541]
We introduce a novel machine unlearning framework founded upon the established principles of the min-max optimization paradigm.
We capitalize on the capabilities of strong Membership Inference Attacks (MIA) to facilitate the unlearning of specific samples from a trained model.
Our proposed algorithm closely approximates the ideal benchmark of retraining from scratch for both random sample forgetting and class-wise forgetting schemes.
arXiv Detail & Related papers (2024-02-10T03:04:57Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
Two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the commonly used surrogate-based relaxation used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches, and in some cases outperforms, state-of-the-art attacks.
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
- Mastering Percolation-like Games with Deep Learning [0.0]
We devise a single-player game on a lattice that mimics the logic of an attacker attempting to destroy a network.
The objective of the game is to disable all nodes in the fewest steps.
We train agents on different definitions of robustness and compare the learned strategies.
arXiv Detail & Related papers (2023-05-12T15:37:45Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Learning Decentralized Strategies for a Perimeter Defense Game with Graph Neural Networks [111.9039128130633]
We design a graph neural network-based learning framework to learn a mapping from defenders' local perceptions and the communication graph to defenders' actions.
We demonstrate that our proposed networks stay closer to the expert policy and are superior to other baseline algorithms by capturing more intruders.
arXiv Detail & Related papers (2022-09-24T22:48:51Z)
- Game Theory for Adversarial Attacks and Defenses [0.0]
Adversarial attacks generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Adversarial defense techniques have been developed to improve the security and robustness of models and prevent them from being attacked.
arXiv Detail & Related papers (2021-10-08T07:38:33Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
arXiv Detail & Related papers (2021-02-10T04:23:23Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.