Game Theory for Adversarial Attacks and Defenses
- URL: http://arxiv.org/abs/2110.06166v4
- Date: Fri, 5 Jan 2024 09:09:12 GMT
- Title: Game Theory for Adversarial Attacks and Defenses
- Authors: Shorya Sharma
- Abstract summary: Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Adversarial defense techniques have been developed to improve the security and robustness of models and keep them from being successfully attacked.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset, causing even state-of-the-art deep neural networks to output incorrect answers with high confidence. Hence, adversarial defense techniques have been developed to improve the security and robustness of models and keep them from being successfully attacked. Gradually, a game-like competition between attackers and defenders has formed, in which both players attempt to play their best strategies against each other while maximizing their own payoffs. To solve the game, each player chooses an optimal strategy against the opponent based on a prediction of the opponent's strategy choice. In this work, we take the defender's side and apply game-theoretic approaches to defending against attacks. We use two randomization methods, random initialization and stochastic activation pruning, to create diversity among networks. Furthermore, we use one denoising technique, super resolution, to improve the models' robustness by preprocessing images before classification. Our experimental results indicate that these three methods can effectively improve the robustness of deep neural networks.
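The abstract's solution concept, in which each side best-responds to a prediction of the other's strategy, is the standard machinery of a two-player zero-sum game. As a minimal sketch of how such a game can be solved, the following Python snippet runs fictitious play over a hypothetical payoff matrix; the strategy labels and all payoff numbers are illustrative assumptions, not results from the paper.

```python
import numpy as np

# Hypothetical payoff matrix: rows are defender strategies (random
# initialization, stochastic activation pruning, super-resolution
# preprocessing), columns are attacker strategies (e.g. three attack
# types). Entries are defender payoffs (accuracy under attack); the
# numbers are illustrative assumptions, not results from the paper.
PAYOFF = np.array([
    [0.62, 0.48, 0.55],
    [0.58, 0.60, 0.51],
    [0.65, 0.52, 0.49],
])

def fictitious_play(payoff, rounds=20_000):
    """Approximate a mixed-strategy equilibrium of a zero-sum game.

    Each round, every player best-responds to the opponent's empirical
    mixture of past plays; in two-player zero-sum games the empirical
    frequencies converge to an equilibrium."""
    def_counts = np.ones(payoff.shape[0])
    atk_counts = np.ones(payoff.shape[1])
    for _ in range(rounds):
        # Defender maximizes expected accuracy against the attacker's mixture.
        def_counts[np.argmax(payoff @ (atk_counts / atk_counts.sum()))] += 1
        # Attacker minimizes expected accuracy against the defender's mixture.
        atk_counts[np.argmin((def_counts / def_counts.sum()) @ payoff)] += 1
    return def_counts / def_counts.sum(), atk_counts / atk_counts.sum()

defender_mix, attacker_mix = fictitious_play(PAYOFF)
print("defender mixes over defenses:", defender_mix.round(3))
print("attacker mixes over attacks: ", attacker_mix.round(3))
```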
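Of the two randomization methods, stochastic activation pruning (SAP) is the more involved: it samples a layer's activations with probability proportional to their magnitude, zeroes everything not drawn, and rescales the survivors so the layer stays unbiased in expectation. Below is a minimal NumPy sketch of that procedure, assuming the sampling-with-replacement formulation of Dhillon et al.'s SAP; the function name and the keep fraction are my own choices.

```python
import numpy as np

def sap(activations, keep_frac=0.5, rng=None):
    """Stochastic activation pruning on one layer's activation map.

    Samples entries with probability proportional to their magnitude,
    zeroes everything not drawn, and rescales survivors by their inverse
    keep-probability so the layer's output is unbiased in expectation."""
    if rng is None:
        rng = np.random.default_rng()
    flat = activations.ravel()
    magnitude = np.abs(flat)
    probs = magnitude / (magnitude.sum() + 1e-12)   # sampling distribution
    n_draws = max(1, int(keep_frac * flat.size))
    # Sample with replacement; an entry survives if drawn at least once.
    drawn = rng.choice(flat.size, size=n_draws, replace=True, p=probs)
    keep = np.zeros(flat.size, dtype=bool)
    keep[drawn] = True
    # P(entry drawn at least once in n_draws independent draws).
    keep_prob = 1.0 - (1.0 - probs) ** n_draws
    pruned = np.where(keep, flat / np.maximum(keep_prob, 1e-12), 0.0)
    return pruned.reshape(activations.shape)

# Fresh randomness at every forward pass gives the attacker a moving target.
x = np.random.default_rng(0).standard_normal((4, 8))
print(sap(x, keep_frac=0.5))
```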
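The super-resolution defense preprocesses each (possibly perturbed) input before classification, on the premise that reconstruction pushes adversarial images back toward the natural-image manifold. Lacking the paper's trained super-resolution network, the sketch below stands in with SciPy bicubic down/up-sampling, which smooths high-frequency perturbations in a similar spirit; treat the scale factor and the interpolation choice as assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def sr_preprocess(image, scale=2):
    # Downsample then upsample with bicubic interpolation (order=3) as a
    # crude stand-in for a learned super-resolution model: the round trip
    # attenuates high-frequency adversarial noise before classification.
    small = zoom(image, 1.0 / scale, order=3)
    factors = [o / s for o, s in zip(image.shape, small.shape)]
    return zoom(small, factors, order=3)

img = np.random.default_rng(0).random((32, 32))  # toy grayscale input
clean = sr_preprocess(img)
print(img.shape, clean.shape)  # shapes match after the round trip
```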
Related papers
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these issues.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z) - MPAT: Building Robust Deep Neural Networks against Textual Adversarial Attacks [4.208423642716679]
We propose a malicious-perturbation-based adversarial training method (MPAT) for building deep neural networks that are robust to adversarial attacks.
Specifically, we construct a multi-level malicious example generation strategy to generate adversarial examples with malicious perturbations.
We employ a novel training objective function to ensure that the defense goal is achieved without compromising performance on the original task.
arXiv Detail & Related papers (2024-02-29T01:49:18Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents must learn to make decisions that maximize their own goals while simultaneously minimizing their adversaries' goals.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, learn how to map the strategies of specific opponents, and learn how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z) - Adversarial Machine Learning and Defense Game for NextG Signal Classification with Deep Learning [1.1726528038065764]
NextG systems can employ deep neural networks (DNNs) for various tasks such as user equipment identification, physical layer authentication, and detection of incumbent users.
This paper presents a game-theoretic framework to study the interactions of attack and defense for deep learning-based NextG signal classification.
arXiv Detail & Related papers (2022-12-22T15:13:03Z) - Learning Generative Deception Strategies in Combinatorial Masking Games [27.2744631811653]
One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured.
We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute.
We present a novel highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks.
arXiv Detail & Related papers (2021-09-23T20:42:44Z) - Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noise to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z) - Adversarial Feature Desensitization [12.401175943131268]
We propose a novel approach to adversarial robustness that builds on insights from the field of domain adaptation.
Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs.
arXiv Detail & Related papers (2020-06-08T14:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.