Towards Optimal Randomized Strategies in Adversarial Example Game
- URL: http://arxiv.org/abs/2306.16738v1
- Date: Thu, 29 Jun 2023 07:29:23 GMT
- Title: Towards Optimal Randomized Strategies in Adversarial Example Game
- Authors: Jiahao Xie, Chao Zhang, Weijie Liu, Wensong Bai, Hui Qian
- Abstract summary: The vulnerability of deep neural network models to adversarial example attacks is a practical challenge in many artificial intelligence applications.
We propose the first algorithm of its kind, called FRAT, which models the problem with a new infinite-dimensional continuous-time flow on probability distribution spaces.
We prove that the continuous-time limit of FRAT converges to a mixed Nash equilibrium in a zero-sum game formed by a defender and an attacker.
- Score: 13.287949447721115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vulnerability of deep neural network models to adversarial example
attacks is a practical challenge in many artificial intelligence applications.
A recent line of work shows that the use of randomization in adversarial
training is the key to finding optimal strategies against adversarial example
attacks. However, in a fully randomized setting where both the defender and the
attacker can use randomized strategies, there is no efficient algorithm for
finding such an optimal strategy. To fill the gap, we propose the first
algorithm of its kind, called FRAT, which models the problem with a new
infinite-dimensional continuous-time flow on probability distribution spaces.
FRAT maintains a lightweight mixture of models for the defender, with
flexibility to efficiently update mixing weights and model parameters at each
iteration. Furthermore, FRAT utilizes lightweight sampling subroutines to
construct a random strategy for the attacker. We prove that the continuous-time
limit of FRAT converges to a mixed Nash equilibrium in a zero-sum game formed by
a defender and an attacker. Experimental results also demonstrate the
efficiency of FRAT on the CIFAR-10 and CIFAR-100 datasets.
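The abstract gives no pseudocode, so the following is a minimal, hypothetical sketch of the loop it describes: the defender maintains a lightweight mixture of models whose mixing weights and parameters are both updated each iteration, and the attacker constructs its randomized strategy by sampling. Every name and hyperparameter below is invented; FRAT proper is an infinite-dimensional continuous-time flow on probability distribution spaces, which a discrete loop like this can only approximate.

```python
# Hypothetical sketch of a FRAT-style iteration (not the authors' code).
import torch
import torch.nn.functional as F

class MixtureDefender:
    """Defender: a finite mixture of K models with mixing weights w."""
    def __init__(self, models, lr_theta=1e-3, lr_w=1e-2):
        self.models = models
        self.log_w = torch.zeros(len(models))      # unnormalized log-weights
        self.opts = [torch.optim.SGD(m.parameters(), lr=lr_theta)
                     for m in models]
        self.lr_w = lr_w

    def weights(self):
        return torch.softmax(self.log_w, dim=0)

    def step(self, x_adv, y):
        losses = []
        for m, opt in zip(self.models, self.opts):
            loss = F.cross_entropy(m(x_adv), y)
            opt.zero_grad()
            loss.backward()
            opt.step()                             # update model parameters
            losses.append(loss.detach())
        # Multiplicative-weights-style update of the mixing distribution:
        # members with high adversarial loss lose weight.
        self.log_w -= self.lr_w * torch.stack(losses)
        return self.weights()

def sample_attack(defender, x, y, eps=8/255, alpha=2/255, steps=10):
    """Attacker subroutine: PGD against a model sampled from the mixture."""
    w = defender.weights()
    model = defender.models[torch.multinomial(w, 1).item()]
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)
```

Each training iteration would call `sample_attack` to build adversarial inputs and then `MixtureDefender.step` on them; the multiplicative-weights update is one common discretization for flows over mixture weights, and the paper's exact rule may differ.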
Related papers
- Discriminative Adversarial Unlearning [40.30974185546541]
We introduce a novel machine unlearning framework founded upon the established principles of the min-max optimization paradigm.
We capitalize on the capabilities of strong Membership Inference Attacks (MIA) to facilitate the unlearning of specific samples from a trained model.
Our proposed algorithm closely approximates the ideal benchmark of retraining from scratch for both random sample forgetting and class-wise forgetting schemes.
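As a rough illustration of the min-max pattern described above (not the authors' algorithm), one can alternate between training a membership-inference discriminator and updating the model to fool it on the forget set while preserving accuracy on the retained data; all names below are hypothetical.

```python
# Hypothetical sketch of a min-max unlearning step (names invented).
# The discriminator d plays a membership-inference attacker: it tries to
# tell the model's outputs on "forget" samples apart from outputs on
# unseen samples. The model f is then updated to fool d while keeping
# task accuracy on the retained data: min_f max_d L_mia + lam * L_task.
import torch
import torch.nn.functional as F

def unlearning_step(f, d, opt_f, opt_d,
                    x_forget, x_unseen, x_retain, y_retain, lam=1.0):
    # Max step: train the MIA discriminator on f's output logits.
    with torch.no_grad():
        z_forget, z_unseen = f(x_forget), f(x_unseen)
    d_loss = (F.binary_cross_entropy_with_logits(
                  d(z_forget), torch.ones(len(x_forget), 1))
              + F.binary_cross_entropy_with_logits(
                  d(z_unseen), torch.zeros(len(x_unseen), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Min step: make forget samples look "unseen" to d while staying
    # accurate on the retained set.
    fool = F.binary_cross_entropy_with_logits(
        d(f(x_forget)), torch.zeros(len(x_forget), 1))
    task = F.cross_entropy(f(x_retain), y_retain)
    f_loss = fool + lam * task
    opt_f.zero_grad()
    f_loss.backward()
    opt_f.step()
    return d_loss.item(), f_loss.item()
```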
arXiv Detail & Related papers (2024-02-10T03:04:57Z)
- Wasserstein distributional robustness of neural networks [9.79503506460041]
Deep neural networks are known to be vulnerable to adversarial attacks (AA).
For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified.
We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions.
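For context, a textbook Wasserstein DRO objective of the kind the summary refers to is shown below; the paper's exact formulation and contributions may differ.

```latex
% Worst-case expected loss over all distributions Q within a p-Wasserstein
% ball of radius \delta around the data distribution P:
\min_{\theta} \ \sup_{Q\,:\,W_p(Q,P)\le\delta} \
\mathbb{E}_{(x,y)\sim Q}\big[\ell(f_\theta(x), y)\big]
```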
arXiv Detail & Related papers (2023-06-16T13:41:24Z)
- A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design [1.9100854225243937]
Well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples.
Given a defended model, efficient adversarial attacks that impose less computational burden while achieving lower robust accuracy still need to be explored.
We propose a multi-objective memetic algorithm for auto adversarial attack optimization design, which automatically searches for near-optimal adversarial attacks against defended models, as sketched below.
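A hypothetical sketch of the general pattern, a population search over attack configurations with Pareto selection on attack strength versus cost plus a local-refinement step (the "memetic" part), is given here; `run_attack`, the search space, and the operators are all invented.

```python
# Hypothetical sketch of a memetic multi-objective search over attack
# configurations; the paper's search space, objectives, and operators
# are not specified here, so everything below is illustrative.
import random
import time

def run_attack(model, data, steps, alpha):
    """Placeholder for an attack evaluation returning robust accuracy."""
    raise NotImplementedError

def evaluate(cfg, model, data):
    """Two objectives, both minimized: robust accuracy achieved under the
    attack (lower = stronger attack) and wall-clock cost."""
    t0 = time.time()
    acc = run_attack(model, data, steps=cfg["steps"], alpha=cfg["alpha"])
    return acc, time.time() - t0

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def memetic_search(model, data, pop_size=8, generations=10):
    pop = [{"steps": random.choice([10, 20, 50]),
            "alpha": random.choice([1 / 255, 2 / 255, 4 / 255])}
           for _ in range(pop_size)]
    front = pop
    for _ in range(generations):
        scored = [(cfg, evaluate(cfg, model, data)) for cfg in pop]
        # Pareto selection: keep configurations no other one dominates.
        front = [scored[i][0] for i in range(len(scored))
                 if not any(dominates(scored[j][1], scored[i][1])
                            for j in range(len(scored)) if j != i)]
        # Local refinement (the "memetic" part): mutate the survivors.
        offspring = [{"steps": max(1, c["steps"] + random.choice([-5, 5])),
                      "alpha": c["alpha"]} for c in front]
        pop = (front + offspring)[:pop_size]
    return front
```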
arXiv Detail & Related papers (2022-08-15T03:03:05Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
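To make the attack primitive concrete, here is a small, self-contained illustration (not the paper's method) of flipping a single bit of an int8-quantized weight; the actual attack additionally optimizes which bits to flip under the effectiveness and stealthiness goals.

```python
# Illustration of the bit-flip primitive behind such weight attacks:
# flipping one bit of an int8-quantized weight changes the model's
# behavior without touching any input.
import numpy as np

def flip_bit(weights_int8: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Flip `bit` (0 = LSB .. 7 = sign bit) of the weight at flat `index`."""
    w = weights_int8.copy()
    flat = w.reshape(-1).view(np.uint8)   # reinterpret bytes for bit ops
    flat[index] ^= np.uint8(1 << bit)
    return w

w = np.array([[3, -7], [100, 0]], dtype=np.int8)
w_attacked = flip_bit(w, index=2, bit=6)  # flips 100 -> 100 ^ 64 = 36
print(w, w_attacked, sep="\n")
```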
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has been shown to be an effective approach.
We propose a large-batch adversarial training framework implemented over multiple machines.
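A minimal sketch of one adversarial training step in this style is given below, using standard PGD for the inner maximization and an optional gradient all-reduce across workers; the paper's large-batch, multi-machine implementation involves considerably more than this.

```python
# Minimal sketch of one distributed adversarial training step (standard
# PGD-based AT; the cross-worker aggregation shown is a simplification).
import torch
import torch.distributed as dist
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)

def at_step(model, opt, x, y):
    x_adv = pgd(model, x, y)            # inner maximization
    loss = F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    if dist.is_initialized():           # average gradients across workers
        world_size = dist.get_world_size()
        for p in model.parameters():
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size
    opt.step()                          # outer minimization
    return loss.item()
```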
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Mixed Strategies for Security Games with General Defending Requirements [37.02840909260615]
The Stackelberg security game is played between a defender and an attacker, where the defender needs to allocate a limited amount of resources to multiple targets.
We propose an efficient close-to-optimal Patching algorithm that computes mixed strategies using only a few pure strategies.
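For reference, the classical Stackelberg security game formulation that this line of work generalizes can be written as follows; the notation is the textbook one, not necessarily the paper's.

```latex
% The defender commits to a mixed strategy x over pure allocations
% e \in E; the induced coverage of target t is c_t. The attacker
% observes c and best-responds; the defender maximizes its utility
% at that response.
\max_{x \in \Delta(E)} \; U_d(c, t^\star)
\quad \text{s.t.} \quad
c_t = \sum_{e \in E:\, t \in e} x_e \ \ \forall t,
\qquad
t^\star \in \arg\max_{t} \, U_a(c, t)
```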
arXiv Detail & Related papers (2022-04-26T08:56:39Z)
- Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [82.3052187788609]
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.
Recent works show generalization improvement with adversarial samples under novel threat models.
We propose a novel threat model called the Joint Space Threat Model (JSTM).
Under JSTM, we develop novel adversarial attacks and defenses.
arXiv Detail & Related papers (2021-12-12T21:08:14Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
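A hypothetical sketch of such a learned attack optimizer, with an RNN replacing the hand-crafted sign-of-gradient update, could look like this; the architecture and names are invented, and the paper's design may differ.

```python
# Hypothetical RNN-parameterized attack optimizer: a small recurrent
# network maps the current gradient to the perturbation update, in place
# of a hand-crafted rule such as the sign of the gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAttackOptimizer(nn.Module):
    def __init__(self, hidden=20):
        super().__init__()
        self.rnn = nn.LSTMCell(1, hidden)   # operates coordinate-wise
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.reshape(-1, 1)             # one "token" per coordinate
        h, c = self.rnn(g, state)
        return self.out(h).reshape(grad.shape), (h, c)

def rnn_attack(model, learned_opt, x, y, eps=8/255, steps=10):
    delta, state = torch.zeros_like(x), None
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        update, state = learned_opt(grad, state)
        delta = (delta + update.detach()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)
```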
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features under arbitrary attacking strengths.
The module is trained to automatically align features regardless of the attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model [12.858728363390703]
This paper builds an ensemble-smoothed defense that combines randomized smoothing with model ensembling.
It addresses the extreme transferability of adversarial examples and the vulnerability of ensemble models under white-box attacks.
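A minimal sketch combining the two named ingredients, randomized smoothing and a model ensemble, is shown below; the details are illustrative rather than the paper's method.

```python
# Illustrative ensemble-smoothed prediction: vote over Gaussian-noised
# copies of the input, averaging the ensemble members' probabilities.
import torch

@torch.no_grad()
def smoothed_ensemble_predict(models, x, sigma=0.25, n=100):
    """Majority class over `n` noisy evaluations of the ensemble."""
    votes = None
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        # Average the members' softmax outputs, then cast one vote.
        probs = torch.stack([m(noisy).softmax(dim=-1) for m in models]).mean(0)
        one_hot = torch.nn.functional.one_hot(
            probs.argmax(dim=-1), probs.shape[-1])
        votes = one_hot if votes is None else votes + one_hot
    return votes.argmax(dim=-1)   # most-voted class per input
```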
arXiv Detail & Related papers (2021-05-06T16:48:52Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
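Schematically, the attack objective as described can be written as the following binary integer program, which the paper relaxes to a continuous problem; the notation here is invented for illustration.

```latex
% Flip at most k bits of the quantized weights' bit representation b so
% that the chosen sample x is classified as target class t (an auxiliary
% term keeping other samples correctly classified is omitted):
\min_{\hat{b} \in \{0,1\}^n} \ \mathcal{L}\!\left(f(x;\hat{b}),\, t\right)
\quad \text{s.t.} \quad \|\hat{b} - b\|_0 \le k
```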
arXiv Detail & Related papers (2021-02-21T03:13:27Z)