Decision-BADGE: Decision-based Adversarial Batch Attack with Directional
Gradient Estimation
- URL: http://arxiv.org/abs/2303.04980v2
- Date: Mon, 14 Aug 2023 08:08:50 GMT
- Title: Decision-BADGE: Decision-based Adversarial Batch Attack with Directional
Gradient Estimation
- Authors: Geunhyeok Yu, Minwoo Jeon and Hyoseok Hwang
- Abstract summary: Decision-BADGE is a novel method to craft universal adversarial perturbations for executing decision-based black-box attacks.
Our proposed method shows a superior success rate with less training time.
The research also shows that Decision-BADGE can successfully deceive unseen victim models and accurately target specific classes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The susceptibility of deep neural networks (DNNs) to adversarial examples has
prompted an increase in the deployment of adversarial attacks. Image-agnostic
universal adversarial perturbations (UAPs) are even more threatening, but many
limitations exist when implementing UAPs in real-world scenarios where only
binary decisions are returned. In this research, we propose Decision-BADGE, a
novel method that crafts universal adversarial perturbations for executing
decision-based black-box attacks. To optimize the perturbation from decisions alone, we
address two challenges, namely the magnitude and the direction of the
gradient. First, we determine the magnitude of the gradient from a batch loss, the
difference from the ground-truth distribution obtained by accumulating decisions over
batches. This magnitude is then applied along the direction given by a revised
simultaneous perturbation stochastic approximation (SPSA) to update the
perturbation. This simple yet efficient method can be easily extended to
score-based attacks as well as targeted attacks. Experimental validation across
multiple victim models demonstrates that Decision-BADGE outperforms
existing attack methods, including image-specific and score-based attacks. In
particular, our proposed method shows a superior success rate with less
training time. The research also shows that Decision-BADGE can successfully
deceive unseen victim models and accurately target specific classes.
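
The update described in the abstract can be summarized in a short sketch. The following is a minimal, hypothetical Python illustration of a decision-only, SPSA-style update for a universal perturbation, assuming a victim model that returns only hard labels. The function names (`decision_badge_step`, `model_decisions`) and all hyperparameters are placeholders, and the sketch omits the paper's specific batch-accumulation scheme and revised SPSA direction; it should not be read as the authors' implementation.

```python
import numpy as np

def decision_badge_step(delta, images, labels, model_decisions,
                        sigma=0.01, lr=0.05, eps=8 / 255):
    """One hypothetical decision-based, SPSA-style update of a universal
    perturbation `delta` (same shape as a single image).

    `model_decisions(x_batch)` is assumed to return only the victim model's
    hard-label predictions for a batch of images in [0, 1].
    """
    # Rademacher (+/-1) probe direction, as in standard SPSA.
    probe = np.random.choice([-1.0, 1.0], size=delta.shape)

    def batch_loss(d):
        # Decision-only loss: fraction of images still classified correctly
        # when the (clipped) universal perturbation is added.
        adv = np.clip(images + np.clip(d, -eps, eps), 0.0, 1.0)
        preds = model_decisions(adv)        # hard labels only
        return np.mean(preds == labels)     # lower is better for the attacker

    # Two-sided finite difference over batch decisions gives the gradient
    # magnitude; the probe sign gives the direction (SPSA estimator).
    g_hat = (batch_loss(delta + sigma * probe) -
             batch_loss(delta - sigma * probe)) / (2.0 * sigma) * probe

    # Descend on the victim's batch accuracy, then project back into the
    # L_inf ball of radius eps.
    return np.clip(delta - lr * g_hat, -eps, eps)
```

A caller would keep a single `delta` across many batches and invoke this step repeatedly, so that decisions accumulate into a gradually improving universal perturbation; a targeted variant would instead reward agreement with a chosen target class.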
Related papers
- Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians [60.22542847840578]
Despite advances in adversarial machine learning, inference for Gaussian models in the presence of an adversary is notably understudied.
We consider a self-interested attacker who wishes to disrupt a decision-maker's conditional inference and subsequent actions by corrupting a set of evidentiary variables.
To avoid detection, the attacker also desires the attack to appear plausible, where plausibility is determined by the density of the corrupted evidence.
arXiv Detail & Related papers (2024-11-21T17:46:55Z)
- ADBA: Approximation Decision Boundary Approach for Black-Box Adversarial Attacks [6.253823500300899]
Black-box attacks are stealthy, generating adversarial examples using hard labels from machine learning models.
This paper introduces a novel approach using the Approximation Decision Boundary (ADB) to efficiently and accurately compare perturbation directions.
The effectiveness of our ADB approach (ADBA) hinges on promptly identifying a suitable ADB, ensuring reliable differentiation of all perturbation directions.
arXiv Detail & Related papers (2024-06-07T15:09:25Z)
- Model X-ray: Detecting Backdoored Models via Decision Boundary [62.675297418960355]
Backdoor attacks pose a significant security vulnerability for deep neural networks (DNNs).
We propose Model X-ray, a novel backdoor detection approach based on the analysis of illustrated two-dimensional (2D) decision boundaries.
Our approach includes two strategies focused on the decision areas dominated by clean samples and the concentration of label distribution.
arXiv Detail & Related papers (2024-02-27T12:42:07Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Adversarial Attack Based on Prediction-Correction [8.467466998915018]
Deep neural networks (DNNs) are vulnerable to adversarial examples obtained by adding small perturbations to original examples.
In this paper, a new prediction-correction (PC) based adversarial attack is proposed.
In our proposed PC-based attack, an existing attack is first selected to produce a predicted example, and then the predicted example and the current example are combined to determine the added perturbation.
arXiv Detail & Related papers (2023-06-02T03:11:32Z)
- Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning [5.240772699480865]
We propose a pixel-wise decision-based attack algorithm that finds a distribution of adversarial perturbations through a reinforcement learning algorithm.
Experiments show that the proposed approach outperforms state-of-the-art decision-based attacks with a higher attack success rate and greater transferability.
arXiv Detail & Related papers (2022-11-15T18:30:18Z)
- Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments on standard image classification datasets, namely MNIST, CIFAR-10, and CIFAR-100, against state-of-the-art adversarial attacks.
arXiv Detail & Related papers (2022-08-18T08:19:26Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., the single sample attack (SSA) and the triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the research community's recent interest in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)