Boosting Decision-Based Black-Box Adversarial Attack with Gradient
Priors
- URL: http://arxiv.org/abs/2310.19038v1
- Date: Sun, 29 Oct 2023 15:05:39 GMT
- Title: Boosting Decision-Based Black-Box Adversarial Attack with Gradient
Priors
- Authors: Han Liu, Xingshuo Huang, Xiaotong Zhang, Qimai Li, Fenglong Ma, Wei
Wang, Hongyang Chen, Hong Yu, Xianchao Zhang
- Abstract summary: We propose a novel Decision-based Black-box Attack framework with Gradient Priors (DBA-GP).
DBA-GP seamlessly integrates the data-dependent gradient prior and time-dependent prior into the gradient estimation procedure.
Extensive experiments have demonstrated that the proposed method outperforms other strong baselines significantly.
- Score: 37.987522238329554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision-based methods have been shown to be effective in
black-box adversarial attacks, as they can obtain satisfactory performance
while only requiring access to the final model prediction. Gradient
estimation is a critical step in black-box adversarial attacks, as it
directly affects query efficiency. Recent works have attempted to utilize
gradient priors to help score-based methods obtain better results. However,
these gradient priors still suffer from the edge gradient discrepancy issue
and the successive iteration gradient direction issue, and are therefore
difficult to extend directly to decision-based methods. In this paper, we
propose a novel Decision-based Black-box Attack framework with Gradient
Priors (DBA-GP), which seamlessly integrates a data-dependent gradient
prior and a time-dependent prior into the gradient estimation procedure.
First, by applying a joint bilateral filter to each random perturbation,
DBA-GP ensures that perturbations at edge locations are hardly smoothed,
alleviating the edge gradient discrepancy and preserving the
characteristics of the original image as much as possible. Second, by
adopting a new gradient update strategy that automatically adjusts the
gradient direction across successive iterations, DBA-GP accelerates
convergence and thus improves query efficiency. Extensive experiments
demonstrate that the proposed method significantly outperforms other
strong baselines.
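As a rough illustration of the data-dependent prior, the sketch below
applies a joint bilateral filter to a random perturbation, using the clean
image as the guidance signal so that smoothing is suppressed across image
edges. Function names, parameter values, and the direction-blending helper
are illustrative assumptions; the abstract does not specify the paper's
exact filter settings or update rule.

```python
import numpy as np

def joint_bilateral_filter(perturb, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `perturb` with weights computed from the `guide` image.

    Spatial weights decay with pixel distance; range weights decay with
    intensity differences in the guide, so averaging is suppressed across
    guide-image edges -- the edge-preserving behavior the paper relies on.
    """
    h, w = guide.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    g = np.pad(guide, radius, mode="edge")
    p = np.pad(perturb, radius, mode="edge")
    out = np.empty_like(perturb)
    for i in range(h):
        for j in range(w):
            g_patch = g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            p_patch = p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalize intensity gaps in the guide image.
            rng = np.exp(-((g_patch - guide[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = (weights * p_patch).sum() / weights.sum()
    return out

def blend_direction(prev_grad, new_grad, beta=0.5):
    # Placeholder for the time-dependent prior: reuse the previous
    # iteration's direction when composing the next estimate. The
    # paper's actual update rule is not given in this abstract.
    d = beta * prev_grad + (1.0 - beta) * new_grad
    return d / (np.linalg.norm(d) + 1e-12)

# Usage: filter a random sampling direction with the clean image as guide.
image = np.random.rand(32, 32)   # stand-in for the attacked image
noise = np.random.randn(32, 32)  # raw random perturbation
prior_noise = joint_bilateral_filter(noise, image)
```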
Related papers
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such a sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
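A schematic contrast between the standard sign-based step and a
raw-gradient step (a sketch of the general idea only; the paper's exact
RGD scaling and projection are not reproduced here):

```python
import numpy as np

def step_sign(x, grad, alpha):
    # Classic PGD/FGSM-style update: keep only the gradient's sign.
    return x + alpha * np.sign(grad)

def step_raw(x, grad, alpha):
    # RGD-style update: keep the raw gradient so per-pixel magnitudes
    # still shape the step (the normalization here is an assumption).
    return x + alpha * grad / (np.abs(grad).max() + 1e-12)
```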
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
- Sampling-based Fast Gradient Rescaling Method for Highly Transferable
Adversarial Attacks [18.05924632169541]
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM).
Specifically, we use data rescaling to substitute the sign function without extra computational cost.
Our method could significantly boost the transferability of gradient-based attacks and outperform the state-of-the-art baselines.
arXiv Detail & Related papers (2023-07-06T07:52:42Z)
- Sampling-based Fast Gradient Rescaling Method for Highly Transferable
Adversarial Attacks [19.917677500613788]
Gradient-based approaches generally use the $sign$ function to generate perturbations at the end of the process.
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM) to improve the transferability of crafted adversarial examples.
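The two entries above are versions of the same paper. A hedged sketch of
the core substitution, replacing sign(g) with a rescaled raw gradient; the
exact rescaling and sampling scheme are assumptions, not the paper's
specification:

```python
import numpy as np

def rescaled_gradient(grad, eps=1e-12):
    # Substitute for np.sign(grad): normalize so the step size stays
    # comparable to a sign step while relative magnitudes are preserved.
    return grad / (np.abs(grad).mean() + eps)

def sampled_gradient(grad_fn, x, n=10, sigma=0.01):
    # Average gradients at randomly sampled neighbors to stabilize the
    # estimate, mirroring the "sampling-based" part (schematic only).
    grads = [grad_fn(x + sigma * np.random.randn(*x.shape)) for _ in range(n)]
    return np.mean(grads, axis=0)
```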
arXiv Detail & Related papers (2022-04-06T15:12:20Z)
- Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based
Prior [50.393092185611536]
We consider the black-box adversarial setting, where the adversary needs to craft adversarial examples without access to the gradients of a target model.
Previous methods attempted to approximate the true gradient either by using the transfer gradient of a surrogate white-box model or based on the feedback of model queries.
We propose two prior-guided random gradient-free (PRGF) algorithms based on biased sampling and gradient averaging.
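A minimal sketch of the biased-sampling idea behind PRGF: tilt random
query directions toward a transfer-gradient prior and average
finite-difference estimates. The mixing weight `lam` is a placeholder;
the paper derives it adaptively.

```python
import numpy as np

def prgf_estimate(f, x, prior, n=20, sigma=1e-3, lam=0.5):
    """Prior-guided random gradient-free estimate (schematic).

    f     : query-only loss function (no gradient access)
    prior : transfer gradient from a surrogate white-box model
    """
    prior = prior / (np.linalg.norm(prior) + 1e-12)
    fx = f(x)
    est = np.zeros_like(x)
    for _ in range(n):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)
        # Biased sampling: mix the prior into each random direction.
        d = lam * prior + (1.0 - lam) * u
        d /= np.linalg.norm(d)
        # One-sided finite difference along the biased direction.
        est += (f(x + sigma * d) - fx) / sigma * d
    return est / n
```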
arXiv Detail & Related papers (2022-03-13T04:06:27Z)
- Point Cloud Denoising via Momentum Ascent in Gradient Fields [72.93429911044903]
A gradient-based method was proposed to estimate gradient fields from noisy point clouds using neural networks.
We develop a momentum gradient ascent method that leverages the information of previous iterations in determining the trajectories of the points.
Experiments demonstrate that the proposed method outperforms state-of-the-art approaches with a variety of point clouds, noise types, and noise levels.
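A minimal sketch of momentum ascent in a learned gradient field, where
`grad_field` stands for the network that maps noisy points to estimated
gradients (names and constants are illustrative assumptions):

```python
import numpy as np

def momentum_denoise(points, grad_field, steps=30, lr=0.05, mu=0.9):
    # Accumulate directions from previous iterations (momentum) so each
    # point follows a smoother trajectory toward the clean surface.
    velocity = np.zeros_like(points)
    for _ in range(steps):
        velocity = mu * velocity + grad_field(points)
        points = points + lr * velocity
    return points
```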
arXiv Detail & Related papers (2022-02-21T10:21:40Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
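A schematic of the idea: replace Metropolis-Hastings transitions with
unadjusted Langevin steps so the whole weight computation stays
differentiable. Function names, the linear annealing path, and step sizes
are assumptions; this is not the paper's exact algorithm.

```python
import torch

def ais_ula(log_p0, log_p1, x, betas, step=1e-2):
    # Anneal from p0 (tractable) to p1 (target) without accept/reject,
    # accumulating AIS importance log-weights along the path.
    def log_pb(z, beta):
        return (1.0 - beta) * log_p0(z) + beta * log_p1(z)

    logw = torch.zeros(x.shape[0])
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Importance-weight increment between consecutive temperatures.
        logw = logw + log_pb(x, b) - log_pb(x, b_prev)
        # Unadjusted Langevin move targeting the current distribution;
        # no MH correction, so gradients can flow through every step.
        g = torch.autograd.grad(log_pb(x, b).sum(), x, create_graph=True)[0]
        x = x + step * g + (2.0 * step) ** 0.5 * torch.randn_like(x)
    return x, logw

# x must carry gradients, e.g. x = torch.randn(64, 2, requires_grad=True)
```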
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Enhancing the Transferability of Adversarial Attacks through Variance
Tuning [6.5328074334512]
We propose a new method called variance tuning to enhance the class of iterative gradient-based attack methods.
Empirical results on the standard ImageNet dataset demonstrate that our method could significantly improve the transferability of gradient-based adversarial attacks.
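A hedged sketch of variance tuning on top of a momentum iterative attack:
the gradient at the current point is corrected by the variance
(neighborhood average minus current gradient) gathered at the previous
step. Sample counts and the neighborhood radius are placeholders.

```python
import numpy as np

def vt_step(x, grad_fn, momentum, variance, alpha=2/255, mu=1.0,
            n=5, beta=1.5, eps=16/255):
    g = grad_fn(x)
    # Tune the current gradient with the variance from the last point.
    g_hat = g + variance
    momentum = mu * momentum + g_hat / (np.abs(g_hat).mean() + 1e-12)
    # Recompute the variance from gradients at random neighbors of x.
    neighbors = [grad_fn(x + np.random.uniform(-beta * eps, beta * eps,
                                               size=x.shape))
                 for _ in range(n)]
    variance = np.mean(neighbors, axis=0) - g
    x = x + alpha * np.sign(momentum)
    return x, momentum, variance
```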
arXiv Detail & Related papers (2021-03-29T12:41:55Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost.
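A minimal sketch of a coordinate-wise ZO gradient estimate with
importance sampling: coordinates are drawn according to an importance
distribution and the finite-difference estimates are reweighted to keep
the estimator unbiased. The probabilities `probs` would come from the
method's importance design, which is not reproduced here.

```python
import numpy as np

def zo_coordinate_estimate(f, x, probs, k=10, mu=1e-4):
    d = x.size
    g = np.zeros_like(x)
    # Draw k coordinates with replacement according to their importance.
    idx = np.random.choice(d, size=k, p=probs)
    for i in idx:
        e = np.zeros_like(x)
        e[i] = 1.0
        # Central finite difference along coordinate i, reweighted by
        # 1 / (k * p_i) so the estimate is unbiased in expectation.
        g[i] += (f(x + mu * e) - f(x - mu * e)) / (2 * mu) / (k * probs[i])
    return g
```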
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- SSGD: A safe and efficient method of gradient descent [0.5099811144731619]
The gradient descent method plays an important role in solving various optimization problems.
The super gradient descent approach updates parameters by concealing the length of the gradient.
Our algorithm can defend against attacks on the gradient.
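The entry is terse, but "concealing the length of the gradient" suggests
updating along the gradient direction only. A hedged sketch of that
reading, not necessarily the paper's exact rule:

```python
import numpy as np

def ssgd_step(x, grad, lr=0.1, eps=1e-12):
    # Use only the direction of the gradient; its magnitude (which could
    # leak information or destabilize the update) is hidden from the step.
    return x - lr * grad / (np.linalg.norm(grad) + eps)
```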
arXiv Detail & Related papers (2020-12-03T17:09:20Z)