Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial
Attacks
- URL: http://arxiv.org/abs/2010.04205v1
- Date: Thu, 8 Oct 2020 18:36:51 GMT
- Title: Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial
Attacks
- Authors: Anit Kumar Sahu, Satya Narayan Shukla, J. Zico Kolter
- Abstract summary: We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle.
We use this setting to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM).
We show that the method uses fewer queries and achieves higher attack success rates than the current state of the art.
- Score: 86.88061841975482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of generating adversarial examples in a black-box
setting, where we only have access to a zeroth order oracle, providing us with
loss function evaluations. Although this setting has been investigated in
previous work, most past approaches using zeroth order optimization implicitly
assume that the gradients of the loss function with respect to the input images
are \emph{unstructured}. In this work, we show that in fact substantial
correlations exist within these gradients, and we propose to capture these
correlations via a Gaussian Markov random field (GMRF). Given the
intractability of the explicit covariance structure of the MRF, we show that
the covariance structure can be efficiently represented using the Fast Fourier
Transform (FFT), along with low-rank updates to perform exact posterior
estimation under this model. We use this modeling technique to find fast
one-step adversarial attacks, akin to a black-box version of the Fast Gradient
Sign Method~(FGSM), and show that the method uses fewer queries and achieves
higher attack success rates than the current state of the art. We also
highlight the general applicability of this gradient modeling setup.
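The modeling pipeline in the abstract (a stationary Gaussian prior over the gradient whose covariance is diagonalized by the FFT, plus exact posterior estimation from a small number of queries) can be sketched in a few lines. This is a deliberately simplified illustration, not the authors' implementation: the circulant prior covariance, the i.i.d. Gaussian observation noise on the directional derivatives, and the function names are all assumptions.

```python
import numpy as np

def circulant_matvec(spec, v):
    # Multiply v by a stationary (circulant) covariance C = F^H diag(spec) F,
    # using FFTs instead of ever forming the d x d matrix C.
    return np.real(np.fft.ifft(spec * np.fft.fft(v)))

def posterior_mean_gradient(spec, Q, y, noise_var=1e-4):
    """Exact posterior mean of the gradient g ~ N(0, C) given k noisy
    directional derivatives y_i = q_i^T g + eps_i.

    spec : (d,) eigenvalues of the prior covariance in the Fourier basis
    Q    : (d, k) query directions as columns
    y    : (k,) finite-difference estimates of the directional derivatives
    """
    # C Q, computed with one FFT matvec per query direction.
    CQ = np.stack([circulant_matvec(spec, Q[:, i]) for i in range(Q.shape[1])], axis=1)
    # Only a small k x k dense system needs solving; this is the low-rank
    # structure that keeps posterior updates cheap as queries accumulate.
    K = Q.T @ CQ + noise_var * np.eye(Q.shape[1])
    return CQ @ np.linalg.solve(K, y)
```

The posterior mean (or its sign) can then drive an FGSM-style one-step perturbation; each additional query only enlarges the small k x k system.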
Related papers
- Covariance-Adaptive Sequential Black-box Optimization for Diffusion Targeted Generation [60.41803046775034]
We show how to perform user-preferred targeted generation via diffusion models with only black-box target scores of users.
Experiments on both numerical test problems and target-guided 3D-molecule generation tasks show the superior performance of our method in achieving better target scores.
arXiv Detail & Related papers (2024-06-02T17:26:27Z)
- Robust Stochastic Optimization via Gradient Quantile Clipping [6.2844649973308835]
We introduce a quantile clipping strategy for Stochastic Gradient Descent (SGD).
We use quantiles of the gradient norm as clipping thresholds, which is robust to heavy-tailed noise and outliers.
We propose an implementation of the algorithm using rolling quantiles.
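As a rough sketch of the stated idea (not the paper's algorithm; the class name, window size, and defaults are assumptions), clipping the gradient norm at a rolling quantile might look like:

```python
import numpy as np
from collections import deque

class QuantileClippedSGD:
    """SGD step with the gradient norm clipped at a rolling quantile,
    so heavy-tailed outlier gradients are rescaled rather than applied raw."""

    def __init__(self, lr=0.1, q=0.9, window=100):
        self.lr, self.q = lr, q
        self.norms = deque(maxlen=window)  # rolling history of gradient norms

    def step(self, params, grad):
        n = np.linalg.norm(grad)
        self.norms.append(n)
        tau = np.quantile(self.norms, self.q)  # clipping threshold
        if n > tau:
            grad = grad * (tau / n)  # project outliers back to the threshold
        return params - self.lr * grad
```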
arXiv Detail & Related papers (2023-09-29T15:24:48Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Score-based diffusion models for accelerated MRI [35.3148116010546]
We introduce a way to sample data from a conditional distribution given the measurements, such that the model can be readily used for solving inverse problems in imaging.
Our model requires magnitude images only for training, and yet is able to reconstruct complex-valued data, and even extends to parallel imaging.
arXiv Detail & Related papers (2021-10-08T08:42:03Z)
- COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic Convex Optimization [4.970364068620608]
We exploit convexity and L-smoothness to improve the noisy estimates output by the gradient oracle.
We show that increasing the number and proximity of the queried points leads to better gradient estimates.
We also apply COCO in vanilla settings by plugging it into existing algorithms, such as SGD, Adam or STRSAGA.
arXiv Detail & Related papers (2021-09-07T17:21:09Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in iteration complexity and in function query cost.
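For context, the generic two-point zeroth-order gradient estimator such methods build on is easy to state (this sketch uses plain Gaussian directions; the paper's coordinate importance sampling is not reproduced here):

```python
import numpy as np

def zo_gradient(f, x, k=20, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate: average k random
    directional finite differences of the black-box objective f."""
    if rng is None:
        rng = np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(k):
        u = rng.standard_normal(x.size)
        # Central finite difference along u approximates the directional
        # derivative u^T grad f(x); multiplying by u makes it unbiased
        # because E[u u^T] = I for standard Gaussian directions.
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / k
```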
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
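A minimal version of a Gaussian-mixture-style classification loss (identity covariances, a subtractive margin, and the regularization weight are illustrative assumptions here, not the paper's exact formulation) can be written as:

```python
import numpy as np

def gm_loss(features, labels, means, margin=0.1, lam=0.1):
    """Distance-based classification loss: logits are negative squared
    distances to per-class Gaussian means, with a margin on the true class
    and a likelihood term pulling features toward their class mean.

    features : (n, d) deep features
    labels   : (n,) integer class labels
    means    : (k, d) per-class Gaussian means
    """
    # Squared distance from every feature to every class mean -> (n, k).
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    logits = -0.5 * d2
    # Margin: penalize the true class logit, enlarging inter-class separation.
    logits[np.arange(len(labels)), labels] -= margin
    # Numerically stable softmax cross-entropy on the distance-based logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Likelihood regularization: accurate modeling of the feature distribution.
    lkd = 0.5 * d2[np.arange(len(labels)), labels].mean()
    return ce + lam * lkd
```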
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- Ridge Regression with Frequent Directions: Statistical and Optimization Perspectives [1.0152838128195465]
We show that frequent directions (FD) can be used in the optimization setting through an iterative scheme which yields high-accuracy solutions.
This improves on randomized approaches, which must trade off the need for a new sketch at every iteration against speed of convergence.
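The frequent directions (FD) primitive underlying this work maintains a small sketch B with B^T B ≈ A^T A. The ridge-regression iterative scheme itself is not reproduced in this sketch; the doubled buffer and shrinkage rule below are one standard variant:

```python
import numpy as np

def _shrink(B, ell):
    # SVD-shrink the buffer so at most ell rows remain nonzero.
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    delta = s[ell] ** 2 if len(s) > ell else 0.0
    s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
    out = np.zeros_like(B)
    out[:len(s)] = s[:, None] * Vt
    return out

def frequent_directions(A, ell):
    """Stream the rows of A (n x d) into an ell x d sketch B
    satisfying B^T B <= A^T A in the positive-semidefinite order."""
    n, d = A.shape
    B = np.zeros((2 * ell, d))  # doubled buffer amortizes the SVD cost
    nxt = 0
    for row in A:
        if nxt == 2 * ell:  # buffer full: shrink back to ell live rows
            B = _shrink(B, ell)
            nxt = ell
        B[nxt] = row
        nxt += 1
    return _shrink(B, ell)[:ell]
```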
arXiv Detail & Related papers (2020-11-06T21:40:38Z)
- Image Inpainting with Learnable Feature Imputation [8.293345261434943]
A regular convolution layer applying a filter in the same way over known and unknown areas causes visual artifacts in the inpainted image.
We propose (layer-wise) feature imputation of the missing input values to a convolution.
We present comparisons with the current state of the art on CelebA-HQ and Places2 to validate our model.
arXiv Detail & Related papers (2020-11-02T16:05:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.