Improving Transferability of Adversarial Examples via Bayesian Attacks
- URL: http://arxiv.org/abs/2307.11334v1
- Date: Fri, 21 Jul 2023 03:43:07 GMT
- Title: Improving Transferability of Adversarial Examples via Bayesian Attacks
- Authors: Qizhang Li, Yiwen Guo, Xiaochen Yang, Wangmeng Zuo, Hao Chen
- Abstract summary: We introduce a novel extension by incorporating the Bayesian formulation into the model input as well, enabling the joint diversification of both the model input and model parameters.
Our method achieves a new state-of-the-art on transfer-based attacks, improving the average success rate on ImageNet and CIFAR-10 by 19.14% and 2.08%, respectively.
- Score: 84.90830931076901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a substantial extension of our work published at ICLR.
Our ICLR work advocated for enhancing the transferability of adversarial
examples by incorporating a Bayesian formulation into the model parameters,
which effectively emulates an ensemble of infinitely many deep neural networks.
In this paper, we introduce a novel extension that incorporates the Bayesian
formulation into the model input as well, enabling the joint diversification of
both the model input and the model parameters. Our empirical
findings demonstrate that: 1) the combination of Bayesian formulations for both
the model input and model parameters yields significant improvements in
transferability; 2) by introducing advanced approximations of the posterior
distribution over the model input, adversarial transferability achieves further
enhancement, surpassing all state-of-the-art methods when attacking without model
fine-tuning. Moreover, we propose a principled approach to fine-tune model
parameters in such an extended Bayesian formulation. The derived optimization
objective inherently encourages flat minima in the parameter space and input
space. Extensive experiments demonstrate that our method achieves a new
state-of-the-art on transfer-based attacks, improving the average success rate
on ImageNet and CIFAR-10 by 19.14% and 2.08%, respectively, compared with our
basic Bayesian method from ICLR. We will make our code publicly available.
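For intuition, below is a minimal PyTorch sketch of the core idea in the abstract: each attack iteration averages gradients over randomly perturbed copies of both the surrogate parameters and the input, approximating the Bayesian ensemble by Monte Carlo sampling. This is an illustrative approximation under stated assumptions, not the authors' released code; the function name `bayesian_attack_step` and hyperparameters such as `n_samples`, `param_sigma`, and `input_sigma` are assumptions.
```python
# Minimal sketch (not the authors' implementation): one iteration of a transfer
# attack that approximates the Bayesian treatment by sampling Gaussian noise on
# both the surrogate parameters and the input, then averaging the gradients.
import copy
import torch
import torch.nn.functional as F

def bayesian_attack_step(surrogate, x_adv, y, x_clean, eps, step_size,
                         n_samples=5, param_sigma=0.01, input_sigma=0.05):
    """One signed-gradient step averaged over sampled (weights, input) pairs.

    surrogate: white-box source model; x_adv: current adversarial batch;
    x_clean: original inputs used to project back into the eps-ball.
    Hyperparameter names and values here are illustrative assumptions.
    """
    grad_accum = torch.zeros_like(x_adv)
    for _ in range(n_samples):
        model = copy.deepcopy(surrogate)          # perturb a throwaway copy
        with torch.no_grad():
            for p in model.parameters():          # crude isotropic "posterior" sample
                p.add_(param_sigma * torch.randn_like(p))
        # sample a perturbed input around the current adversarial example
        x = (x_adv.detach() + input_sigma * torch.randn_like(x_adv)).requires_grad_(True)
        loss = F.cross_entropy(model(x), y)       # untargeted attack loss
        grad_accum += torch.autograd.grad(loss, x)[0]
    # signed ascent step, then project into the L_inf ball and valid pixel range
    x_next = x_adv + step_size * grad_accum.sign()
    x_next = torch.clamp(torch.min(torch.max(x_next, x_clean - eps), x_clean + eps), 0, 1)
    return x_next.detach()
```
In a full attack, this step would be repeated for a fixed number of iterations starting from x_adv = x_clean, in the style of iterative FGSM; the paper's fine-tuning of the surrogate toward flat minima in parameter and input space is a separate stage not shown in this sketch.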
Related papers
- Enhancing the Transferability of Adversarial Attacks on Face Recognition with Diverse Parameters Augmentation [29.5096732465412]
Face Recognition (FR) models are vulnerable to adversarial examples that subtly manipulate benign face images.
Existing adversarial attack methods often overlook the potential benefits of augmenting the surrogate model.
We propose a novel attack method called Diverse Parameters Augmentation (DPA).
arXiv Detail & Related papers (2024-11-23T13:22:37Z)
- Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement [0.7558576228782637]
We propose a framework for efficient Source-Free Domain Adaptation (SFDA).
Our approach introduces an improved paradigm for source-model preparation and target-side adaptation.
We demonstrate that our framework is compatible with various SFDA methods and achieves significant computational efficiency.
arXiv Detail & Related papers (2024-10-03T02:12:03Z)
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models.
We propose a novel model fine-tuning method to make full use of these ineffective parameters.
Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- DPPA: Pruning Method for Large Language Model to Model Merging [39.13317231533299]
We introduce a dual-stage method termed Dynamic Pruning Partition Amplification (DPPA) to tackle the challenge of merging complex fine-tuned models.
We show that our method maintains a mere 20% of domain-specific parameters and yet delivers a performance comparable to other methodologies.
Our method displays outstanding performance post-pruning, leading to a significant improvement of nearly 20% performance in model merging.
arXiv Detail & Related papers (2024-03-05T09:12:49Z)
- Variance-Preserving-Based Interpolation Diffusion Models for Speech Enhancement [53.2171981279647]
We present a framework that encapsulates both the VP- and variance-exploding (VE)-based diffusion methods.
To improve performance and ease model training, we analyze the common difficulties encountered in diffusion models.
We evaluate our model against several methods using a public benchmark to showcase the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-14T14:22:22Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate to attack a Bayesian model for achieving desirable transferability.
Our method outperforms recent state-of-the-art methods by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Model Selection for Bayesian Autoencoders [25.619565817793422]
We propose to optimize the distributional sliced-Wasserstein distance between the output of the autoencoder and the empirical data distribution.
We turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space.
We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results.
arXiv Detail & Related papers (2021-06-11T08:55:00Z)