GADT: Enhancing Transferable Adversarial Attacks through Gradient-guided Adversarial Data Transformation
- URL: http://arxiv.org/abs/2410.18648v1
- Date: Thu, 24 Oct 2024 11:21:49 GMT
- Title: GADT: Enhancing Transferable Adversarial Attacks through Gradient-guided Adversarial Data Transformation
- Authors: Yating Ma, Xiaogang Xu, Liming Fang, Zhe Liu
- Abstract summary: We propose a novel DA-based attack algorithm, GADT, which identifies suitable DA parameters through iterative antagonism and uses posterior estimates to update AN based on these parameters.
GADT can be utilized in other black-box attack scenarios, e.g., query-based attacks, offering a new avenue to enhance attacks on real-world AI applications.
- Score: 17.391427373945803
- License:
- Abstract: Current Transferable Adversarial Examples (TAE) are primarily generated by adding Adversarial Noise (AN). Recent studies emphasize the importance of optimizing Data Augmentation (DA) parameters along with AN, which poses a greater threat to real-world AI applications. However, existing DA-based strategies often struggle to find optimal solutions due to the challenging DA search procedure without proper guidance. In this work, we propose a novel DA-based attack algorithm, GADT. GADT identifies suitable DA parameters through iterative antagonism and uses posterior estimates to update AN based on these parameters. We uniquely employ a differentiable DA operation library to identify adversarial DA parameters and introduce a new loss function as a metric during DA optimization. This loss term enhances adversarial effects while preserving the original image content, maintaining attack crypticity. Extensive experiments on public datasets with various networks demonstrate that GADT can be integrated with existing transferable attack methods, updating their DA parameters effectively while retaining their AN formulation strategies. Furthermore, GADT can be utilized in other black-box attack scenarios, e.g., query-based attacks, offering a new avenue to enhance attacks on real-world AI applications in both research and industrial contexts.
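The abstract describes the alternating search over DA parameters and AN only in words. Below is a minimal, hypothetical sketch of such a loop in PyTorch, assuming a toy differentiable DA operation (per-batch brightness and contrast) and an L2 content-preservation penalty; the function names, step sizes, and operation set are illustrative assumptions, not the paper's actual operation library, posterior-estimate update, or loss.

```python
# Hedged sketch of a GADT-style alternating loop: search differentiable DA
# parameters, then update the adversarial noise (AN) on the augmented input.
# All names (apply_da, lambda_content, step sizes) are illustrative assumptions.
import torch
import torch.nn.functional as F

def apply_da(x, theta):
    """Toy differentiable DA: per-batch brightness and contrast parameters."""
    brightness, contrast = theta[0], theta[1]
    return torch.clamp(contrast * x + brightness, 0.0, 1.0)

def gadt_like_attack(model, x, y, steps=10, eps=8 / 255,
                     alpha=2 / 255, da_lr=0.05, lambda_content=1.0):
    delta = torch.zeros_like(x)                            # adversarial noise (AN)
    theta = torch.tensor([0.0, 1.0], requires_grad=True)   # DA parameters

    for _ in range(steps):
        # 1) Search adversarial DA parameters: raise the classification loss
        #    while penalizing deviation from the current image content.
        x_aug = apply_da(x + delta, theta)
        loss_da = F.cross_entropy(model(x_aug), y) \
            - lambda_content * F.mse_loss(x_aug, x + delta)
        grad_theta, = torch.autograd.grad(loss_da, theta)
        theta = (theta + da_lr * grad_theta).detach().requires_grad_(True)

        # 2) Update AN on the freshly augmented input (PGD-style sign step).
        delta = delta.detach().requires_grad_(True)
        loss_an = F.cross_entropy(model(apply_da(x + delta, theta)), y)
        grad_delta, = torch.autograd.grad(loss_an, delta)
        delta = torch.clamp(delta + alpha * grad_delta.sign(), -eps, eps).detach()

    return torch.clamp(x + delta, 0.0, 1.0)
```

In practice the AN step would follow whichever base transferable attack GADT is paired with; this sketch only shows the alternation pattern.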
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z) - Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z) - Adversarial and Random Transformations for Robust Domain Adaptation and Generalization [9.995765847080596]
We show that by simply applying consistency training with random data augmentation, state-of-the-art results on domain adaptation (DA) and generalization (DG) can be obtained.
The combined adversarial and random transformations based method outperforms the state-of-the-art on multiple DA and DG benchmark datasets.
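The entry names consistency training with random data augmentation without spelling it out; a minimal sketch of one consistency-loss step is given below, assuming random horizontal flips as the augmentation and a KL-divergence consistency term. Both choices are placeholders, not the paper's exact recipe.

```python
# Hedged sketch of consistency training with random data augmentation:
# predictions on the unaugmented view supervise the augmented view.
import torch
import torch.nn.functional as F

def consistency_step(model, x, lambda_consistency=1.0):
    # Random augmentation: random horizontal flips as a simple placeholder.
    flip = torch.rand(x.size(0), device=x.device) < 0.5
    x_aug = torch.where(flip[:, None, None, None], x.flip(-1), x)

    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)            # "teacher" view
    log_p_aug = F.log_softmax(model(x_aug), dim=1)      # "student" view

    # KL(p_clean || p_aug): penalize disagreement between the two views.
    return lambda_consistency * F.kl_div(log_p_aug, p_clean, reduction="batchmean")
```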
arXiv Detail & Related papers (2022-11-13T02:10:13Z) - Universal Adaptive Data Augmentation [30.83891617679216]
"Universal Adaptive Data Augmentation" (UADA) is a novel data augmentation strategy.
We randomly decide types and magnitudes of DA operations for every data batch during training.
UADA adaptively updates DA's parameters according to the target model's gradient information.
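As a rough illustration of the mechanism described above (random operation types and magnitudes per batch, magnitudes adapted with the target model's gradient), here is a hypothetical sketch; the two-operation library, step size, and clipping range are assumptions, not UADA's actual parameterization.

```python
# Hedged, UADA-style sketch: pick a random DA op per batch, then nudge its
# magnitude along the sign of the training-loss gradient.
import random
import torch
import torch.nn.functional as F

DA_OPS = {
    "brightness": lambda x, m: torch.clamp(x + m, 0.0, 1.0),
    "contrast":   lambda x, m: torch.clamp((1.0 + m) * x, 0.0, 1.0),
}

def uada_like_batch(model, x, y, magnitudes, step=0.01):
    op_name = random.choice(list(DA_OPS))            # random op type per batch
    m = magnitudes[op_name].detach().requires_grad_(True)

    loss = F.cross_entropy(model(DA_OPS[op_name](x, m)), y)
    grad_m, = torch.autograd.grad(loss, m, retain_graph=True)

    # Adapt the magnitude using the target model's gradient information.
    magnitudes[op_name] = torch.clamp(m + step * grad_m.sign(), -0.5, 0.5).detach()
    return loss  # reuse for the usual model update (loss.backward(), etc.)
```

A caller would keep `magnitudes` (e.g., `{"brightness": torch.tensor(0.0), "contrast": torch.tensor(0.0)}`) across batches and still run the normal backward pass and optimizer step on the returned loss.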
arXiv Detail & Related papers (2022-07-14T05:05:43Z) - Revisiting Deep Subspace Alignment for Unsupervised Domain Adaptation [42.16718847243166]
Unsupervised domain adaptation (UDA) aims to transfer and adapt knowledge from a labeled source domain to an unlabeled target domain.
Traditionally, subspace-based methods form an important class of solutions to this problem.
This paper revisits the use of subspace alignment for UDA and proposes a novel adaptation algorithm that consistently leads to improved generalization.
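The classical linear baseline that this paper revisits can be sketched as follows; the subspace dimension and the use of `torch.pca_lowrank` are illustrative assumptions, and the paper's deep variant is not reproduced here.

```python
# Hedged sketch of classical subspace alignment: learn M = S^T T so that
# source features projected through S and M land in the target subspace T.
import torch

def subspace_align(source_feats, target_feats, d=32):
    # Top-d principal directions of each domain (columns are basis vectors).
    _, _, Vs = torch.pca_lowrank(source_feats, q=d)
    _, _, Vt = torch.pca_lowrank(target_feats, q=d)

    M = Vs.T @ Vt                         # alignment matrix between the bases
    src_aligned = source_feats @ Vs @ M   # source features in the target subspace
    tgt_proj = target_feats @ Vt          # target features in their own subspace
    return src_aligned, tgt_proj
```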
arXiv Detail & Related papers (2022-01-05T20:16:38Z) - Dynamic Domain Adaptation for Efficient Inference [12.713628738434881]
Domain adaptation (DA) enables knowledge transfer from a labeled source domain to an unlabeled target domain.
Most prior DA approaches leverage complicated and powerful deep neural networks to improve the adaptation capacity.
We propose a dynamic domain adaptation (DDA) framework that enables efficient target inference in low-resource scenarios.
arXiv Detail & Related papers (2021-03-26T08:53:16Z) - Unsupervised and self-adaptative techniques for cross-domain person re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it performs feature-learning adaptation from a model trained on a source domain to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
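The offline triplet-construction strategy itself is not described in this snippet; the sketch below only illustrates how anchor/positive/negative triplets are typically consumed during training, via a standard triplet margin loss (an assumption about the objective, not the paper's exact formulation).

```python
# Hedged sketch: standard triplet margin loss over embedding distances.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    d_pos = F.pairwise_distance(anchor, positive)   # pull same-identity pairs
    d_neg = F.pairwise_distance(anchor, negative)   # push different identities
    return F.relu(d_pos - d_neg + margin).mean()
```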
arXiv Detail & Related papers (2021-03-21T23:58:39Z) - Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to the unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
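For context, the standard adversarial-training baseline referred to above can be sketched as a PGD-based inner maximization followed by a model update; this is a hedged sketch of that baseline, not the paper's ADT formulation.

```python
# Hedged sketch of one PGD-based adversarial-training step.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y,
                              eps=8 / 255, alpha=2 / 255, pgd_steps=7):
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random start in the eps-ball
    for _ in range(pgd_steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps).detach()

    optimizer.zero_grad()
    loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
    loss.backward()                                   # train on adversarial examples
    optimizer.step()
    return loss.item()
```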
arXiv Detail & Related papers (2020-02-14T12:36:59Z)