The Power of Many: Synergistic Unification of Diverse Augmentations for Efficient Adversarial Robustness
- URL: http://arxiv.org/abs/2508.03213v1
- Date: Tue, 05 Aug 2025 08:42:14 GMT
- Title: The Power of Many: Synergistic Unification of Diverse Augmentations for Efficient Adversarial Robustness
- Authors: Wang Yu-Hang, Shiwei Li, Jianxiang Liao, Li Bohan, Jian Liu, Wenfei Yin
- Abstract summary: Adversarial perturbations pose a significant threat to deep learning models. Adversarial Training (AT) faces challenges of high computational costs and a degradation in standard performance. We propose the Universal Adversarial Augmenter (UAA) framework, which is characterized by its plug-and-play nature and training efficiency.
- Score: 6.471349369877151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial perturbations pose a significant threat to deep learning models. Adversarial Training (AT), the predominant defense method, faces challenges of high computational costs and a degradation in standard performance. While data augmentation offers an alternative path, existing techniques either yield limited robustness gains or incur substantial training overhead. Therefore, developing a defense mechanism that is both highly efficient and strongly robust is of paramount importance. In this work, we first conduct a systematic analysis of existing augmentation techniques, revealing that the synergy among diverse strategies -- rather than any single method -- is crucial for enhancing robustness. Based on this insight, we propose the Universal Adversarial Augmenter (UAA) framework, which is characterized by its plug-and-play nature and training efficiency. UAA decouples the expensive perturbation generation process from model training by pre-computing a universal transformation offline, which is then used to efficiently generate unique adversarial perturbations for each sample during training. Extensive experiments conducted on multiple benchmarks validate the effectiveness of UAA. The results demonstrate that UAA establishes a new state-of-the-art (SOTA) for data-augmentation-based adversarial defense strategies, without requiring the online generation of adversarial examples during training. This framework provides a practical and efficient pathway for building robust models. Our code is available in the supplementary materials.
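The offline/online split described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the actual UAA method: the "universal transformation" is stood in for by a fixed bounded sign pattern, and the per-sample modulation is a simple intensity mask; both are hypothetical placeholders, since the abstract does not specify how the transformation is computed or applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline phase (hypothetical stand-in): pre-compute one universal
# perturbation direction of bounded L-inf norm, reused for all samples.
def precompute_universal(shape, eps=8 / 255):
    u = rng.standard_normal(shape)
    return eps * np.sign(u)  # fixed, bounded universal direction

# Online phase: derive a *unique* perturbation per sample cheaply by
# modulating the universal direction with a sample-dependent mask,
# instead of running an iterative attack during training.
def perturb(x, universal):
    mask = (x > x.mean()).astype(x.dtype)  # cheap per-sample modulation (illustrative)
    return np.clip(x + mask * universal, 0.0, 1.0)

universal = precompute_universal((3, 32, 32))
batch = rng.random((4, 3, 32, 32))
adv = np.stack([perturb(x, universal) for x in batch])
print(adv.shape)  # (4, 3, 32, 32)
```

The point of the sketch is the cost profile: the expensive step runs once offline, while the per-sample online step is a mask-and-add, so no adversarial examples are generated online during training.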
Related papers
- Evolution-based Region Adversarial Prompt Learning for Robustness Enhancement in Vision-Language Models [52.8949080772873]
We propose an evolution-based region adversarial prompt tuning method called ER-APT. In each training iteration, we first generate AEs using traditional gradient-based methods. Subsequently, a genetic evolution mechanism incorporating selection, mutation, and crossover is applied to optimize the AEs. The final evolved AEs are used for prompt tuning, achieving region-based adversarial optimization instead of conventional single-point adversarial prompt tuning.
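The selection/mutation/crossover loop described above can be sketched generically. This is a toy illustration, not ER-APT itself: the fitness function here is the loss of a hypothetical fixed linear model, and all population sizes and rates are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fitness (hypothetical): loss of a fixed linear model.
# Higher loss = stronger adversarial example.
w = rng.standard_normal(10)
def fitness(delta, x):
    return float(np.abs(w @ (x + delta)))

def evolve_aes(x, pop_size=8, gens=5, eps=0.1):
    # Initial population, e.g. from gradient-sign steps (random signs here).
    pop = [eps * np.sign(rng.standard_normal(x.shape)) for _ in range(pop_size)]
    for _ in range(gens):
        # Selection: keep the fitter half as parents.
        pop.sort(key=lambda d: fitness(d, x), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            # Crossover: element-wise mix of two parents.
            m = rng.random(x.shape) < 0.5
            child = np.where(m, parents[a], parents[b])
            # Mutation: small noise, then project back into the eps-ball.
            child = np.clip(child + 0.01 * rng.standard_normal(x.shape), -eps, eps)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda d: fitness(d, x))

x = rng.random(10)
best = evolve_aes(x)
```

In ER-APT the evolved AEs then drive the prompt-tuning step; this sketch only shows the evolutionary optimization of the perturbations themselves.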
arXiv Detail & Related papers (2025-03-17T07:08:47Z) - Improving the Transferability of Adversarial Examples by Inverse Knowledge Distillation [15.362394334872077]
Inverse Knowledge Distillation (IKD) is designed to enhance adversarial transferability effectively. IKD integrates with gradient-based attack methods, promoting diversity in attack gradients and mitigating overfitting to specific model architectures. Experiments on the ImageNet dataset validate the effectiveness of our approach.
arXiv Detail & Related papers (2025-02-24T09:35:30Z) - Sustainable Self-evolution Adversarial Training [51.25767996364584]
We propose a Sustainable Self-Evolution Adversarial Training (SSEAT) framework for adversarial training defense models. We introduce a continual adversarial defense pipeline to realize learning from various kinds of adversarial examples. We also propose an adversarial data replay module to better select more diverse and key relearning data.
arXiv Detail & Related papers (2024-12-03T08:41:11Z) - ExAL: An Exploration Enhanced Adversarial Learning Algorithm [0.0]
We propose a novel Exploration-enhanced Adversarial Learning Algorithm (ExAL).
ExAL integrates exploration-driven mechanisms to discover perturbations that maximize impact on the model's decision boundary.
We evaluate the performance of ExAL on the MNIST Handwritten Digits and Blended Malware datasets.
arXiv Detail & Related papers (2024-11-24T15:37:29Z) - MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks [21.227398434694724]
We introduce an innovative framework that incorporates a precision-optimized noise predictor to enhance the effectiveness of our attack framework.
Our framework provides a cutting-edge solution for multi-modal adversarial attacks, ensuring reduced latency.
We demonstrate that our framework achieves outstanding transferability and robustness against purification defenses.
arXiv Detail & Related papers (2024-10-17T23:52:39Z) - Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks [0.0]
Adversarial attacks pose significant threats to the robustness of deep learning models in image classification.
This paper explores and refines defense mechanisms against these attacks to enhance the resilience of neural networks.
arXiv Detail & Related papers (2024-08-20T02:00:02Z) - AI Safety in Practice: Enhancing Adversarial Robustness in Multimodal Image Captioning [0.0]
Multimodal machine learning models that combine visual and textual data are increasingly being deployed in critical applications.
This paper presents an effective strategy to enhance the robustness of multimodal image captioning models against adversarial attacks.
arXiv Detail & Related papers (2024-07-30T20:28:31Z) - Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z) - AROID: Improving Adversarial Robustness Through Online Instance-Wise Data Augmentation [6.625868719336385]
Adversarial training (AT) is an effective defense against adversarial examples.
Data augmentation (DA) was shown to be effective in mitigating robust overfitting if appropriately designed and optimized for AT.
This work proposes a new method to automatically learn online, instance-wise, DA policies to improve robust generalization for AT.
arXiv Detail & Related papers (2023-06-12T15:54:52Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients [65.52888259961803]
We introduce ROPUST, a simple and efficient method to leverage robust pre-trained models and increase their robustness.
We test our method on nine different models against four attacks in RobustBench, consistently improving over state-of-the-art performance.
We show that even with state-of-the-art phase retrieval techniques, ROPUST remains an effective defense.
arXiv Detail & Related papers (2021-07-06T12:03:36Z) - Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
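The hypersphere embedding mechanism referenced above is commonly implemented by normalizing both features and classifier weights to unit norm, so that logits become scaled cosine similarities. A minimal sketch follows; the function name and the scale value are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypersphere embedding (illustrative): project features and per-class
# weight vectors onto the unit sphere, so each logit is a cosine
# similarity scaled by a fixed temperature-like factor.
def cosine_logits(features, weights, scale=16.0):
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T

rng = np.random.default_rng(2)
feats = rng.standard_normal((4, 8))   # batch of 4 feature vectors
W = rng.standard_normal((10, 8))      # 10 class weight vectors
logits = cosine_logits(feats, W)
print(logits.shape)  # (4, 10)
```

Because every logit is a bounded cosine, the margin structure on the sphere, rather than raw feature magnitude, drives the loss during adversarial training.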
arXiv Detail & Related papers (2020-02-20T08:42:29Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to the unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.