Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
- URL: http://arxiv.org/abs/2402.12187v1
- Date: Mon, 19 Feb 2024 14:51:20 GMT
- Title: Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
- Authors: Leo Hyun Park, Jaeuk Kim, Myung Gyo Oh, Jaewoo Park, Taekyoung Kwon
- Abstract summary: Adversarial training is used to mitigate deep learning models' vulnerability to adversarial attacks by increasing robustness against them.
This approach typically reduces a model's standard accuracy on clean, non-adversarial samples.
This paper proposes a novel adversarial training method called Adversarial Feature Alignment (AFA) to address these problems.
- Score: 10.099179580467737
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models continue to advance in accuracy, yet they remain
vulnerable to adversarial attacks, which often lead to the misclassification of
adversarial examples. Adversarial training is used to mitigate this problem by
increasing robustness against these attacks. However, this approach typically
reduces a model's standard accuracy on clean, non-adversarial samples. The
necessity for deep learning models to balance both robustness and accuracy for
security is obvious, but achieving this balance remains challenging, and the
underlying reasons are yet to be clarified. This paper proposes a novel
adversarial training method called Adversarial Feature Alignment (AFA) to
address these problems. Our research unveils an intriguing insight:
misalignment within the feature space often leads to misclassification,
regardless of whether the samples are benign or adversarial. AFA mitigates this
risk by employing a novel optimization algorithm based on contrastive learning
to alleviate potential feature misalignment. Through our evaluations, we
demonstrate the superior performance of AFA. The baseline AFA delivers higher
robust accuracy than previous adversarial contrastive learning methods while
minimizing the drop in clean accuracy to 1.86% and 8.91% on CIFAR10 and
CIFAR100, respectively, in comparison to cross-entropy. We also show that joint
optimization of AFA and TRADES, accompanied by data augmentation using a recent
diffusion model, achieves state-of-the-art accuracy and robustness.
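
The headline recipe above pairs AFA's contrastive feature alignment with TRADES. Below is a minimal, hypothetical sketch of such a joint objective, assuming a model that returns both logits and penultimate features; the supervised-contrastive alignment term and the hyperparameters (beta, align_weight, temperature) are illustrative assumptions, not the authors' exact AFA optimization algorithm.

```python
import torch
import torch.nn.functional as F

def supcon_align_loss(feats, labels, temperature=0.1):
    """Supervised contrastive alignment: pull same-class features together
    (clean and adversarial views alike) and push different classes apart."""
    feats = F.normalize(feats, dim=1)                    # unit-norm features
    sim = feats @ feats.t() / temperature                # pairwise similarities
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)      # avoid -inf * 0 = NaN
    pos_mask = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)        # positives per anchor
    return -((log_prob * pos_mask).sum(dim=1) / pos_counts).mean()

def joint_loss(model, x_clean, x_adv, y, beta=6.0, align_weight=1.0):
    """Clean cross-entropy + TRADES-style KL term + feature alignment."""
    logits_clean, feats_clean = model(x_clean)   # assumed (logits, features)
    logits_adv, feats_adv = model(x_adv)
    ce = F.cross_entropy(logits_clean, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1), reduction="batchmean")
    align = supcon_align_loss(torch.cat([feats_clean, feats_adv], dim=0),
                              torch.cat([y, y], dim=0))
    return ce + beta * kl + align_weight * align
```

In practice, x_adv would come from an inner maximization such as PGD, and beta = 6 follows common TRADES settings.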
Related papers
- New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes [11.694880978089852]
Adversarial Training (AT) is one of the most effective methods to enhance the robustness of DNNs.
Existing AT methods suffer from an inherent trade-off between adversarial robustness and clean accuracy.
We propose a new AT paradigm by introducing an additional dummy class for each original class (a sketch of a dummy-class head appears after this list).
arXiv Detail & Related papers (2024-10-16T15:36:10Z)
- FACTUAL: A Novel Framework for Contrastive Learning Based Robust SAR Image Classification [10.911464455072391]
FACTUAL is a Contrastive Learning framework for Adversarial Training and robust SAR classification.
Our model achieves 99.7% accuracy on clean samples, and 89.6% on perturbed samples, both outperforming previous state-of-the-art methods.
arXiv Detail & Related papers (2024-04-04T06:20:22Z)
- The Effectiveness of Random Forgetting for Robust Generalization [21.163070161951868]
We introduce a novel learning paradigm called "Forget to Mitigate Overfitting" (FOMO).
FOMO alternates between a forgetting phase, which randomly forgets a subset of weights, and a relearning phase, which emphasizes learning generalizable features (a sketch of the forgetting phase appears after this list).
Our experiments show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy.
arXiv Detail & Related papers (2024-02-18T23:14:40Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [82.3052187788609]
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.
Recent works show generalization improvement with adversarial samples under novel threat models.
We propose a novel threat model called the Joint Space Threat Model (JSTM).
Under JSTM, we develop novel adversarial attacks and defenses.
arXiv Detail & Related papers (2021-12-12T21:08:14Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Adversarial Feature Stacking for Accurate and Robust Predictions [4.208059346198116]
The Adversarial Feature Stacking (AFS) model can jointly take advantage of features with varied levels of robustness and accuracy (a minimal sketch of feature stacking appears after this list).
We evaluate the AFS model on CIFAR-10 and CIFAR-100 datasets with strong adaptive attack methods.
arXiv Detail & Related papers (2021-03-24T12:01:24Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy (a sketch of such a schedule appears after this list).
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
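Code sketches

For the dummy-class paradigm above, a minimal sketch of the core idea, assuming each of the C original classes gets a dummy twin toward which adversarial examples are steered; the loss weighting and the inference rule are assumptions, not the paper's exact recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DummyClassHead(nn.Module):
    """Classifier head with a dummy twin for each of the C original classes."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.num_classes = num_classes
        self.fc = nn.Linear(feat_dim, 2 * num_classes)   # originals + dummies

    def forward(self, feats):
        return self.fc(feats)

def dummy_class_loss(head, feats_clean, feats_adv, y):
    """Clean samples target class y; adversarial samples target dummy y + C."""
    clean_loss = F.cross_entropy(head(feats_clean), y)
    adv_loss = F.cross_entropy(head(feats_adv), y + head.num_classes)
    return clean_loss + adv_loss

def predict(head, feats):
    """Fold a dummy prediction k + C back to its original class k."""
    return head(feats).argmax(dim=1) % head.num_classes
```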
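For FOMO, a minimal sketch of the forgetting phase, assuming "forgetting" means randomly re-initializing a fraction of each layer's weights; the forgetting rate, layer selection, and re-initialization scheme are assumptions for illustration:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def forget_weights(model, forget_rate=0.1):
    """Forgetting phase: randomly re-initialize a fraction of layer weights."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight
            mask = torch.rand_like(w) < forget_rate   # weights to forget
            fresh = torch.empty_like(w)
            nn.init.kaiming_normal_(fresh)            # fresh random values
            w.copy_(torch.where(mask, fresh, w))

# The relearning phase is ordinary (adversarial) training; the two alternate:
#   for epoch in range(num_epochs):
#       train_one_epoch(model, loader)                # relearning
#       if (epoch + 1) % forget_every == 0:
#           forget_weights(model, forget_rate=0.1)    # forgetting
```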
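For Adversarial Feature Stacking, a minimal, speculative sketch assuming two frozen feature extractors at different robustness levels whose features are concatenated and fed to a trainable fusion classifier; the two-extractor setup and the linear fusion head are assumptions:

```python
import torch
import torch.nn as nn

class FeatureStackingClassifier(nn.Module):
    """Stacks features from an accurate (standard-trained) extractor and a
    robust (adversarially trained) extractor, then classifies the stack."""
    def __init__(self, standard_extractor, robust_extractor,
                 feat_dim, num_classes):
        super().__init__()
        self.standard = standard_extractor            # frozen: clean-trained
        self.robust = robust_extractor                # frozen: adv-trained
        for p in list(self.standard.parameters()) + list(self.robust.parameters()):
            p.requires_grad_(False)
        self.head = nn.Linear(2 * feat_dim, num_classes)  # fusion classifier

    def forward(self, x):
        stacked = torch.cat([self.standard(x), self.robust(x)], dim=1)
        return self.head(stacked)
```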
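For the "slow start, fast decay" schedule, a minimal sketch using PyTorch's LambdaLR, assuming a linear warmup followed by exponential decay; the exact curve and hyperparameters (warmup_steps, decay_rate) are illustrative assumptions:

```python
import torch
import torch.nn as nn

def slow_start_fast_decay(step, warmup_steps=500, decay_rate=0.05):
    """Multiplicative LR factor: slow linear warmup, then fast decay."""
    if step < warmup_steps:
        return (step + 1) / warmup_steps                          # slow start
    return decay_rate ** ((step - warmup_steps) / warmup_steps)   # fast decay

model = nn.Linear(10, 2)                                  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, slow_start_fast_decay)
# Per fine-tuning step: optimizer.step(); scheduler.step()
```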