CAT: Customized Adversarial Training for Improved Robustness
- URL: http://arxiv.org/abs/2002.06789v1
- Date: Mon, 17 Feb 2020 06:13:05 GMT
- Title: CAT: Customized Adversarial Training for Improved Robustness
- Authors: Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh
- Abstract summary: We propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial training.
We show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods through extensive experiments.
- Score: 142.3480998034692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training has become one of the most effective methods for
improving robustness of neural networks. However, it often suffers from poor
generalization on both clean and perturbed data. In this paper, we propose a
new algorithm, named Customized Adversarial Training (CAT), which adaptively
customizes the perturbation level and the corresponding label for each training
sample in adversarial training. We show that the proposed algorithm achieves
better clean and robust accuracy than previous adversarial training methods
through extensive experiments.
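As a rough illustration of the two per-sample knobs the abstract describes (the perturbation level and the training label), here is a minimal PyTorch sketch. The epsilon update rule, the smoothing coefficient C_SMOOTH, and the PGD settings are illustrative assumptions, not the paper's exact algorithm.
```python
import torch
import torch.nn.functional as F

EPS_MAX, EPS_STEP, C_SMOOTH = 8 / 255, 1 / 255, 10.0  # illustrative constants

def pgd_attack(model, x, y, eps, alpha, steps=7):
    """PGD within per-sample L-inf balls; eps has shape [B, 1, 1, 1]."""
    delta = (torch.rand_like(x) * 2 - 1) * eps
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = torch.max(torch.min(delta + alpha * grad.sign(), eps), -eps)
    return (x + delta).clamp(0, 1)

def cat_step(model, opt, x, y, eps_i, num_classes=10):
    """One step: attack each sample at its own radius, then adapt radius and label."""
    x_adv = pgd_attack(model, x, y, eps_i.view(-1, 1, 1, 1), alpha=EPS_STEP)
    with torch.no_grad():
        resisted = model(x_adv).argmax(1) == y
    # Assumed update rule: grow the radius only where the model already resists.
    eps_i = torch.where(resisted, (eps_i + EPS_STEP).clamp(max=EPS_MAX), eps_i)
    # Assumed label customization: smoothing strength grows with the radius.
    smooth = (C_SMOOTH * eps_i).clamp(max=0.5).unsqueeze(1)
    target = (1 - smooth) * F.one_hot(y, num_classes).float() + smooth / num_classes
    loss = -(target * F.log_softmax(model(x_adv), dim=1)).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return eps_i
```
Here `eps_i` would be a per-example buffer maintained across epochs (e.g., initialized to zero and indexed by sample id), so each example's radius grows only as fast as the model learns to resist it.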
Related papers
- Criticality Leveraged Adversarial Training (CLAT) for Boosted Performance via Parameter Efficiency [15.211462468655329]
CLAT introduces parameter efficiency into the adversarial training process, improving both clean accuracy and adversarial robustness.
It can be applied on top of existing adversarial training methods, significantly reducing the number of trainable parameters by approximately 95%.
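A minimal sketch of the parameter-efficiency idea: freeze the whole network and adversarially fine-tune only a few "critical" layers. The gradient-norm selection criterion below is an assumption for illustration; the paper defines its own criticality measure.
```python
import torch.nn.functional as F

def select_critical_layers(model, x_adv, y, k=2):
    """Rank layers by gradient norm of the adversarial loss (assumed criterion)."""
    model.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    scores = {}
    for name, p in model.named_parameters():
        layer = name.rsplit(".", 1)[0]  # group parameters by module path
        g = 0.0 if p.grad is None else p.grad.norm().item()
        scores[layer] = scores.get(layer, 0.0) + g
    model.zero_grad()
    return sorted(scores, key=scores.get, reverse=True)[:k]

def freeze_except(model, critical_layers):
    """Freeze all parameters outside the selected layers (the ~95% reduction)."""
    for name, p in model.named_parameters():
        p.requires_grad = any(name.startswith(c) for c in critical_layers)
```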
arXiv Detail & Related papers (2024-08-19T17:58:03Z)
- Stability and Generalization in Free Adversarial Training [9.831489366502302]
We study the generalization performance of adversarial training methods using the algorithmic stability framework.
Our proven generalization bounds indicate that the free adversarial training method could enjoy a lower generalization gap between training and test samples.
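For context, the "free" adversarial training scheme analyzed here reuses a single backward pass per minibatch replay to update both the weights and the perturbation. A compact sketch, with illustrative hyperparameters:
```python
import torch
import torch.nn.functional as F

def free_at_epoch(model, opt, loader, eps=8 / 255, replays=4):
    """One epoch of free AT: each minibatch is replayed `replays` times."""
    delta = None  # perturbation persists across minibatches
    for x, y in loader:
        if delta is None or delta.shape != x.shape:
            delta = torch.zeros_like(x)
        for _ in range(replays):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            opt.zero_grad()
            loss.backward()            # one pass: grads for weights AND delta
            opt.step()                 # weight update reuses the same backward
            with torch.no_grad():      # ascent step on the perturbation
                delta = (delta + eps * delta.grad.sign()).clamp(-eps, eps)
```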
arXiv Detail & Related papers (2024-04-13T12:07:20Z)
- Enhancing Adversarial Training via Reweighting Optimization Trajectory [72.75558017802788]
A number of approaches, such as extra regularization, adversarial weight perturbation, and training with more data, have been proposed to address robust overfitting.
We propose a new method named Weighted Optimization Trajectories (WOT) that leverages the optimization trajectories of adversarial training over time.
Our results show that WOT integrates seamlessly with existing adversarial training methods and consistently overcomes the robust overfitting issue.
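A sketch of the trajectory-reweighting idea: keep weight snapshots along the adversarial-training run and load a convex combination of them into the model. How WOT actually chooses the weights is not reproduced here; the uniform fallback is an assumption.
```python
def blend_trajectory(model, snapshots, weights=None):
    """Load a convex combination of saved state_dicts into `model`.

    snapshots: list of state_dicts captured during adversarial training,
    e.g. copy.deepcopy(model.state_dict()) every few epochs.
    """
    if weights is None:  # uniform fallback (assumption)
        weights = [1.0 / len(snapshots)] * len(snapshots)
    ref = snapshots[0]
    blended = {k: sum(w * sd[k].float() for w, sd in zip(weights, snapshots)).to(ref[k].dtype)
               for k in ref}
    model.load_state_dict(blended)
```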
arXiv Detail & Related papers (2023-06-25T15:53:31Z)
- Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
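A sketch of the non-zero-sum recipe as described: the attacker ascends a margin-based surrogate of misclassification while the trainer descends its own cross-entropy loss on the resulting points, so the two players optimize different objectives. Step sizes and iteration counts are illustrative.
```python
import torch
import torch.nn.functional as F

def negative_margin(logits, y):
    """max_{j != y} z_j - z_y: positive exactly when the sample is misclassified."""
    z_y = logits.gather(1, y[:, None]).squeeze(1)
    z_other = logits.scatter(1, y[:, None], float("-inf")).amax(1)
    return z_other - z_y

def margin_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner player: ascend the margin surrogate, not the trainer's loss."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        obj = negative_margin(model((x + delta).clamp(0, 1)), y).sum()
        grad, = torch.autograd.grad(obj, delta)
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1)

def trainer_step(model, opt, x, y):
    """Outer player: minimize its own cross-entropy on the attacker's points."""
    x_adv = margin_attack(model, x, y)
    loss = F.cross_entropy(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()
```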
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
- CAT: Collaborative Adversarial Training [80.55910008355505]
We propose a collaborative adversarial training framework to improve the robustness of neural networks.
Specifically, we use different adversarial training methods to train robust models and let the models share their knowledge during the training process.
CAT achieves state-of-the-art adversarial robustness on CIFAR-10 under the AutoAttack benchmark without using any additional data.
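A minimal sketch of the collaborative interaction: two networks, each with its own adversarial-example generator, also distill from each other's predictions. The KL interaction term and its weight `beta` are assumptions for illustration.
```python
import torch
import torch.nn.functional as F

def collaborative_step(m1, m2, opt1, opt2, x, y, attack1, attack2, beta=1.0):
    """Each model trains on its own adversarial data plus a peer-distillation term."""
    x1, x2 = attack1(m1, x, y), attack2(m2, x, y)   # e.g. two different AT methods
    for model, opt, x_adv, peer in ((m1, opt1, x1, m2), (m2, opt2, x2, m1)):
        with torch.no_grad():
            peer_soft = F.softmax(peer(x_adv), dim=1)   # the peer's knowledge
        logits = model(x_adv)
        loss = (F.cross_entropy(logits, y)
                + beta * F.kl_div(F.log_softmax(logits, dim=1), peer_soft,
                                  reduction="batchmean"))
        opt.zero_grad(); loss.backward(); opt.step()
```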
arXiv Detail & Related papers (2023-03-27T05:37:43Z)
- A Novel Noise Injection-based Training Scheme for Better Model Robustness [9.749718440407811]
Noise injection-based methods have been shown to improve the robustness of artificial neural networks.
In this work, we propose a novel noise injection-based training scheme for better model robustness.
Experimental results show that our proposed method achieves much better adversarial robustness and slightly better clean accuracy.
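The paper's specific scheme is not reproduced here; the sketch below shows one common instantiation of noise injection, in which weights are perturbed with Gaussian noise for the forward/backward pass and restored before the update.
```python
import torch
import torch.nn.functional as F

def noisy_step(model, opt, x, y, sigma=0.01):
    """Compute gradients at Gaussian-perturbed weights, apply them to the clean ones."""
    saved = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))   # inject weight noise
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p, w in zip(model.parameters(), saved):
            p.copy_(w)                            # restore the clean weights
    opt.step()                                    # update using the noisy gradients
```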
arXiv Detail & Related papers (2023-02-17T02:50:25Z)
- Efficient and Effective Augmentation Strategy for Adversarial Training [48.735220353660324]
Adversarial training of deep neural networks is known to be significantly more data-hungry than standard training.
We propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training.
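A simplified sketch of joint training over diverse augmentations: each batch contributes an adversarial loss under a simple view and a complex view. Using a single shared model and AutoAugment as the "complex" augmentation is a simplification of the paper's full recipe.
```python
import torch.nn.functional as F
from torchvision import transforms

simple_aug = transforms.Compose([transforms.RandomCrop(32, padding=4),
                                 transforms.RandomHorizontalFlip()])
complex_aug = transforms.AutoAugment(transforms.AutoAugmentPolicy.CIFAR10)

def joint_aug_step(model, opt, attack, x_uint8, y):
    """Sum the adversarial losses from a simple and a complex view of the batch."""
    loss = 0.0
    for aug in (simple_aug, complex_aug):         # applied batch-wise for brevity
        x_aug = aug(x_uint8).float() / 255.0      # AutoAugment expects uint8 tensors
        x_adv = attack(model, x_aug, y)           # any inner maximization, e.g. PGD
        loss = loss + F.cross_entropy(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()
```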
arXiv Detail & Related papers (2022-10-27T10:59:55Z)
- Adversarial Coreset Selection for Efficient Robust Training [11.510009152620666]
We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
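A deliberately simple stand-in for the selection step: score each example by its adversarial loss and keep the top fraction, then train only on that subset. The paper's actual selection is more principled; this only illustrates the train-on-a-subset workflow.
```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

def select_coreset(model, dataset, attack, frac=0.5, batch_size=256):
    """Keep the examples with the highest adversarial loss (a simple proxy score)."""
    scores = []
    for x, y in DataLoader(dataset, batch_size=batch_size):
        x_adv = attack(model, x, y)
        with torch.no_grad():
            scores.append(F.cross_entropy(model(x_adv), y, reduction="none").cpu())
    scores = torch.cat(scores)
    keep = scores.topk(int(frac * len(dataset))).indices.tolist()
    return Subset(dataset, keep)   # run robust training on this subset only
```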
arXiv Detail & Related papers (2022-09-13T07:37:53Z)
- Efficient Robust Training via Backward Smoothing [125.91185167854262]
Adversarial training is the most effective strategy for defending against adversarial examples.
However, it suffers from high computational cost because each training step runs an iterative adversarial attack.
Recent studies show that it is possible to achieve fast adversarial training by performing a single-step attack instead.
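The single-step recipe referenced above, in the common random-init FGSM form; the paper's backward-smoothing regularizer itself is not reproduced here.
```python
import torch
import torch.nn.functional as F

def fast_at_step(model, opt, x, y, eps=8 / 255, alpha=10 / 255):
    """Random start plus a single FGSM step, then one weight update."""
    delta = (torch.rand_like(x) * 2 - 1) * eps            # random init in the ball
    delta.requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)   # one FGSM step
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    opt.zero_grad(); loss.backward(); opt.step()
```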
arXiv Detail & Related papers (2020-10-03T04:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.