CAT: Collaborative Adversarial Training
- URL: http://arxiv.org/abs/2303.14922v1
- Date: Mon, 27 Mar 2023 05:37:43 GMT
- Title: CAT: Collaborative Adversarial Training
- Authors: Xingbin Liu, Huafeng Kuang, Xianming Lin, Yongjian Wu, Rongrong Ji
- Abstract summary: We propose a collaborative adversarial training framework to improve the robustness of neural networks.
Specifically, we use different adversarial training methods to train robust models and let models interact with their knowledge during the training process.
CAT achieves state-of-the-art adversarial robustness without using any additional data on CIFAR-10 under the Auto-Attack benchmark.
- Score: 80.55910008355505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training can improve the robustness of neural networks. Previous
methods focus on a single adversarial training strategy and do not consider
the properties of models trained with different strategies. By revisiting
these methods, we find that different adversarial training methods exhibit
distinct robustness on individual sample instances. For example, a sample
instance can be
correctly classified by a model trained using standard adversarial training
(AT) but not by a model trained using TRADES, and vice versa. Based on this
observation, we propose a collaborative adversarial training framework to
improve the robustness of neural networks. Specifically, we use different
adversarial training methods to train robust models and let models interact
with their knowledge during the training process. Collaborative Adversarial
Training (CAT) can improve both robustness and accuracy. Extensive experiments
on various networks and datasets validate the effectiveness of our method. CAT
achieves state-of-the-art adversarial robustness without using any additional
data on CIFAR-10 under the Auto-Attack benchmark. Code is available at
https://github.com/liuxingbin/CAT.
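The collaborative idea in the abstract (two models, each trained with its own adversarial objective, exchanging knowledge during training) can be illustrated with a toy sketch. This is not the paper's implementation (the official code is at the GitHub link above); it is a minimal numpy illustration on a logistic model, where the FGSM attack, the soft-label collaboration term, and the weight `alpha` are all simplifying assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, x, y, eps):
    # FGSM for logistic regression: perturb x along the sign of dL/dx,
    # where dL/dx for binary cross-entropy is (p - y) * w.
    p = sigmoid(w @ x)
    return x + eps * np.sign((p - y) * w)

def collaborative_step(w1, w2, x, y, eps=0.1, lr=0.1, alpha=0.5):
    """One hypothetical collaborative update: each model trains on its own
    adversarial example, plus a cross-entropy term pulling its prediction
    toward the peer's (held fixed) prediction."""
    x1, x2 = fgsm(w1, x, y, eps), fgsm(w2, x, y, eps)
    p1, p2 = sigmoid(w1 @ x1), sigmoid(w2 @ x2)
    # grad of BCE(p, target) w.r.t. w is (p - target) * x_adv
    g1 = (p1 - y) * x1 + alpha * (p1 - p2) * x1
    g2 = (p2 - y) * x2 + alpha * (p2 - p1) * x2
    return w1 - lr * g1, w2 - lr * g2

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=3), rng.normal(size=3)
x, y = np.array([1.0, -0.5, 0.3]), 1.0
for _ in range(200):
    w1, w2 = collaborative_step(w1, w2, x, y)
print(sigmoid(w1 @ x), sigmoid(w2 @ x))  # both move toward the label
```

In an actual framework one model would use standard AT and the other TRADES, and the exchanged knowledge would be full class distributions rather than a single probability; this sketch only shows the mutual-distillation structure.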
Related papers
- Fast Propagation is Better: Accelerating Single-Step Adversarial
Training via Sampling Subnetworks [69.54774045493227]
A drawback of adversarial training is the computational overhead introduced by the generation of adversarial examples.
We propose to exploit the interior building blocks of the model to improve efficiency.
Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness.
arXiv Detail & Related papers (2023-10-24T01:36:20Z)
- Adversarial Coreset Selection for Efficient Robust Training [11.510009152620666]
We show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times.
arXiv Detail & Related papers (2022-09-13T07:37:53Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has been shown to be an effective approach.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks [10.058068783476598]
This paper proposes a meta-learning method for adversarially robust neural networks called Long-term Cross Adversarial Training (LCAT).
Thanks to cross adversarial training, LCAT needs only half as many adversarial training epochs as AQ, lowering the adversarial training cost.
Experiment results show that LCAT achieves superior clean and adversarial few-shot classification accuracy.
arXiv Detail & Related papers (2021-06-22T06:31:16Z)
- Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z)
- Single-step Adversarial training with Dropout Scheduling [59.50324605982158]
We show that models trained using single-step adversarial training method learn to prevent the generation of single-step adversaries.
Models trained using the proposed single-step adversarial training method are robust against both single-step and multi-step adversarial attacks.
arXiv Detail & Related papers (2020-04-18T14:14:00Z)
- CAT: Customized Adversarial Training for Improved Robustness [142.3480998034692]
We propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial training.
We show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods through extensive experiments.
arXiv Detail & Related papers (2020-02-17T06:13:05Z)
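The per-sample customization described in the Customized Adversarial Training entry (adapting the perturbation level to each training sample) can be sketched as a simple schedule. The update rule below is a hypothetical heuristic for illustration, not the algorithm from that paper: grow a sample's perturbation radius while the model withstands the attack, and shrink it otherwise.

```python
import numpy as np

def adapt_eps(eps, correct_under_attack, step=0.01, eps_max=0.3):
    # Hypothetical per-sample schedule: raise the radius while the model
    # still classifies the adversarial example correctly; otherwise back off.
    if correct_under_attack:
        return min(eps + step, eps_max)
    return max(eps - step, 0.0)

# Toy bookkeeping: one perturbation radius per training sample.
eps_per_sample = np.full(4, 0.05)
survived = [True, False, True, True]  # did each sample resist the last attack?
eps_per_sample = np.array(
    [adapt_eps(e, s) for e, s in zip(eps_per_sample, survived)]
)
print(eps_per_sample)  # radii rise for robust samples, fall for broken ones
```

The appeal of such schedules is that easy samples end up trained at larger radii while hard samples are not pushed past a radius the model can handle, which is the intuition behind per-sample customization.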
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.