Get Fooled for the Right Reason: Improving Adversarial Robustness
through a Teacher-guided Curriculum Learning Approach
- URL: http://arxiv.org/abs/2111.00295v1
- Date: Sat, 30 Oct 2021 17:47:14 GMT
- Title: Get Fooled for the Right Reason: Improving Adversarial Robustness
through a Teacher-guided Curriculum Learning Approach
- Authors: Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian
- Abstract summary: Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only in the regularizers applied at either the inner maximization or the outer minimization step.
We propose a non-iterative method that enforces the following ideas during training.
Our method achieves significant performance gains with a little extra effort (10-20%) over existing AT models.
- Score: 17.654350836042813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current SOTA adversarially robust models are mostly based on adversarial
training (AT) and differ only by some regularizers either at inner maximization
or outer minimization steps. Because the inner maximization step is iterative in
nature, these methods take a long time to train. We propose a non-iterative
method that enforces the following ideas during training. Attribution maps are
more aligned to the actual object in the image for adversarially robust models
compared to naturally trained models. Also, the allowed set of pixels to
perturb an image (that changes model decision) should be restricted to the
object pixels only, which reduces the attack strength by limiting the attack
space. Our method achieves significant performance gains with a little extra
effort (10-20%) over existing AT models and outperforms all other methods in
terms of adversarial as well as natural accuracy. We have performed extensive
experimentation with CIFAR-10, CIFAR-100, and TinyImageNet datasets and
reported results against many popular strong adversarial attacks to prove the
effectiveness of our method.
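The second idea in the abstract, restricting the allowed perturbation to object pixels only, can be illustrated with a small sketch. The code below is not the authors' released implementation: it assumes a PyTorch classifier and a precomputed binary object mask (e.g., thresholded from an attribution map or a segmentation), and applies that mask inside a plain PGD loop; `masked_pgd` and its hyperparameters are illustrative names only.
```python
# Minimal sketch (not the paper's implementation): a PGD attack whose
# perturbation is confined to object pixels via a binary mask, illustrating
# how restricting the attack space to the object limits the attacker.
import torch
import torch.nn.functional as F

def masked_pgd(model, x, y, obj_mask, eps=8/255, alpha=2/255, steps=10):
    """obj_mask: tensor broadcastable to x; 1 on object pixels, 0 elsewhere.
    Assumed to come from an attribution map or segmentation (hypothetical)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            step = alpha * grad.sign() * obj_mask          # move only object pixels
            delta = torch.clamp(x_adv + step - x, -eps, eps) * obj_mask
            x_adv = torch.clamp(x + delta, 0.0, 1.0).detach()
    return x_adv
```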
Related papers
- Reducing Adversarial Training Cost with Gradient Approximation [0.3916094706589679]
We propose a new and efficient adversarial training method, adversarial training with gradient approximation (GAAT), to reduce the cost of building robust models.
Our proposed method saves up to 60% of the training time while achieving comparable model test accuracy on the evaluated datasets.
arXiv Detail & Related papers (2023-09-18T03:55:41Z) - On Evaluating the Adversarial Robustness of Semantic Segmentation Models [0.0]
A number of adversarial training approaches have been proposed as a defense against adversarial perturbation.
We show for the first time that a number of models in previous work that are claimed to be robust are in fact not robust at all.
We then evaluate simple adversarial training algorithms that produce reasonably robust models even under our set of strong attacks.
arXiv Detail & Related papers (2023-06-25T11:45:08Z) - Class-Conditioned Transformation for Enhanced Robust Image Classification [19.738635819545554]
We propose a novel test-time, threat model-agnostic algorithm that enhances Adversarially Trained (AT) models.
Our method operates through COnditional image transformation and DIstance-based Prediction (CODIP).
The proposed method achieves state-of-the-art results demonstrated through extensive experiments on various models, AT methods, datasets, and attack types.
arXiv Detail & Related papers (2023-03-27T17:28:20Z) - Two Heads are Better than One: Robust Learning Meets Multi-branch Models [14.72099568017039]
We propose Branch Orthogonality adveRsarial Training (BORT) to obtain state-of-the-art performance with solely the original dataset for adversarial training.
We evaluate our approach on CIFAR-10, CIFAR-100, and SVHN against ℓ∞ norm-bounded perturbations of size ε = 8/255 (a minimal PGD evaluation sketch under this threat model appears after this list).
arXiv Detail & Related papers (2022-08-17T05:42:59Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA), which is trained to automatically align features across arbitrary attack strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Optimal Transport as a Defense Against Adversarial Attacks [4.6193503399184275]
Adversarial attacks can find a human-imperceptible perturbation for a given image that will mislead a trained model.
Previous work aimed to align original and adversarial image representations in the same way as domain adaptation to improve robustness.
We propose to use a loss between distributions that faithfully reflect the ground distance.
This leads to SAT (Sinkhorn Adversarial Training), a more robust defense against adversarial attacks.
arXiv Detail & Related papers (2021-02-05T13:24:36Z) - Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z) - Bag of Tricks for Adversarial Training [50.53525358778331]
Adversarial training is one of the most effective strategies for promoting model robustness.
Recent benchmarks show that most of the proposed improvements on AT are less effective than simply early stopping the training procedure.
arXiv Detail & Related papers (2020-10-01T15:03:51Z) - Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z) - Single-step Adversarial training with Dropout Scheduling [59.50324605982158]
We show that models trained using a single-step adversarial training method learn to prevent the generation of single-step adversaries.
Models trained using the proposed single-step adversarial training method are robust against both single-step and multi-step adversarial attacks.
arXiv Detail & Related papers (2020-04-18T14:14:00Z) - Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
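As a companion to the ℓ∞, ε = 8/255 threat model quoted in the entry on multi-branch models above, here is a hedged sketch of how robust accuracy is commonly measured with a PGD attack under that bound. The function names (`pgd_linf`, `robust_accuracy`) and the step settings are illustrative assumptions, not taken from any of the listed papers.
```python
# Hedged sketch: robust-accuracy evaluation under an l_inf PGD attack with
# eps = 8/255 (the bound quoted above). Names and hyperparameters are
# illustrative, not from any listed paper.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=20):
    delta = torch.empty_like(x).uniform_(-eps, eps)          # random start
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps)
            delta = torch.clamp(x + delta, 0.0, 1.0) - x      # keep pixels in [0, 1]
    return (x + delta).detach()

def robust_accuracy(model, loader, device="cuda"):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_linf(model, x, y)                         # attack needs gradients
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```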
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.