Topology-Preserving Adversarial Training
- URL: http://arxiv.org/abs/2311.17607v1
- Date: Wed, 29 Nov 2023 13:05:06 GMT
- Title: Topology-Preserving Adversarial Training
- Authors: Xiaoyue Mi, Fan Tang, Yepeng Weng, Danding Wang, Juan Cao, Sheng Tang,
Peng Li, Yang Liu
- Abstract summary: Adversarial training has suffered from the natural accuracy degradation problem.
We propose Topology-pReserving Adversarial traINing (TRAIN) to alleviate the problem.
Our proposed method achieves up to 8.78% improvement in natural accuracy and 4.50% improvement in robust accuracy.
- Score: 28.129537658382848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the effectiveness in improving the robustness of neural networks,
adversarial training has suffered from the natural accuracy degradation
problem, i.e., accuracy on natural samples drops significantly. In this
study, we reveal that natural accuracy degradation is highly related to the
disruption of the natural sample topology in the representation space by
quantitative and qualitative experiments. Based on this observation, we propose
Topology-pReserving Adversarial traINing (TRAIN) to alleviate the problem by
preserving the topology structure of natural samples from a standard model
trained only on natural samples during adversarial training. As an additional
regularization, our method can easily be combined with various popular
adversarial training algorithms in a plug-and-play manner, taking advantage of
both sides. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny ImageNet
show that our proposed method achieves consistent and significant improvements
over various strong baselines in most cases. Specifically, without additional
data, our proposed method achieves up to 8.78% improvement in natural accuracy
and 4.50% improvement in robust accuracy.
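The core idea — keeping the adversarially trained model's representation of natural samples consistent with the neighborhood structure of a frozen standard model — can be sketched as an affinity-matching regularizer. This is an illustrative sketch only: the affinity construction, temperature, and KL form below are assumptions, not the paper's exact loss.

```python
import numpy as np

def pairwise_affinity(feats, temperature=0.5):
    """Row-normalized cosine-similarity distribution: a simple proxy for the
    'topology' of a batch in representation space."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # unit-normalize rows
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def topology_loss(robust_feats, standard_feats):
    """KL divergence between the affinity graph of the adversarially trained
    model and that of the frozen standard model on the same natural batch;
    it would be added to the usual adversarial loss as a regularizer."""
    p = pairwise_affinity(standard_feats)  # reference topology (frozen model)
    q = pairwise_affinity(robust_feats)    # current topology under training
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))) / len(p))
```

The loss is zero when the two models induce the same neighborhood structure on natural samples and grows as adversarial training distorts it.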
Related papers
- Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks [0.0]
This research pioneers the integration of gradient sampling optimization techniques, particularly StochGradAdam, into the neural network pruning process.
Our main objective is to address the significant challenge of maintaining accuracy in pruned neural models, critical in resource-constrained scenarios.
arXiv Detail & Related papers (2023-12-26T12:19:22Z)
- Debias the Training of Diffusion Models [53.49637348771626]
We provide theoretical evidence that the prevailing practice of using a constant loss weight strategy in diffusion models leads to biased estimation during the training phase.
We propose an elegant and effective weighting strategy grounded in the theoretically unbiased principle.
These analyses are expected to advance our understanding and demystify the inner workings of diffusion models.
arXiv Detail & Related papers (2023-10-12T16:04:41Z)
- Splitting the Difference on Adversarial Training [13.470640587945057]
Adversarial training is one of the most effective defenses against adversarial examples.
In this work, we take a fundamentally different approach by treating the perturbed examples of each class as a separate class to be learned.
This split doubles the number of classes to be learned, but at the same time considerably simplifies the decision boundaries.
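The class-splitting idea can be sketched with a simple label remap: each of the K original classes gets a twin "perturbed" class, and test-time predictions fold the 2K outputs back to K. The names and the max-folding rule below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

NUM_CLASSES = 10  # e.g. CIFAR-10; illustrative setting

def split_label(y, is_adversarial):
    """Map a clean label y to itself and its perturbed counterpart to
    y + NUM_CLASSES, so clean and adversarial versions of each class
    are learned as separate classes."""
    return y + NUM_CLASSES if is_adversarial else y

def merge_prediction(logits):
    """At test time, fold the 2K logits back to K classes by taking the
    elementwise max over each clean/perturbed pair before the argmax."""
    clean, perturbed = logits[:NUM_CLASSES], logits[NUM_CLASSES:]
    return int(np.argmax(np.maximum(clean, perturbed)))
```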
arXiv Detail & Related papers (2023-10-03T23:09:47Z)
- Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness [28.853413482357634]
In this work, we propose a novel Local Structure Preserving (LSP) regularization, which aims to preserve the local structure of the input space in the learned embedding space.
arXiv Detail & Related papers (2023-03-29T17:18:58Z)
- RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness [94.69774317059122]
We show that the effectiveness of the well-celebrated Mixup can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer alongside the standard cross-entropy loss.
This simple change not only provides much improved accuracy but also significantly improves the quality of the predictive uncertainty estimation of Mixup.
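The recipe — standard cross-entropy plus a mixup term used purely as a regularizer — can be sketched as below. The interface, the Beta(alpha, alpha) sampling, and the equal weighting of the two terms are assumptions for illustration.

```python
import numpy as np

def cross_entropy(logits, target_probs):
    """Softmax cross-entropy against a (possibly soft) target distribution."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return float(-(target_probs * logp).sum())

def regmixup_loss(model, x, y, x2, y2, num_classes, alpha=20.0, rng=None):
    """Cross-entropy on the clean example plus cross-entropy on a mixup of
    two examples, so mixup acts as an additional regularizer rather than
    the sole training objective."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                      # interpolation weight
    x_mix = lam * x + (1 - lam) * x2                  # mixed input
    onehot = np.eye(num_classes)
    y_mix = lam * onehot[y] + (1 - lam) * onehot[y2]  # mixed soft label
    return cross_entropy(model(x), onehot[y]) + cross_entropy(model(x_mix), y_mix)
```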
arXiv Detail & Related papers (2022-06-29T09:44:33Z)
- (Certified!!) Adversarial Robustness for Free! [116.6052628829344]
We certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within an L2-norm of 0.5.
We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine-tuning or retraining of model parameters.
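The denoise-then-classify pipeline resembles denoised (randomized) smoothing: add Gaussian noise, denoise each copy (the paper uses a pretrained diffusion model), classify, and take a majority vote. The sketch below stands in caller-supplied dummy functions for the denoiser and classifier; it illustrates the pipeline, not the certification math.

```python
import numpy as np

def denoised_smoothing_predict(x, denoise, classify, sigma=0.5, n=100, rng=None):
    """Majority-vote prediction over Gaussian-perturbed, denoised copies of x.
    `denoise` would be a pretrained diffusion model's one-shot denoiser and
    `classify` an off-the-shelf classifier; both are caller-supplied here."""
    if rng is None:
        rng = np.random.default_rng()
    votes = {}
    for _ in range(n):
        noisy = x + rng.normal(scale=sigma, size=x.shape)  # Gaussian corruption
        label = classify(denoise(noisy))                   # denoise, then classify
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)                       # majority vote
```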
arXiv Detail & Related papers (2022-06-21T17:27:27Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Natural Perturbed Training for General Robustness of Neural Network Classifiers [0.0]
Natural perturbed training shows better and much faster performance than adversarial training on clean, adversarial, and naturally perturbed images.
For CIFAR-10 and STL-10, natural perturbed training even improves accuracy on clean data and reaches state-of-the-art performance.
arXiv Detail & Related papers (2021-03-21T11:47:38Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
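A "slow start, fast decay" schedule can be sketched as a short linear warm-up followed by rapid decay. The exponential decay shape and the warm-up fraction below are assumptions, as the abstract does not specify the exact schedule.

```python
def slow_start_fast_decay_lr(step, total_steps, base_lr=0.01, warmup_frac=0.1):
    """Learning rate at `step`: ramp up linearly over the first warmup_frac
    of training (slow start), then decay exponentially by two orders of
    magnitude over the remainder (fast decay)."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # slow linear ramp-up
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * (0.01 ** progress)             # fast exponential decay
```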
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.