DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers
- URL: http://arxiv.org/abs/2306.09192v2
- Date: Wed, 29 May 2024 00:16:25 GMT
- Title: DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers
- Authors: Chandramouli Sastry, Sri Harsha Dumpala, Sageev Oore
- Abstract summary: We introduce DiffAug, a simple and efficient diffusion-based augmentation technique to train image classifiers.
Applying DiffAug to a given example consists of one forward-diffusion step followed by one reverse-diffusion step.
- Score: 6.131022957085439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DiffAug, a simple and efficient diffusion-based augmentation technique to train image classifiers for the crucial yet challenging goal of improved classifier robustness. Applying DiffAug to a given example consists of one forward-diffusion step followed by one reverse-diffusion step. Using both ResNet-50 and Vision Transformer architectures, we comprehensively evaluate classifiers trained with DiffAug and demonstrate the surprising effectiveness of single-step reverse diffusion in improving robustness to covariate shifts, certified adversarial accuracy and out of distribution detection. When we combine DiffAug with other augmentations such as AugMix and DeepAugment we demonstrate further improved robustness. Finally, building on this approach, we also improve classifier-guided diffusion wherein we observe improvements in: (i) classifier-generalization, (ii) gradient quality (i.e., improved perceptual alignment) and (iii) image generation performance. We thus introduce a computationally efficient technique for training with improved robustness that does not require any additional data, and effectively complements existing augmentation approaches.
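To make the diffuse-and-denoise step concrete, below is a minimal sketch in PyTorch, assuming a pretrained DDPM-style noise-prediction network `eps_model` and its cumulative noise schedule `alphas_cumprod`; these names and the single-step x0 estimate are illustrative assumptions, not the authors' released code.

```python
import torch

def diffaug(x, eps_model, alphas_cumprod, t):
    """One forward-diffusion step followed by one reverse (denoising) step.

    x:              batch of images in [-1, 1], shape (B, C, H, W)
    eps_model:      assumed pretrained noise predictor eps_theta(x_t, t)
    alphas_cumprod: cumulative product of the noise schedule, shape (T,)
    t:              integer timestep controlling augmentation strength
    """
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(x)

    # Forward diffusion: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps
    x_t = a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * noise

    # Single reverse step: estimate x_0 from the predicted noise
    with torch.no_grad():
        t_batch = torch.full((x.shape[0],), t, device=x.device)
        eps_hat = eps_model(x_t, t_batch)
    x0_hat = (x_t - (1.0 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()
    return x0_hat.clamp(-1.0, 1.0)
```

The denoised output is paired with the original label and fed to the classifier during training; the timestep `t` sets how strong the augmentation is.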
Related papers
- Feature Augmentation for Self-supervised Contrastive Learning: A Closer Look [28.350278251132078]
We propose a unified framework to conduct data augmentation in the feature space, known as feature augmentation.
This strategy is domain-agnostic: it augments features similar to the original ones and thus improves data diversity.
arXiv Detail & Related papers (2024-10-16T09:25:11Z)
- Improving the Transferability of Adversarial Examples by Feature Augmentation [6.600860987969305]
We propose a simple but effective feature augmentation attack (FAUG) method, which improves adversarial transferability without introducing extra computation costs.
Specifically, we inject random noise into the intermediate features of the model to enlarge the diversity of the attack gradient.
Our method can be combined with existing gradient attacks to augment their performance further.
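As a rough illustration of feature-space noise injection, the sketch below perturbs an intermediate feature map of a surrogate ResNet-50 through a forward hook; the chosen layer and noise scale are assumptions, not the FAUG authors' configuration.

```python
import torch
import torchvision

# Surrogate model whose intermediate features we perturb.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()

def make_noise_hook(std=0.1):
    """Forward hook that adds Gaussian noise to a layer's output."""
    def hook(module, inputs, output):
        return output + std * torch.randn_like(output)
    return hook

# Perturb features after an intermediate stage (layer choice is an assumption).
handle = model.layer2.register_forward_hook(make_noise_hook(std=0.1))

# Gradients computed through the noisy features can then drive any
# gradient-based attack step to craft more transferable examples.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
loss = model(x)[0, 42]   # logit of an arbitrary target class
loss.backward()          # x.grad now reflects the noise-augmented features
handle.remove()
```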
arXiv Detail & Related papers (2024-07-09T09:41:40Z)
- DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification [55.306583814017046]
We present a novel difficulty-aware semantic augmentation (DASA) approach for speaker verification.
DASA generates diversified training samples in speaker embedding space with negligible extra computing cost.
The best result achieves a 14.6% relative reduction in the EER metric on the CN-Celeb evaluation set.
arXiv Detail & Related papers (2023-10-18T17:07:05Z)
- DualAug: Exploiting Additional Heavy Augmentation with OOD Data Rejection [77.6648187359111]
We propose a novel data augmentation method, named DualAug, to keep the augmentation in distribution as much as possible at a reasonable time and computational cost.
Experiments on supervised image classification benchmarks show that DualAug improves various automated data augmentation methods.
arXiv Detail & Related papers (2023-10-12T08:55:10Z)
- Dynamic Test-Time Augmentation via Differentiable Functions [3.686808512438363]
DynTTA is an image enhancement method that generates recognition-friendly images without retraining the recognition model.
DynTTA is based on differentiable data augmentation techniques and generates a blended image from many augmented images to improve the recognition accuracy under distribution shifts.
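A minimal sketch of the blending idea, assuming a small set of hand-picked differentiable augmentations and learnable softmax blending weights; the actual DynTTA method predicts the blend with a trained module, so this is illustrative only.

```python
import torch
import torch.nn.functional as F

def blend_augmentations(x, logits_w):
    """Blend several differentiable augmentations of x with softmax weights.

    x:        input batch, shape (B, C, H, W), values in [0, 1]
    logits_w: blending logits, shape (K,) for K augmentations
    """
    augs = [
        x,                                          # identity
        torch.clamp(x * 1.2, 0, 1),                 # brightness up
        torch.clamp(x * 0.8, 0, 1),                 # brightness down
        torch.clamp((x - 0.5) * 1.5 + 0.5, 0, 1),   # contrast
        F.avg_pool2d(x, 3, stride=1, padding=1),    # slight blur
    ]
    w = torch.softmax(logits_w, dim=0)
    return sum(w[k] * augs[k] for k in range(len(augs)))

# The blending weights can be optimized on a validation objective
# without retraining the downstream recognition model itself.
logits_w = torch.zeros(5, requires_grad=True)
x_blend = blend_augmentations(torch.rand(4, 3, 224, 224), logits_w)
```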
arXiv Detail & Related papers (2022-12-09T06:06:47Z)
- Soft Augmentation for Image Classification [68.71067594724663]
We propose generalizing augmentation with invariant transforms to soft augmentation.
We show that soft targets allow for more aggressive data augmentation.
We also show that soft augmentations generalize to self-supervised classification tasks.
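As a hedged illustration of soft targets under aggressive augmentation, the sketch below interpolates a one-hot label toward a uniform distribution according to how much of the image the augmentation leaves visible; the linear schedule is an assumption, not the paper's exact mapping.

```python
import torch
import torch.nn.functional as F

def soft_target(labels, num_classes, visibility):
    """Soften one-hot targets by the fraction of the object left visible.

    labels:     ground-truth class indices, shape (B,)
    visibility: per-sample fraction of the image left visible after the
                augmentation (e.g. an aggressive crop), in [0, 1]
    """
    one_hot = F.one_hot(labels, num_classes).float()
    conf = visibility.unsqueeze(1)                   # target confidence
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    # Interpolate between the one-hot target and a uniform distribution.
    return conf * one_hot + (1.0 - conf) * uniform

labels = torch.tensor([3, 7])
targets = soft_target(labels, num_classes=10,
                      visibility=torch.tensor([1.0, 0.4]))
# Train with a soft cross-entropy; recent PyTorch versions of
# torch.nn.CrossEntropyLoss accept probability targets directly.
```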
arXiv Detail & Related papers (2022-11-09T01:04:06Z)
- AugMax: Adversarial Composition of Random Augmentations for Robust Training [118.77956624445994]
We propose a data augmentation framework, termed AugMax, to unify the two aspects of diversity and hardness.
AugMax first randomly samples multiple augmentation operators and then learns an adversarial mixture of the selected operators.
Experiments show that AugMax-DuBIN leads to significantly improved out-of-distribution robustness, outperforming prior art by 3.03%, 3.49%, 1.82% and 0.71%.
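The sketch below illustrates the general adversarial-mixture idea under simplifying assumptions (differentiable augmentation operators, a plain SGD inner loop); it is not the authors' AugMax implementation.

```python
import torch

def adversarial_mixture(x, y, model, ops, steps=5, lr=0.1):
    """Adversarially mix randomly sampled augmentation operators (sketch).

    x, y:  a batch of images and labels
    model: the classifier being trained
    ops:   a list of differentiable augmentation functions
    """
    # Mixing weights over operators and a mixing factor with the clean image,
    # both optimized to *maximize* the training loss (the "hard" direction).
    w = torch.zeros(len(ops), requires_grad=True)
    m = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, m], lr=lr)
    for _ in range(steps):
        mix = sum(p * op(x) for p, op in zip(torch.softmax(w, 0), ops))
        x_aug = torch.sigmoid(m) * x + (1 - torch.sigmoid(m)) * mix
        loss = -torch.nn.functional.cross_entropy(model(x_aug), y)
        opt.zero_grad()
        loss.backward()   # clear model.grad before the actual training step
        opt.step()
    with torch.no_grad():
        mix = sum(p * op(x) for p, op in zip(torch.softmax(w, 0), ops))
        return (torch.sigmoid(m) * x + (1 - torch.sigmoid(m)) * mix).detach()
```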
arXiv Detail & Related papers (2021-10-26T15:23:56Z)
- Image Augmentations for GAN Training [57.65145659417266]
We provide insights and guidelines on how to augment images for both vanilla GANs and GANs with regularizations.
Surprisingly, we find that vanilla GANs attain generation quality on par with recent state-of-the-art results.
arXiv Detail & Related papers (2020-06-04T00:16:02Z)
- PointAugment: an Auto-Augmentation Framework for Point Cloud Classification [105.27565020399]
PointAugment is a new auto-augmentation framework that automatically optimizes and augments point cloud samples to enrich data diversity when training a classification network.
We formulate a learnable point augmentation function with a shape-wise transformation and a point-wise displacement, and carefully design loss functions to adopt the augmented samples.
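A simplified sketch of the two components named above, using a randomly conditioned shape-wise transform and point-wise displacement; the actual PointAugment augmentor is learned jointly with the classifier and conditioned on point features, so the architecture here is an assumption.

```python
import torch
import torch.nn as nn

class PointAugmentor(nn.Module):
    """Illustrative augmentor: shape-wise linear transform plus per-point displacement."""
    def __init__(self, noise_dim=64):
        super().__init__()
        # Regress a 3x3 shape-wise transformation from a random code.
        self.transform_net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, 9)
        )
        # Regress a per-point displacement from coordinates plus the code.
        self.displace_net = nn.Sequential(
            nn.Linear(3 + noise_dim, 128), nn.ReLU(), nn.Linear(128, 3)
        )
        self.noise_dim = noise_dim

    def forward(self, pts):                      # pts: (B, N, 3)
        B, N, _ = pts.shape
        z = torch.randn(B, self.noise_dim, device=pts.device)
        T = torch.eye(3, device=pts.device) + self.transform_net(z).view(B, 3, 3)
        z_pt = z.unsqueeze(1).expand(B, N, self.noise_dim)
        D = self.displace_net(torch.cat([pts, z_pt], dim=-1))
        return pts @ T.transpose(1, 2) + D       # transform + displacement

aug = PointAugmentor()
augmented = aug(torch.randn(2, 1024, 3))         # (B, N, 3) point clouds
```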
arXiv Detail & Related papers (2020-02-25T14:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.