DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers
- URL: http://arxiv.org/abs/2306.09192v2
- Date: Wed, 29 May 2024 00:16:25 GMT
- Title: DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers
- Authors: Chandramouli Sastry, Sri Harsha Dumpala, Sageev Oore
- Abstract summary: We introduce DiffAug, a simple and efficient diffusion-based augmentation technique to train image classifiers.
Applying DiffAug to a given example consists of one forward-diffusion step followed by one reverse-diffusion step.
- Score: 6.131022957085439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DiffAug, a simple and efficient diffusion-based augmentation technique to train image classifiers for the crucial yet challenging goal of improved classifier robustness. Applying DiffAug to a given example consists of one forward-diffusion step followed by one reverse-diffusion step. Using both ResNet-50 and Vision Transformer architectures, we comprehensively evaluate classifiers trained with DiffAug and demonstrate the surprising effectiveness of single-step reverse diffusion in improving robustness to covariate shifts, certified adversarial accuracy and out of distribution detection. When we combine DiffAug with other augmentations such as AugMix and DeepAugment we demonstrate further improved robustness. Finally, building on this approach, we also improve classifier-guided diffusion wherein we observe improvements in: (i) classifier-generalization, (ii) gradient quality (i.e., improved perceptual alignment) and (iii) image generation performance. We thus introduce a computationally efficient technique for training with improved robustness that does not require any additional data, and effectively complements existing augmentation approaches.
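The abstract describes DiffAug as one forward-diffusion step followed by one reverse-diffusion step. A minimal sketch of that idea, assuming a DDPM-style noise-prediction network `denoiser` and a cumulative noise-schedule array `alpha_bar` (both hypothetical names; the paper's exact parameterization may differ):

```python
import numpy as np

def diffaug(x, denoiser, t, alpha_bar):
    """Sketch of a diffuse-and-denoise augmentation step.

    Assumes `denoiser(x_t, t)` predicts the noise eps added to x
    (DDPM-style parameterization) and `alpha_bar[t]` is the cumulative
    product of the noise schedule; these are illustrative assumptions,
    not the paper's exact implementation.
    """
    a = alpha_bar[t]
    noise = np.random.randn(*x.shape)
    # Forward diffusion: x_t = sqrt(a) * x + sqrt(1 - a) * eps
    x_t = np.sqrt(a) * x + np.sqrt(1 - a) * noise
    # Single reverse step: predict the noise, then estimate x_0 directly
    eps_hat = denoiser(x_t, t)
    x0_hat = (x_t - np.sqrt(1 - a) * eps_hat) / np.sqrt(a)
    return x0_hat
```

During classifier training, the returned `x0_hat` would be used in place of (or alongside) the clean image `x` as an augmented sample.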
Related papers
- Improving the Transferability of Adversarial Examples by Feature Augmentation [6.600860987969305]
We propose a simple but effective feature augmentation attack (FAUG) method, which improves adversarial transferability without introducing extra computation costs.
Specifically, we inject the random noise into the intermediate features of the model to enlarge the diversity of the attack gradient.
Our method can be combined with existing gradient attacks to augment their performance further.
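The FAUG summary above amounts to adding random noise to a model's intermediate features. A minimal sketch under that reading (the function name and `std` hyperparameter are hypothetical, not from the paper):

```python
import numpy as np

def feature_augment(features, std=0.1, rng=None):
    """Inject Gaussian noise into intermediate feature activations.

    A hedged sketch of the feature-augmentation idea: perturbing
    features enlarges the diversity of the attack gradient. `std`
    is an illustrative hyperparameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    return features + rng.normal(0.0, std, size=features.shape)
```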
arXiv Detail & Related papers (2024-07-09T09:41:40Z)
- AdaAugment: A Tuning-Free and Adaptive Approach to Enhance Data Augmentation [12.697608744311122]
AdaAugment is a tuning-free Adaptive Augmentation method.
It dynamically adjusts augmentation magnitudes for individual training samples based on real-time feedback from the target network.
It consistently outperforms other state-of-the-art DA methods in effectiveness while maintaining remarkable efficiency.
arXiv Detail & Related papers (2024-05-19T06:54:03Z)
- DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification [55.306583814017046]
We present a novel difficulty-aware semantic augmentation (DASA) approach for speaker verification.
DASA generates diversified training samples in speaker embedding space with negligible extra computing cost.
The best result achieves a 14.6% relative reduction in EER metric on CN-Celeb evaluation set.
arXiv Detail & Related papers (2023-10-18T17:07:05Z)
- DualAug: Exploiting Additional Heavy Augmentation with OOD Data Rejection [77.6648187359111]
We propose a novel data augmentation method, named DualAug, to keep the augmentation in distribution as much as possible at a reasonable time and computational cost.
Experiments on supervised image classification benchmarks show that DualAug improves various automated data augmentation methods.
arXiv Detail & Related papers (2023-10-12T08:55:10Z)
- Attribute Graph Clustering via Learnable Augmentation [71.36827095487294]
Contrastive deep graph clustering (CDGC) utilizes contrastive learning to group nodes into different clusters.
We propose an Attribute Graph Clustering method via Learnable Augmentation (AGCLA), which introduces learnable augmentors for high-quality augmented samples.
arXiv Detail & Related papers (2022-12-07T10:19:39Z)
- Soft Augmentation for Image Classification [68.71067594724663]
We propose generalizing augmentation with invariant transforms to soft augmentation.
We show that soft targets allow for more aggressive data augmentation.
We also show that soft augmentations generalize to self-supervised classification tasks.
arXiv Detail & Related papers (2022-11-09T01:04:06Z)
- Image Augmentations for GAN Training [57.65145659417266]
We provide insights and guidelines on how to augment images for both vanilla GANs and GANs with regularizations.
Surprisingly, we find that vanilla GANs attain generation quality on par with recent state-of-the-art results.
arXiv Detail & Related papers (2020-06-04T00:16:02Z)
- PointAugment: an Auto-Augmentation Framework for Point Cloud Classification [105.27565020399]
PointAugment is a new auto-augmentation framework that automatically optimizes and augments point cloud samples to enrich data diversity when training a classification network.
We formulate a learnable point augmentation function with a shape-wise transformation and a point-wise displacement, and carefully design loss functions to adopt the augmented samples.
arXiv Detail & Related papers (2020-02-25T14:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.