Domain Generalization by Rejecting Extreme Augmentations
- URL: http://arxiv.org/abs/2310.06670v1
- Date: Tue, 10 Oct 2023 14:46:22 GMT
- Title: Domain Generalization by Rejecting Extreme Augmentations
- Authors: Masih Aminbeidokhti, Fidel A. Guerrero Peña, Heitor Rapela Medeiros, Thomas Dubail, Eric Granger, Marco Pedersoli
- Abstract summary: We show that for out-of-domain and domain generalization settings, data augmentation can provide a conspicuous and robust improvement in performance.
We propose a simple training procedure: (i) use uniform sampling over standard data augmentation transformations; (ii) increase the strength of the transformations to account for the higher data variance expected when working out-of-domain; and (iii) devise a new reward function to reject extreme transformations that can harm training.
- Score: 13.114457707388283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data augmentation is one of the most effective techniques for regularizing
deep learning models and improving their recognition performance in a variety
of tasks and domains. However, this holds for standard in-domain settings, in
which the training and test data follow the same distribution. For the
out-of-domain case, where the test data follow a different and unknown
distribution, the best recipe for data augmentation is unclear. In this paper,
we show that for out-of-domain and domain generalization settings, data
augmentation can provide a conspicuous and robust improvement in performance.
To do that, we propose a simple training procedure: (i) use uniform sampling on
standard data augmentation transformations; (ii) increase the strength of the
transformations to account for the higher data variance expected when working
out-of-domain, and (iii) devise a new reward function to reject extreme
transformations that can harm the training. With this procedure, our data
augmentation scheme achieves a level of accuracy that is comparable to or
better than state-of-the-art methods on benchmark domain generalization
datasets. Code: https://github.com/Masseeh/DCAug
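The three steps above translate almost directly into code. The following is a minimal PyTorch sketch of the recipe, not the released implementation: the candidate transformation list, the strength mapping, and the loss-gap acceptance rule are illustrative assumptions standing in for the paper's reward function.

```python
import random

import torch
import torch.nn.functional as F
import torchvision.transforms as T

def make_op(strength: float):
    """Steps (i)+(ii): uniformly sample one standard transformation at high strength.
    The op list and strength mapping are illustrative, not the paper's exact set."""
    ops = [
        T.ColorJitter(brightness=strength, contrast=strength, saturation=strength),
        T.RandomRotation(degrees=90 * strength),
        T.RandomResizedCrop(224, scale=(max(0.05, 1.0 - strength), 1.0), antialias=True),
        T.GaussianBlur(kernel_size=9, sigma=(0.1, 1.0 + 4.0 * strength)),
    ]
    return random.choice(ops)

def is_acceptable(model, x, x_aug, y, max_gap=2.0):
    """Step (iii): a hypothetical reward that rejects a transformation when the
    loss on the augmented batch drifts too far from the loss on the clean batch."""
    with torch.no_grad():
        gap = F.cross_entropy(model(x_aug), y) - F.cross_entropy(model(x), y)
    return gap.item() <= max_gap

def augment(model, x, y, strength=0.9, tries=4):
    """Sample transformations until one passes the reward check."""
    for _ in range(tries):
        x_aug = make_op(strength)(x)
        if is_acceptable(model, x, x_aug, y):
            return x_aug
    return x  # every candidate was too extreme; fall back to the clean batch
```

The fallback to the clean batch keeps training stable when every sampled transformation is rejected.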
Related papers
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA, a new data-driven, domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- AdvST: Revisiting Data Augmentations for Single Domain Generalization [39.55487584183931]
Single domain generalization aims to train a robust model against unknown target domain shifts using data from a single source domain.
Standard data augmentations with learnable parameters can serve as semantics transformations that manipulate certain semantics of a sample.
We propose Adversarial learning with Semantics Transformations (AdvST), which augments the source domain data with semantics transformations and learns a robust model from the augmented data.
arXiv Detail & Related papers (2023-12-20T02:29:31Z)
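To make the idea concrete, here is a minimal sketch of the adversarial min-max loop in PyTorch. The contrast/brightness parameterization, step counts, and learning rates are illustrative assumptions; the paper uses a richer set of semantics transformations.

```python
import torch
import torch.nn.functional as F

def advst_step(model, optimizer, x, y, inner_steps=5, inner_lr=0.1):
    """One AdvST-style min-max step. Inner loop: gradient *ascent* on the
    parameters of a semantics transformation (here contrast and brightness,
    assuming inputs in [0, 1]); outer step: ordinary training on the result."""
    contrast = torch.ones(1, requires_grad=True)
    brightness = torch.zeros(1, requires_grad=True)
    for _ in range(inner_steps):
        x_aug = (contrast * x + brightness).clamp(0.0, 1.0)
        loss = F.cross_entropy(model(x_aug), y)
        g_c, g_b = torch.autograd.grad(loss, (contrast, brightness))
        with torch.no_grad():
            contrast += inner_lr * g_c      # ascend: make the sample harder
            brightness += inner_lr * g_b
    x_adv = (contrast.detach() * x + brightness.detach()).clamp(0.0, 1.0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # minimize on the augmented data
    loss.backward()
    optimizer.step()
    return loss.item()
```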
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D^3G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
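A toy version of such a discriminator is sketched below; the single rotation-angle regression target and the two-head architecture are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugAwareDiscriminator(nn.Module):
    """Discriminator with an auxiliary head that predicts the augmentation
    parameter (here an assumed scalar, e.g. a rotation angle)."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, width, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(width * 2, 1)  # real/fake logit
        self.aug_head = nn.Linear(width * 2, 1)  # augmentation-parameter prediction

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.aug_head(h)

def d_loss(disc, x_real_aug, x_fake_aug, aug_param, lambda_aug=1.0):
    """Logistic GAN loss plus the self-supervised augmentation-prediction loss."""
    real_logit, real_pred = disc(x_real_aug)
    fake_logit, _ = disc(x_fake_aug)
    adv = F.softplus(-real_logit).mean() + F.softplus(fake_logit).mean()
    aug = F.mse_loss(real_pred.squeeze(1), aug_param)
    return adv + lambda_aug * aug
```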
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- VisDA-2021 Competition: Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z)
- Gradient Matching for Domain Generalization [93.04545793814486]
A critical requirement of machine learning systems is their ability to generalize to unseen domains.
We propose an inter-domain gradient matching objective that targets domain generalization.
We derive a simpler first-order algorithm named Fish that approximates its optimization.
arXiv Detail & Related papers (2021-04-20T12:55:37Z)
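Fish admits a compact first-order implementation: run a pass of per-domain SGD steps on a cloned model, then nudge the original weights toward the clone (a Reptile-style update). The sketch below follows that recipe; the hyperparameters are illustrative.

```python
import copy

import torch

def fish_step(model, domain_batches, loss_fn, inner_lr=0.01, meta_lr=0.5):
    """One Fish-style update. domain_batches: one (x, y) minibatch per
    training domain."""
    clone = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(clone.parameters(), lr=inner_lr)
    for x, y in domain_batches:          # sequential SGD steps across domains
        inner_opt.zero_grad()
        loss_fn(clone(x), y).backward()
        inner_opt.step()
    with torch.no_grad():                # theta <- theta + eps * (theta_tilde - theta)
        for p, p_tilde in zip(model.parameters(), clone.parameters()):
            p += meta_lr * (p_tilde - p)
```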
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- A Batch Normalization Classifier for Domain Adaptation [0.0]
Adapting a model to perform well on unforeseen data outside its training set is a common problem that continues to motivate new approaches.
We demonstrate that applying batch normalization in the output layer, prior to softmax activation, improves generalization across visual data domains in a refined ResNet model.
arXiv Detail & Related papers (2021-03-22T08:03:44Z)
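The architectural change is small enough to show directly. Below is a minimal PyTorch sketch, with a ResNet-18 backbone as an illustrative choice: a BatchNorm1d layer is placed on the logits, after the final linear layer and before the softmax.

```python
import torch.nn as nn
import torchvision.models as models

def bn_classifier(num_classes: int) -> nn.Module:
    """Classifier whose output layer normalizes the logits across the batch."""
    backbone = models.resnet18(weights=None)
    in_features = backbone.fc.in_features
    backbone.fc = nn.Sequential(
        nn.Linear(in_features, num_classes),
        nn.BatchNorm1d(num_classes),  # batch norm on the logits, before softmax
    )
    return backbone  # train with CrossEntropyLoss, which applies the softmax
```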
- On the Generalization Effects of Linear Transformations in Data Augmentation [32.01435459892255]
Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks.
We consider a family of linear transformations and study their effects on the ridge estimator in an over-parametrized linear regression setting.
We propose an augmentation scheme that searches over the space of transformations by how uncertain the model is about the transformed data.
arXiv Detail & Related papers (2020-05-02T04:10:21Z)
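A toy version of uncertainty-driven selection is sketched below; using predictive entropy as the uncertainty measure is an illustrative assumption standing in for the criterion analyzed in the paper.

```python
import torch
import torch.nn.functional as F

def pick_uncertain(model, candidates):
    """candidates: a list of transformed copies of the same batch. Returns the
    copy the current model is most uncertain about (highest mean entropy)."""
    best, best_h = None, -float("inf")
    with torch.no_grad():
        for x_aug in candidates:
            p = F.softmax(model(x_aug), dim=1)
            h = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean().item()
            if h > best_h:
                best, best_h = x_aug, h
    return best  # train on the most uncertain transformed batch
```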
This list is automatically generated from the titles and abstracts of the papers on this site.