AdvST: Revisiting Data Augmentations for Single Domain Generalization
- URL: http://arxiv.org/abs/2312.12720v2
- Date: Wed, 14 Feb 2024 17:15:30 GMT
- Title: AdvST: Revisiting Data Augmentations for Single Domain Generalization
- Authors: Guangtao Zheng, Mengdi Huai, Aidong Zhang
- Abstract summary: Single domain generalization aims to train a robust model against unknown target domain shifts using data from a single source domain.
Standard data augmentations with learnable parameters can act as semantics transformations that manipulate certain semantics of a sample.
We propose Adversarial learning with Semantics Transformations (AdvST) that augments the source domain data with semantics transformations and learns a robust model with the augmented data.
- Score: 39.55487584183931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single domain generalization (SDG) aims to train a robust model against
unknown target domain shifts using data from a single source domain. Data
augmentation has been proven an effective approach to SDG. However, the utility
of standard augmentations, such as translate or invert, has not been fully
exploited in SDG; in practice, these augmentations are used only as part of a
data preprocessing procedure. Although it is intuitive to use many such
augmentations to boost the robustness of a model to out-of-distribution domain
shifts, we lack a principled approach to harness the benefits of multiple such
augmentations. Here, we conceptualize standard data
augmentations with learnable parameters as semantics transformations that can
manipulate certain semantics of a sample, such as the geometry or color of an
image. Then, we propose Adversarial learning with Semantics Transformations
(AdvST) that augments the source domain data with semantics transformations and
learns a robust model with the augmented data. We theoretically show that AdvST
essentially optimizes a distributionally robust optimization objective defined
on a set of semantics distributions induced by the parameters of semantics
transformations. We demonstrate that AdvST can produce samples that expand the
coverage on target domain data. Compared with the state-of-the-art methods,
AdvST, despite being a simple method, is surprisingly competitive and achieves
the best average SDG performance on the Digits, PACS, and DomainNet datasets.
Our code is available at https://github.com/gtzheng/AdvST.
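The min-max idea described above, learning augmentation parameters that maximize the loss while training the model on the resulting augmented data, can be sketched on a toy problem. The linear model, the single "translate" transformation, the step sizes, and the clipping range below are illustrative assumptions for the sketch, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(w, b, X, y):
    """Binary cross-entropy with gradients w.r.t. w, b, and the inputs X."""
    p = sigmoid(X @ w + b)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    err = (p - y) / len(y)          # dL/d(logit), one entry per sample
    return loss, X.T @ err, err.sum(), np.outer(err, w)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)),   # single source domain,
               rng.normal(+1.0, 0.3, (50, 2))])  # two classes
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = np.zeros(2), 0.0
shift = np.zeros(2)  # learnable parameter of a "translate" transformation

for step in range(200):
    # Inner maximization: make the translation adversarial for the model.
    for _ in range(3):
        _, _, _, gX = loss_and_grads(w, b, X + shift, y)
        shift = np.clip(shift + 0.5 * gX.sum(axis=0), -1.0, 1.0)
    # Outer minimization: fit the model on clean + augmented samples.
    X_all, y_all = np.vstack([X, X + shift]), np.concatenate([y, y])
    _, gw, gb, _ = loss_and_grads(w, b, X_all, y_all)
    w, b = w - 0.5 * gw, b - 0.5 * gb

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

Clipping the transformation parameter plays the role of keeping augmented samples within a plausible semantics distribution; the real method uses a richer set of transformations and a deep network.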
Related papers
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- Distribution-Aware Data Expansion with Diffusion Models [55.979857976023695]
We propose DistDiff, a training-free data expansion framework based on the distribution-aware diffusion model.
DistDiff consistently enhances accuracy across a diverse range of datasets compared to models trained solely on original data.
arXiv Detail & Related papers (2024-03-11T14:07:53Z)
- DGInStyle: Domain-Generalizable Semantic Segmentation with Image Diffusion Models and Stylized Semantic Control [68.14798033899955]
Large, pretrained latent diffusion models (LDMs) have demonstrated an extraordinary ability to generate creative content.
However, are they usable as large-scale data generators, e.g., to improve tasks in the perception stack, like semantic segmentation?
We investigate this question in the context of autonomous driving, and answer it with a resounding "yes".
arXiv Detail & Related papers (2023-12-05T18:34:12Z)
- Generalization by Adaptation: Diffusion-Based Domain Extension for Domain-Generalized Semantic Segmentation [21.016364582994846]
We present a new diffusion-based domain extension (DIDEX) method.
We employ a diffusion model to generate a pseudo-target domain with diverse text prompts.
In a second step, we train a generalizing model by adapting towards this pseudo-target domain.
arXiv Detail & Related papers (2023-12-04T12:31:45Z)
- Domain Generalization by Rejecting Extreme Augmentations [13.114457707388283]
We show that for out-of-domain and domain generalization settings, data augmentation can provide a conspicuous and robust improvement in performance.
We propose a simple training procedure: (i) use uniform sampling on standard data augmentation transformations; (ii) increase the strength of the transformations to account for the higher data variance expected when working out-of-domain; and (iii) devise a new reward function to reject extreme transformations that can harm the training.
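The three-step procedure summarized above can be sketched as follows; the transform set, the elevated strength range, and the drift-based reward are illustrative stand-ins, not the paper's actual transformations or reward function:

```python
import random

def brightness(x, s): return [v + 2.0 * s for v in x]
def contrast(x, s):   return [v * (1.0 + 2.0 * s) for v in x]
def jitter(x, s, rnd): return [v + rnd.gauss(0.0, s) for v in x]

def reward(x_aug, x):
    # Illustrative reward: penalize samples that drifted too far from the
    # original (a stand-in for "the model can no longer recognize it").
    drift = max(abs(a - b) for a, b in zip(x_aug, x))
    return 1.0 / (1.0 + drift)

def augment(x, rnd, strength=0.8, min_reward=0.3, max_tries=10):
    """(i) uniform sampling over transforms, (ii) elevated strength,
    (iii) reject candidates whose reward marks them as too extreme."""
    transforms = [brightness, contrast, lambda a, s: jitter(a, s, rnd)]
    for _ in range(max_tries):
        t = rnd.choice(transforms)               # (i) uniform sampling
        s = rnd.uniform(0.5, 1.0) * strength     # (ii) stronger than usual
        cand = t(x, s)
        if reward(cand, x) >= min_reward:        # (iii) reject extremes
            return cand
    return x  # fall back to the clean sample if everything is rejected

rnd = random.Random(0)
x = [0.1, -0.2, 0.3]
x_aug = augment(x, rnd)
```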
arXiv Detail & Related papers (2023-10-10T14:46:22Z)
- Adversarial and Random Transformations for Robust Domain Adaptation and Generalization [9.995765847080596]
We show that by simply applying consistency training with random data augmentation, state-of-the-art results on domain adaptation (DA) and generalization (DG) can be obtained.
The combined adversarial and random transformations based method outperforms the state-of-the-art on multiple DA and DG benchmark datasets.
arXiv Detail & Related papers (2022-11-13T02:10:13Z)
- Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome domain shift by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z)
- Transformer-Based Source-Free Domain Adaptation [134.67078085569017]
We study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
We propose a generic and effective framework based on Transformer, named TransDA, for learning a generalized model for SFDA.
arXiv Detail & Related papers (2021-05-28T23:06:26Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.