Out-of-Domain Robustness via Targeted Augmentations
- URL: http://arxiv.org/abs/2302.11861v3
- Date: Tue, 6 Feb 2024 06:04:08 GMT
- Title: Out-of-Domain Robustness via Targeted Augmentations
- Authors: Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, Percy
Liang
- Abstract summary: We study principles for designing data augmentations for out-of-domain generalization.
Motivated by theoretical analysis on a linear setting, we propose targeted augmentations.
We show that targeted augmentations set new states of the art, improving OOD performance by 3.2-15.2 percentage points.
- Score: 90.94290420322457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Models trained on one set of domains often suffer performance drops on unseen
domains, e.g., when wildlife monitoring models are deployed in new camera
locations. In this work, we study principles for designing data augmentations
for out-of-domain (OOD) generalization. In particular, we focus on real-world
scenarios in which some domain-dependent features are robust, i.e., some
features that vary across domains are predictive OOD. For example, in the
wildlife monitoring application above, image backgrounds vary across camera
locations but indicate habitat type, which helps predict the species of
photographed animals. Motivated by theoretical analysis on a linear setting, we
propose targeted augmentations, which selectively randomize spurious
domain-dependent features while preserving robust ones. We prove that targeted
augmentations improve OOD performance, allowing models to generalize better
with fewer domains. In contrast, existing approaches such as generic
augmentations, which fail to randomize domain-dependent features, and
domain-invariant augmentations, which randomize all domain-dependent features,
both perform poorly OOD. In experiments on three real-world datasets, we show
that targeted augmentations set new states of the art, improving OOD
performance by 3.2-15.2 percentage points.
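To make the idea above concrete, here is a minimal sketch (not the authors' exact implementation) of a targeted augmentation for the wildlife-monitoring example: the camera-specific background is a spurious domain-dependent feature and gets randomized, while the habitat type is a robust feature and is preserved by sampling donor backgrounds only from images taken in the same habitat. The helper names and the toy data are illustrative assumptions.

```python
import numpy as np

def swap_background(image, fg_mask, donor):
    """Composite the foreground (the animal) onto a donor image's background."""
    return np.where(fg_mask[..., None], image, donor)

def targeted_augment(image, fg_mask, habitat, pool, rng):
    """Randomize the spurious camera-specific background while keeping the
    robust habitat signal: donors are drawn only from the SAME habitat."""
    donors = [img for img, hab in pool if hab == habitat]
    donor = donors[rng.integers(len(donors))]
    return swap_background(image, fg_mask, donor)

# Toy example: 4x4 RGB images with habitat labels.
rng = np.random.default_rng(0)
image = np.full((4, 4, 3), 200, dtype=np.uint8)   # animal pixels = 200
fg_mask = np.zeros((4, 4), dtype=bool)
fg_mask[1:3, 1:3] = True                          # animal occupies the center
pool = [(np.full((4, 4, 3), 10, dtype=np.uint8), "forest"),
        (np.full((4, 4, 3), 90, dtype=np.uint8), "savanna")]

aug = targeted_augment(image, fg_mask, "forest", pool, rng)
assert (aug[fg_mask] == 200).all()   # robust foreground preserved
assert (aug[~fg_mask] == 10).all()   # background replaced by same-habitat donor
```

A generic augmentation would leave the background untouched, while a domain-invariant augmentation would draw donors from any habitat, destroying the robust signal; this sketch sits between the two.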
Related papers
- DomainDrop: Suppressing Domain-Sensitive Channels for Domain
Generalization [25.940491294232956]
DomainDrop is a framework to continuously enhance the channel robustness to domain shifts.
Our framework achieves state-of-the-art performance compared to other competing methods.
arXiv Detail & Related papers (2023-08-20T14:48:52Z)
- Single Domain Dynamic Generalization for Iris Presentation Attack
Detection [41.126916126040655]
Iris presentation attack detection has achieved great success under intra-domain settings but easily degrades on unseen domains.
We propose a Single Domain Dynamic Generalization (SDDG) framework, which exploits domain-invariant and domain-specific features on a per-sample basis.
The proposed method is effective and outperforms the state-of-the-art on LivDet-Iris 2017 dataset.
arXiv Detail & Related papers (2023-05-22T07:54:13Z)
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay
and Domain-sensitive Feature Whitening [67.6394526631557]
Models should incrementally learn from each incoming dataset and progressively update with improved functionality as time goes by.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
- Normalization Perturbation: A Simple Domain Generalization Method for
Real-World Domain Shifts [133.99270341855728]
Real-world domain styles can vary substantially due to environment changes and sensor noises.
Deep models only know the training domain style.
We propose Normalization Perturbation to overcome this domain style overfitting problem.
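One simple way to realize this idea (the details may differ from the paper's exact formulation) is to perturb the per-channel statistics of shallow features with random scaling during training, so the model sees many synthetic domain styles instead of overfitting to the training style. The function name and the sigma value are illustrative assumptions.

```python
import numpy as np

def normalization_perturbation(feats, sigma=0.5, rng=None):
    """Randomly scale each sample's feature channels, simulating unseen
    domain styles during training. feats has shape (N, C, H, W)."""
    rng = rng or np.random.default_rng()
    n, c = feats.shape[:2]
    # One random scale per sample and channel, centered at 1.
    beta = 1.0 + sigma * rng.standard_normal((n, c, 1, 1))
    return feats * beta

rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 3, 4, 4))
out = normalization_perturbation(feats, sigma=0.5, rng=rng)
assert out.shape == feats.shape
```

Because the perturbation touches only channel statistics (style), the spatial content of the features is left intact.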
arXiv Detail & Related papers (2022-11-08T17:36:49Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person
Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose a two-stream adaptive learning framework (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Adaptive Domain-Specific Normalization for Generalizable Person
Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
- Out-of-Distribution Generalization Analysis via Influence Function [25.80365416547478]
The mismatch between training and target data is one major challenge for machine learning systems.
We introduce Influence Function, a classical tool from robust statistics, into the OOD generalization problem.
We show that the accuracy on test domains and the proposed index together can help us discern whether OOD algorithms are needed and whether a model achieves good OOD generalization.
arXiv Detail & Related papers (2021-01-21T09:59:55Z)
- FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation [26.929772844572213]
We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain.
We train the source-dominant model and the target-dominant model that have complementary characteristics.
Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain.
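The fixed-ratio mixup above can be sketched in a few lines: each augmented sample interpolates a source and a target sample at a fixed ratio lam, and complementary values of lam produce the source-dominant and target-dominant intermediate domains. The lam values 0.7 and 0.3 here are illustrative, not necessarily the paper's settings.

```python
import numpy as np

def fixed_ratio_mixup(x_source, x_target, lam):
    """Mix a source and a target sample at a FIXED ratio lam, producing an
    intermediate domain between source and target."""
    return lam * x_source + (1.0 - lam) * x_target

x_s = np.ones((4, 4))    # stand-in for a source-domain image
x_t = np.zeros((4, 4))   # stand-in for a target-domain image

x_sd = fixed_ratio_mixup(x_s, x_t, lam=0.7)   # source-dominant sample
x_td = fixed_ratio_mixup(x_s, x_t, lam=0.3)   # target-dominant sample
assert np.allclose(x_sd, 0.7) and np.allclose(x_td, 0.3)
```

Training one model on source-dominant mixes and another on target-dominant mixes gives the two complementary models the summary describes.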
arXiv Detail & Related papers (2020-11-18T11:58:19Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.