Gradient Matching for Domain Generalization
- URL: http://arxiv.org/abs/2104.09937v1
- Date: Tue, 20 Apr 2021 12:55:37 GMT
- Title: Gradient Matching for Domain Generalization
- Authors: Yuge Shi, Jeffrey Seely, Philip H.S. Torr, N. Siddharth, Awni Hannun,
Nicolas Usunier, Gabriel Synnaeve
- Abstract summary: A critical requirement of machine learning systems is their ability to generalize to unseen domains.
We propose an inter-domain gradient matching objective that targets domain generalization.
We derive a simpler first-order algorithm named Fish that approximates its optimization.
- Score: 93.04545793814486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning systems typically assume that the distributions of training
and test sets match closely. However, a critical requirement of such systems in
the real world is their ability to generalize to unseen domains. Here, we
propose an inter-domain gradient matching objective that targets domain
generalization by maximizing the inner product between gradients from different
domains. Since direct optimization of the gradient inner product can be
computationally prohibitive, as it requires computation of second-order
derivatives, we derive a simpler first-order algorithm named Fish that approximates its
optimization. We demonstrate the efficacy of Fish on 6 datasets from the Wilds
benchmark, which captures real-world distribution shift across a diverse range of
modalities, as well as on datasets from the DomainBed benchmark, which focuses
more on synthetic-to-real transfer. Our method produces competitive results on
both benchmarks, surpasses all baselines on 4 of the Wilds datasets, and
demonstrates effectiveness across a wide range of domain generalization tasks.
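As a rough illustration of the ideas in the abstract, the sketch below implements the inter-domain gradient inner product and a Fish-style meta-step on a toy linear regression model with NumPy. The model, loss, and learning rates are illustrative assumptions, not the paper's implementation (which operates on deep networks); the outer update shown is a Reptile-style first-order step, which is the kind of approximation the abstract refers to.

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of the mean-squared error of a linear model y ~ X @ w.
    (Toy stand-in for a network's loss gradient.)"""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def gradient_inner_product(w, domains):
    """The inter-domain objective: sum of inner products between
    gradients computed on different domains. Larger values mean the
    domains agree on the update direction."""
    grads = [grad_mse(w, X, y) for X, y in domains]
    total = 0.0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            total += grads[i] @ grads[j]
    return total

def fish_step(w, domains, inner_lr=0.05, meta_lr=0.5, rng=None):
    """One Fish-style meta-step (sketch): run plain SGD over the domains
    in a random order on a clone of the weights, then move the original
    weights toward the adapted clone. This first-order update avoids the
    second-order derivatives that exact optimization of the gradient
    inner product would require."""
    rng = np.random.default_rng() if rng is None else rng
    w_tilde = w.copy()
    for i in rng.permutation(len(domains)):
        X, y = domains[i]
        w_tilde = w_tilde - inner_lr * grad_mse(w_tilde, X, y)
    return w + meta_lr * (w_tilde - w)
```

In use, `fish_step` is applied repeatedly with minibatches drawn from each training domain; the interleaved inner updates implicitly reward weight regions where per-domain gradients point the same way.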
Related papers
- Improving Domain Adaptation Through Class Aware Frequency Transformation [15.70058524548143]
Most of the Unsupervised Domain Adaptation (UDA) algorithms focus on reducing the global domain shift between labelled source and unlabelled target domains.
We propose a novel approach based on a traditional image processing technique, Class Aware Frequency Transformation (CAFT).
CAFT utilizes pseudo-label-based, class-consistent low-frequency swapping to improve the overall performance of existing UDA algorithms.
arXiv Detail & Related papers (2024-07-28T18:16:41Z)
- Improving Multi-Domain Generalization through Domain Re-labeling [31.636953426159224]
We study the important link between pre-specified domain labels and the generalization performance.
We introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone.
We show that MulDEns does not require tailoring the augmentation strategy or the training process specific to a dataset.
arXiv Detail & Related papers (2021-12-17T23:21:50Z)
- Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection [107.52026281057343]
We introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations.
In the first stage, we utilize all the original and augmented source data to train an object detector.
In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency.
arXiv Detail & Related papers (2021-12-16T04:07:01Z)
- Fishr: Invariant Gradient Variances for Out-of-distribution Generalization [98.40583494166314]
Fishr is a learning scheme to enforce domain invariance in the space of the gradients of the loss function.
Fishr exhibits close relations with the Fisher Information and the Hessian of the loss.
In particular, Fishr improves the state of the art on the DomainBed benchmark and performs significantly better than Empirical Risk Minimization.
arXiv Detail & Related papers (2021-09-07T08:36:09Z)
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of transfer experiments show that the proposed method achieves state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Adaptive Methods for Real-World Domain Generalization [32.030688845421594]
In our work, we investigate whether it is possible to leverage domain information from unseen test samples themselves.
We propose a domain-adaptive approach consisting of two steps: a) we first learn a discriminative domain embedding from unsupervised training examples, and b) use this domain embedding as supplementary information to build a domain-adaptive model.
Our approach achieves state-of-the-art performance on various domain generalization benchmarks.
arXiv Detail & Related papers (2021-03-29T17:44:35Z)
- Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on both time series sensor and image classification public datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
- A simple baseline for domain adaptation using rotation prediction [17.539027866457673]
The goal is to adapt a model trained in one domain to another domain with scarce annotated data.
We propose a simple yet effective method based on self-supervised learning.
Our simple method achieves state-of-the-art results on semi-supervised domain adaptation on DomainNet dataset.
arXiv Detail & Related papers (2019-12-26T17:32:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information (including all of its content) and is not responsible for any consequences of its use.