Learning Gradient-based Mixup towards Flatter Minima for Domain
Generalization
- URL: http://arxiv.org/abs/2209.14742v1
- Date: Thu, 29 Sep 2022 13:01:14 GMT
- Title: Learning Gradient-based Mixup towards Flatter Minima for Domain
Generalization
- Authors: Danni Peng, Sinno Jialin Pan
- Abstract summary: We develop a new domain generalization algorithm named Flatness-aware Gradient-based Mixup (FGMix).
FGMix learns the similarity function towards flatter minima for better generalization.
On the DomainBed benchmark, we validate the efficacy of various designs of FGMix and demonstrate its superiority over other DG algorithms.
- Score: 44.04047359057987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To address the distribution shifts between training and test data, domain
generalization (DG) leverages multiple source domains to learn a model that
generalizes well to unseen domains. However, existing DG methods generally
suffer from overfitting to the source domains, partly due to the limited
coverage of the expected region in feature space. Motivated by this, we propose
to perform mixup with data interpolation and extrapolation to cover the
potential unseen regions. To prevent the detrimental effects of unconstrained
extrapolation, we carefully design a policy to generate the instance weights,
named Flatness-aware Gradient-based Mixup (FGMix). The policy employs a
gradient-based similarity to assign greater weights to instances that carry
more invariant information, and learns the similarity function towards flatter
minima for better generalization. On the DomainBed benchmark, we validate the
efficacy of various designs of FGMix and demonstrate its superiority over other
DG algorithms.
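As an illustration of the gradient-based weighting described above, below is a minimal PyTorch sketch, not the authors' implementation. It scores each instance by the cosine similarity between its loss gradient and an anchor instance's gradient, then uses the normalized scores as mixup weights. The function names, the softmax normalization, and the anchor choice are assumptions for illustration; FGMix itself learns the similarity function and trains it towards flatter minima, and its policy also permits extrapolation beyond the convex hull, both of which this sketch omits.

```python
import torch
import torch.nn.functional as F

def instance_grad(model, loss_fn, x, y, params):
    # Flattened loss gradient for a single instance (one backward pass
    # per instance; fine for a sketch, costly at scale).
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_similarity_weights(model, loss_fn, xs, ys, x_anchor, y_anchor):
    # Weight each instance by how well its gradient aligns with the anchor's;
    # aligned instances are treated as carrying more invariant information.
    params = [p for p in model.parameters() if p.requires_grad]
    g_anchor = instance_grad(model, loss_fn, x_anchor, y_anchor, params)
    sims = torch.stack([
        F.cosine_similarity(instance_grad(model, loss_fn, x, y, params),
                            g_anchor, dim=0)
        for x, y in zip(xs, ys)
    ])
    # Softmax keeps weights on the simplex (pure interpolation); FGMix's
    # learned policy can also produce extrapolating weights.
    return F.softmax(sims, dim=0)

def weighted_mix(xs, ys, weights, num_classes):
    # Weighted combination of the instances and of their one-hot labels.
    x_mix = torch.einsum('i,i...->...', weights, xs)
    y_mix = torch.einsum('i,ic->c', weights,
                         F.one_hot(ys, num_classes).float())
    return x_mix, y_mix
```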
Related papers
- Unsupervised Domain Adaptation Using Compact Internal Representations [23.871860648919593]
A technique for tackling unsupervised domain adaptation involves mapping data points from both the source and target domains into a shared embedding space.
We develop an additional technique which makes the internal distribution of the source domain more compact.
We demonstrate that by increasing the margins between data representations for different classes in the embedding space, we can improve the model performance for UDA.
arXiv Detail & Related papers (2024-01-14T05:53:33Z) - FIXED: Frustratingly Easy Domain Generalization with Mixup [53.782029033068675]
Domain generalization (DG) aims to learn a generalizable model from multiple training domains such that it can perform well on unseen target domains.
A popular strategy is to augment training data to benefit generalization through methods such as Mixup (Zhang et al., 2018); a minimal Mixup sketch follows this list.
We propose a simple yet effective enhancement for Mixup-based DG, namely domain-invariant Feature mIXup (FIX).
Our approach significantly outperforms nine state-of-the-art related methods, beating the best performing baseline by 6.5% on average in terms of test accuracy.
arXiv Detail & Related papers (2022-11-07T09:38:34Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - META: Mimicking Embedding via oThers' Aggregation for Generalizable
Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to perform well across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z) - Discriminative Domain-Invariant Adversarial Network for Deep Domain
Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better predictions on target data unseen during training than state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z) - Domain Generalization via Gradient Surgery [5.38147998080533]
In real-life applications, machine learning models often face scenarios where there is a change in data distribution between training and test domains.
In this work, we characterize the conflicting gradients emerging in domain shift scenarios and devise novel gradient agreement strategies; a sketch of one simple agreement rule follows this list.
arXiv Detail & Related papers (2021-08-03T16:49:25Z) - Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing [53.82826073959756]
The face anti-spoofing approach based on domain generalization (DG) has drawn growing attention due to its robustness for unseen scenarios.
To overcome the reliance on domain labels, we propose domain dynamic adjustment meta-learning (D2AM), which works without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Discrepancy Minimization in Domain Generalization with Generative
Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches address domain generalization by learning domain-invariant representations across the source domains, yet such representations fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
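For reference, the Mixup augmentation cited in the FIXED entry above admits a very short sketch. This follows the standard formulation of Zhang et al. (2018); `alpha=0.2` is an illustrative default, not a value taken from either paper.

```python
import torch

def mixup(x, y_onehot, alpha=0.2):
    # Standard Mixup: convex combination of a batch with a shuffled copy of
    # itself, with the mixing coefficient drawn from Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```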
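The gradient agreement idea in the Gradient Surgery entry can likewise be illustrated with a simple sign-agreement rule: sum the per-domain gradients but zero every parameter component whose sign differs across domains. This is a hypothetical sketch of one common agreement rule, not necessarily the exact strategies devised in that paper.

```python
import torch

def sign_agreement_grad(domain_grads):
    # domain_grads: list of flattened per-domain gradients of equal length.
    g = torch.stack(domain_grads)              # (num_domains, num_params)
    signs = torch.sign(g)
    agree = (signs == signs[0]).all(dim=0)     # unanimous sign per component
    return g.sum(dim=0) * agree.float()        # conflicting components zeroed
```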