Sequential Learning for Domain Generalization
- URL: http://arxiv.org/abs/2004.01377v1
- Date: Fri, 3 Apr 2020 05:10:33 GMT
- Title: Sequential Learning for Domain Generalization
- Authors: Da Li, Yongxin Yang, Yi-Zhe Song and Timothy Hospedales
- Abstract summary: We propose a sequential learning framework for Domain Generalization (DG).
We focus on its application to the recently proposed Meta-Learning Domain Generalization (MLDG) method.
- Score: 81.70387860425855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a sequential learning framework for Domain
Generalization (DG), the problem of training a model that is robust to domain
shift by design. Various DG approaches have been proposed with different
motivating intuitions, but they typically optimize for a single step of domain
generalization -- training on one set of domains and generalizing to one other.
Our sequential learning is inspired by the idea of lifelong learning, where
accumulated experience means that learning the $n^{th}$ thing becomes easier
than the $1^{st}$ thing. In DG this means encountering a sequence of domains
and at each step training to maximise performance on the next domain. The
performance at domain $n$ then depends on the previous $n-1$ learning problems.
Thus backpropagating through the sequence means optimizing performance not just
for the next domain, but all following domains. Training on all such sequences
of domains provides dramatically more 'practice' for a base DG learner compared
to existing approaches, thus improving performance on a true testing domain.
This strategy can be instantiated for different base DG algorithms, but we
focus on its application to the recently proposed Meta-Learning Domain
Generalization (MLDG). We show that for MLDG it leads to a simple-to-implement
and fast algorithm that provides consistent performance improvement on a
variety of DG benchmarks.
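To make the mechanism above concrete, the following is a minimal PyTorch sketch of the sequential idea: take a virtual SGD step on each domain in a sequence, accumulate the loss the updated parameters incur on the next domain, and backpropagate through the whole chain. The toy model, data, and all names (functional_forward, sequential_mldg_loss, inner_lr) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def functional_forward(x, params):
    # Tiny linear classifier applied with explicit parameters so the
    # inner-loop updates stay differentiable.
    w, b = params
    return x @ w + b

def sequential_mldg_loss(params, domain_batches, inner_lr=0.01):
    # Walk the domain sequence: take a virtual SGD step on domain i,
    # then accumulate the loss the updated parameters incur on domain
    # i+1. create_graph=True keeps each step in the graph so the final
    # backward pass flows through the whole sequence.
    meta_loss = 0.0
    for (x_tr, y_tr), (x_te, y_te) in zip(domain_batches[:-1], domain_batches[1:]):
        train_loss = F.cross_entropy(functional_forward(x_tr, params), y_tr)
        grads = torch.autograd.grad(train_loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
        meta_loss = meta_loss + F.cross_entropy(functional_forward(x_te, params), y_te)
    return meta_loss

# Toy usage: three source domains of 2-D inputs, two classes.
torch.manual_seed(0)
w = torch.randn(2, 2, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([w, b], lr=0.1)
domains = [(torch.randn(16, 2) + i, torch.randint(0, 2, (16,))) for i in range(3)]

for step in range(100):
    # Sample a fresh domain sequence each outer step.
    seq = [domains[i] for i in torch.randperm(len(domains)).tolist()]
    loss = sequential_mldg_loss([w, b], seq)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because each inner step is built with create_graph=True, the final backward pass assigns credit to early updates for performance on all later domains; sampling a new domain sequence at every outer step is what supplies the extra 'practice' the abstract describes.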
Related papers
- Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler [45.71475375161575]
In Open-Set Domain Generalization, the model is exposed to both new variations of data appearance (domains) and open-set conditions.
We propose the Evidential Bi-Level Hardest Domain Scheduler (EBiL-HaDS) to achieve an adaptive domain scheduler.
arXiv Detail & Related papers (2024-09-26T05:57:35Z) - Grounding Stylistic Domain Generalization with Quantitative Domain Shift Measures and Synthetic Scene Images [63.58800688320182]
Domain Generalization is a challenging task in machine learning.
Current methodology lacks a quantitative understanding of shifts in stylistic domains.
We introduce a new DG paradigm to address this gap.
arXiv Detail & Related papers (2024-05-24T22:13:31Z) - MADG: Margin-based Adversarial Learning for Domain Generalization [25.45950080930517]
We propose a novel adversarial learning DG algorithm, MADG, motivated by a margin loss-based discrepancy metric.
The proposed MADG model learns domain-invariant features across all source domains and uses adversarial training to generalize well to the unseen target domain.
We extensively experiment with the MADG model on popular real-world DG datasets.
arXiv Detail & Related papers (2023-11-14T19:53:09Z) - NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z) - Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$^3$G to learn domain-specific models.
Our results show that D$^3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z) - Improving Multi-Domain Generalization through Domain Re-labeling [31.636953426159224]
We study the important link between pre-specified domain labels and generalization performance.
We introduce a general approach for multi-domain generalization, MulDEns, which uses an ERM-based deep ensembling backbone.
We show that MulDEns does not require tailoring the augmentation strategy or the training process specific to a dataset.
arXiv Detail & Related papers (2021-12-17T23:21:50Z) - Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain-agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class, we randomly select one of the domains and keep it aside for testing; a minimal sketch of this split appears after this list.
arXiv Detail & Related papers (2021-10-15T10:06:40Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z) - Meta-Learning for Domain Generalization in Semantic Parsing [124.32975734073949]
We use a meta-learning framework which targets zero-shot domain generalization for semantic parsing.
We apply a model-agnostic training algorithm that simulates zero-shot parsing on virtual train and test sets sampled from disjoint domains.
arXiv Detail & Related papers (2020-10-22T19:00:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.