Meta Adaptive Task Sampling for Few-Domain Generalization
- URL: http://arxiv.org/abs/2305.15644v1
- Date: Thu, 25 May 2023 01:44:09 GMT
- Title: Meta Adaptive Task Sampling for Few-Domain Generalization
- Authors: Zheyan Shen, Han Yu, Peng Cui, Jiashuo Liu, Xingxuan Zhang, Linjun
Zhou, Furui Liu
- Abstract summary: Few-domain generalization (FDG) aims to learn a generalizable model from very few domains of novel tasks.
We propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
- Score: 43.2043988610497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To ensure the out-of-distribution (OOD) generalization performance,
traditional domain generalization (DG) methods resort to training on data from
multiple sources with different underlying distributions. The success of these
DG methods largely depends on the availability of diverse training
distributions. However, obtaining sufficiently heterogeneous data often
requires great effort due to high cost, privacy concerns, or data scarcity.
This raises an interesting yet seldom investigated question: how to improve
OOD generalization performance when the perceived heterogeneity is
limited. In this paper, we instantiate a new framework called few-domain
generalization (FDG), which aims to learn a generalizable model from very few
domains of novel tasks with the knowledge acquired from previous learning
experiences on base tasks. Moreover, we propose a Meta Adaptive Task Sampling
(MATS) procedure to differentiate base tasks according to their semantic and
domain-shift similarity to the novel task. Empirically, we show that the newly
introduced FDG framework can substantially improve the OOD generalization
performance on the novel task, and that further combining MATS with episodic
training outperforms several state-of-the-art DG baselines on widely used
benchmarks such as PACS and DomainNet.
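To make the adaptive sampling idea concrete, here is a minimal sketch of how base tasks could be re-weighted by their similarity to the novel task. It is an illustration under stated assumptions, not the authors' MATS implementation: the similarity functions, the mixing weight `alpha`, and the softmax `temperature` are hypothetical placeholders standing in for the semantic and domain-shift measures described in the abstract.

```python
import numpy as np

def adaptive_task_sampling(base_task_feats, novel_task_feat,
                           semantic_sim, domain_shift_sim,
                           alpha=0.5, temperature=1.0, n_samples=4,
                           rng=None):
    """Sample base tasks with probability proportional to their combined
    semantic and domain-shift similarity to the novel task.

    Illustrative sketch only: the similarity functions and the mixing
    weight `alpha` are placeholders, not the MATS definitions.
    """
    rng = rng or np.random.default_rng()
    scores = np.array([
        alpha * semantic_sim(f, novel_task_feat)
        + (1.0 - alpha) * domain_shift_sim(f, novel_task_feat)
        for f in base_task_feats
    ])
    # Softmax over similarity scores -> sampling distribution over base tasks.
    logits = scores / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(base_task_feats), size=n_samples,
                      replace=False, p=probs)

# Toy usage: cosine similarity over random features stands in for both measures.
cosine = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
base_feats = [np.random.randn(16) for _ in range(20)]
novel_feat = np.random.randn(16)
picked = adaptive_task_sampling(base_feats, novel_feat, cosine, cosine)
```

In this sketch a single cosine similarity over toy features plays both roles; MATS instead derives the semantic and domain-shift similarities from the base and novel tasks themselves.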
Related papers
- LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization [61.16890890570814]
Domain generalization (DG) methods aim to maintain good performance in an unseen target domain by using training data from multiple source domains.
This work introduces a simple yet effective framework, dubbed learning from multiple experts (LFME), which aims to make the target model an expert in all source domains to improve DG.
arXiv Detail & Related papers (2024-10-22T13:44:10Z)
- PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization [24.413415998529754]
We propose a new benchmark Hybrid Domain Generalization (HDG) and a novel metric $H^2$-CV, which construct various splits to assess the robustness of algorithms.
Our method outperforms state-of-the-art algorithms on multiple datasets, especially improving the robustness when confronting data scarcity.
arXiv Detail & Related papers (2024-04-13T13:41:13Z)
- Towards Reliable Domain Generalization: A New Dataset and Evaluations [45.68339440942477]
We propose a new domain generalization task for handwritten Chinese character recognition (HCCR).
We evaluate eighteen DG methods on the proposed PaHCC dataset and show that the performance of existing methods is still unsatisfactory.
Our dataset and evaluations bring new perspectives to the community for more substantial progress.
arXiv Detail & Related papers (2023-09-12T11:29:12Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- MultiMatch: Multi-task Learning for Semi-supervised Domain Generalization [55.06956781674986]
We tackle the semi-supervised domain generalization (SSDG) task, where only a small amount of labeled data is available in each source domain.
We propose MultiMatch, which extends FixMatch to a multi-task learning framework to produce high-quality pseudo-labels for SSDG.
A series of experiments validate the effectiveness of the proposed method, and it outperforms the existing semi-supervised methods and the SSDG method on several benchmark DG datasets.
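As a rough illustration of the FixMatch-style pseudo-labeling that MultiMatch builds on (this is not the MultiMatch implementation), the snippet below keeps a pseudo-label only when the prediction on the weakly augmented view is sufficiently confident; the 0.95 threshold is the common FixMatch default, assumed here for the example.

```python
import numpy as np

def confident_pseudo_labels(probs_weak, threshold=0.95):
    """FixMatch-style selection: keep a pseudo-label only when the model's
    prediction on the weakly augmented view exceeds the confidence threshold.

    probs_weak: (N, C) softmax outputs on weakly augmented unlabeled samples.
    Returns (indices, labels) of retained samples; in FixMatch these samples
    are then trained against their strongly augmented views.
    """
    confidence = probs_weak.max(axis=1)
    labels = probs_weak.argmax(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], labels[keep]

# Toy usage: 5 unlabeled samples, 3 classes; only rows 0 and 4 pass 0.95.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.05, 0.94, 0.01],
                  [0.10, 0.10, 0.80],
                  [0.99, 0.005, 0.005]])
idx, pseudo = confident_pseudo_labels(probs)   # idx = [0, 4]
```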
arXiv Detail & Related papers (2022-08-11T14:44:33Z)
- Contrastive Knowledge-Augmented Meta-Learning for Few-Shot Classification [28.38744876121834]
We introduce CAML (Contrastive Knowledge-Augmented Meta Learning), a novel approach for knowledge-enhanced few-shot learning.
We evaluate the performance of CAML in different few-shot learning scenarios.
arXiv Detail & Related papers (2022-07-25T17:01:29Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class, we randomly select one of the domains and keep it aside for testing.
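The classwise-DG splitting rule described above is simple enough to sketch directly. The code below is an illustrative reading of that protocol, not the authors' released evaluation code, and the (sample, label, domain) data layout is assumed for the example.

```python
import random
from collections import defaultdict

def classwise_dg_split(samples, seed=0):
    """Classwise-DG split: for each class, hold out one randomly chosen
    domain for testing and train on that class's remaining domains.

    samples: iterable of (x, label, domain) triples (assumed layout).
    Returns (train, test) lists of triples.
    """
    rng = random.Random(seed)
    domains_per_class = defaultdict(set)
    for _, label, domain in samples:
        domains_per_class[label].add(domain)
    # Pick one held-out domain per class.
    held_out = {c: rng.choice(sorted(d)) for c, d in domains_per_class.items()}

    train, test = [], []
    for x, label, domain in samples:
        (test if domain == held_out[label] else train).append((x, label, domain))
    return train, test

# Toy usage: two classes observed in three domains each.
data = [(i, c, d) for i in range(4) for c in ("dog", "cat")
        for d in ("photo", "sketch", "cartoon")]
train_set, test_set = classwise_dg_split(data)
```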
arXiv Detail & Related papers (2021-10-15T10:06:40Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.