NICO++: Towards Better Benchmarking for Domain Generalization
- URL: http://arxiv.org/abs/2204.08040v2
- Date: Thu, 21 Apr 2022 11:44:33 GMT
- Title: NICO++: Towards Better Benchmarking for Domain Generalization
- Authors: Xingxuan Zhang, Yue He, Renzhe Xu, Han Yu, Zheyan Shen, Peng Cui
- Abstract summary: We propose a large-scale benchmark with extensive labeled domains named NICO++.
Experiments show that NICO++ offers superior evaluation capability compared with current DG datasets.
- Score: 44.11418240848957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the remarkable performance that modern deep neural networks
have achieved on independent and identically distributed (I.I.D.) data, their
performance can degrade severely under distribution shifts. Most current
evaluation methods for domain generalization (DG) adopt the leave-one-out
strategy as a compromise given the limited number of domains. We propose a
large-scale benchmark with extensive
labeled domains named NICO++ along with more rational evaluation methods for
comprehensively evaluating DG algorithms. To evaluate DG datasets, we propose
two metrics to quantify covariate shift and concept shift, respectively. Two
novel generalization bounds from the perspective of data construction are
proposed to prove that limited concept shift and significant covariate shift
favor the evaluation capability for generalization. Through extensive
experiments, NICO++ demonstrates superior evaluation capability compared with
current DG datasets and helps alleviate the unfairness caused by the leakage
of oracle knowledge in model selection.
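The abstract does not reproduce the two dataset metrics here, so the sketch below shows one generic, non-authoritative way such quantities can be approximated, not the paper's exact definitions: covariate shift as a maximum mean discrepancy (MMD) between the input features of two domains, and concept shift as the average distance between class-conditional feature means. All function and variable names are illustrative.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between two sample sets (RBF kernel)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def covariate_shift_proxy(feats_a, feats_b):
    """Gap between the marginal input distributions P(X) of two domains."""
    return rbf_mmd2(feats_a, feats_b)

def concept_shift_proxy(feats_a, labels_a, feats_b, labels_b):
    """Crude stand-in for concept shift: distance between class-conditional
    feature means, averaged over the classes shared by both domains."""
    classes = np.intersect1d(labels_a, labels_b)
    gaps = [np.linalg.norm(feats_a[labels_a == c].mean(axis=0) -
                           feats_b[labels_b == c].mean(axis=0))
            for c in classes]
    return float(np.mean(gaps))
```

Under the paper's argument, a benchmark whose domain splits score high on the covariate-shift proxy and low on the concept-shift proxy better isolates generalization ability.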
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked AutoEncoder (DisMAE) aims to discover the disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization [24.413415998529754]
We propose a new benchmark, Hybrid Domain Generalization (HDG), and a novel metric, $H^{2}$-CV, which construct various splits to assess the robustness of algorithms.
Our method outperforms state-of-the-art algorithms on multiple datasets, especially improving the robustness when confronting data scarcity.
arXiv Detail & Related papers (2024-04-13T13:41:13Z)
- Towards Reliable Domain Generalization: A New Dataset and Evaluations [45.68339440942477]
We propose a new domain generalization task for handwritten Chinese character recognition (HCCR).
We evaluate eighteen DG methods on the proposed PaHCC dataset and show that the performance of existing methods is still unsatisfactory.
Our dataset and evaluations bring new perspectives to the community for more substantial progress.
arXiv Detail & Related papers (2023-09-12T11:29:12Z)
- On Certifying and Improving Generalization to Unseen Domains [87.00662852876177]
Domain Generalization aims to learn models whose performance remains high on unseen domains encountered at test-time.
It is challenging to evaluate DG algorithms comprehensively using a few benchmark datasets.
We propose a universal certification framework that can efficiently certify the worst-case performance of any DG method.
arXiv Detail & Related papers (2022-06-24T16:29:43Z)
- Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the WILDS DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class we randomly select one of the domains and keep it aside for testing (see the splitting sketch after this entry).
arXiv Detail & Related papers (2021-10-15T10:06:40Z)
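A minimal sketch of such a classwise split, assuming samples arrive as (x, label, domain) triples; names are illustrative and this is not the authors' released code.

```python
import random
from collections import defaultdict

def classwise_dg_split(samples, seed=0):
    """samples: list of (x, label, domain) triples. For each class, one
    randomly chosen domain is held out for testing; that class's remaining
    domains form the training data."""
    rng = random.Random(seed)
    domains_per_class = defaultdict(set)
    for _, label, domain in samples:
        domains_per_class[label].add(domain)
    held_out = {label: rng.choice(sorted(doms))
                for label, doms in domains_per_class.items()}
    train = [s for s in samples if s[2] != held_out[s[1]]]
    test = [s for s in samples if s[2] == held_out[s[1]]]
    return train, test
```

An ERM baseline can then be trained on `train` and evaluated on `test`, giving the strong reference point the paper reports.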
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
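DDAN's selective-alignment mechanism is beyond this summary; as a plainly-labeled substitute illustrating the general idea of aligning feature distributions across multiple source domains, the sketch below uses a CORAL-style covariance-matching penalty (Sun & Saenko, 2016), a different and simpler alignment technique. All names and the penalty weight are assumptions.

```python
import torch

def coral_loss(source, target):
    """CORAL penalty: squared Frobenius distance between the feature
    covariance matrices of two batches."""
    def cov(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)
    d = source.size(1)
    return ((cov(source) - cov(target)) ** 2).sum() / (4 * d * d)

def alignment_penalty(domain_feats):
    """Average pairwise CORAL distance across per-domain feature batches."""
    pairs = [(i, j) for i in range(len(domain_feats))
             for j in range(i + 1, len(domain_feats))]
    return sum(coral_loss(domain_feats[i], domain_feats[j])
               for i, j in pairs) / len(pairs)

# Hypothetical use in a training step:
# loss = task_loss + lambda_align * alignment_penalty(feats_by_domain)
```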