Simple Domain Generalization Methods are Strong Baselines for Open
Domain Generalization
- URL: http://arxiv.org/abs/2303.18031v1
- Date: Fri, 31 Mar 2023 13:08:31 GMT
- Title: Simple Domain Generalization Methods are Strong Baselines for Open
Domain Generalization
- Authors: Masashi Noguchi, Shinichi Shirakawa
- Abstract summary: Domain generalization (DG) aims to handle the domain shift situation where the target domain of the inference phase is inaccessible during model training.
This work comprehensively evaluates existing DG methods in ODG and shows that two simple DG methods, CORrelation ALignment (CORAL) and Maximum Mean Discrepancy (MMD), are competitive with DAML in several cases.
- Score: 2.5889737226898437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world applications, a machine learning model is required to
handle open-set recognition (OSR), where unknown classes appear during
inference, in addition to domain shift, where the distribution of data differs
between the training and inference phases. Domain generalization (DG) aims to
handle
the domain shift situation where the target domain of the inference phase is
inaccessible during model training. Open domain generalization (ODG) takes into
account both DG and OSR. Domain-Augmented Meta-Learning (DAML) is a method
targeting ODG but has a complicated learning process. On the other hand,
although various DG methods have been proposed, they have not been evaluated in
ODG situations. This work comprehensively evaluates existing DG methods in ODG
and shows that two simple DG methods, CORrelation ALignment (CORAL) and Maximum
Mean Discrepancy (MMD), are competitive with DAML in several cases. In
addition, we propose simple extensions of CORAL and MMD by introducing the
techniques used in DAML, such as ensemble learning and Dirichlet mixup data
augmentation. The experimental evaluation demonstrates that the extended CORAL
and MMD can perform comparably to DAML with lower computational costs. This
suggests that the simple DG methods and their simple extensions are strong
baselines for ODG. The code used in the experiments is available at
https://github.com/shiralab/OpenDG-Eval.
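The alignment objectives evaluated above are standard and compact enough to sketch. Below is a minimal PyTorch sketch of the CORAL and MMD losses between feature batches from two domains, together with a Dirichlet mixup of per-domain batches in the spirit of DAML; the tensor shapes, the single RBF bandwidth, and all function names are illustrative assumptions, not the paper's implementation (see the repository above for that).

    import torch

    def coral_loss(f_a: torch.Tensor, f_b: torch.Tensor) -> torch.Tensor:
        """CORAL: match second-order statistics (feature covariances) of two domains.
        f_a, f_b: (batch, dim) feature matrices from two source domains."""
        d = f_a.size(1)
        cov_a = torch.cov(f_a.T)  # (dim, dim) covariance of domain-a features
        cov_b = torch.cov(f_b.T)
        return ((cov_a - cov_b) ** 2).sum() / (4 * d * d)

    def mmd_loss(f_a: torch.Tensor, f_b: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
        """MMD with a single RBF kernel (multi-kernel variants are also common)."""
        def rbf(x, y):
            return torch.exp(-torch.cdist(x, y) ** 2 / (2 * bandwidth ** 2))
        return rbf(f_a, f_a).mean() + rbf(f_b, f_b).mean() - 2 * rbf(f_a, f_b).mean()

    def dirichlet_mixup(xs, ys, alpha: float = 1.0):
        """Mix one batch per source domain with Dirichlet-sampled weights.
        xs: list of (batch, ...) inputs, one per domain;
        ys: matching list of (batch, classes) soft-label tensors."""
        lam = torch.distributions.Dirichlet(torch.full((len(xs),), alpha)).sample()
        x_mix = sum(w * x for w, x in zip(lam, xs))
        y_mix = sum(w * y for w, y in zip(lam, ys))
        return x_mix, y_mix

In training, either alignment loss would typically be added to the classification objective with a trade-off weight and summed over source-domain pairs, e.g. loss = ce + lambda_align * coral_loss(f_a, f_b).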
Related papers
- Domain-Guided Weight Modulation for Semi-Supervised Domain Generalization [11.392783918495404]
We study the challenging problem of semi-supervised domain generalization.
The goal is to learn a domain-generalizable model while using only a small fraction of labeled data and a relatively large fraction of unlabeled data.
We propose a novel method that can facilitate the generation of accurate pseudo-labels under various domain shifts; a generic pseudo-labelling sketch follows this entry.
arXiv Detail & Related papers (2024-09-04T01:26:23Z)
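The summary above does not spell out how the pseudo-labels are produced; as a generic baseline, confidence-thresholded pseudo-labelling (FixMatch-style) keeps a prediction on unlabeled data only when the model is sufficiently confident. A minimal sketch (not this paper's domain-guided scheme):

    import torch

    def confidence_pseudo_labels(logits: torch.Tensor, threshold: float = 0.95):
        """logits: (n, classes) predictions on unlabeled samples.
        Returns (labels, mask): hard labels and a mask selecting samples
        whose softmax confidence exceeds the threshold."""
        probs = torch.softmax(logits, dim=1)
        conf, labels = probs.max(dim=1)
        return labels, conf >= threshold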
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but remains far from well studied.
Disentangled Masked Autoencoders (DisMAE) aims to discover disentangled representations that faithfully reveal the intrinsic features.
DisMAE co-trains an asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization [24.413415998529754]
We propose a new benchmark, Hybrid Domain Generalization (HDG), and a novel metric, $H^2$-CV, which together construct various splits to assess the robustness of algorithms.
Our method outperforms state-of-the-art algorithms on multiple datasets, especially in robustness when data are scarce.
arXiv Detail & Related papers (2024-04-13T13:41:13Z)
- MADG: Margin-based Adversarial Learning for Domain Generalization [25.45950080930517]
We propose a novel adversarial learning DG algorithm, MADG, motivated by a margin loss-based discrepancy metric.
The proposed MADG model learns domain-invariant features across all source domains and uses adversarial training to generalize well to the unseen target domain; a generic adversarial-DG sketch follows this entry.
We extensively experiment with the MADG model on popular real-world DG datasets.
arXiv Detail & Related papers (2023-11-14T19:53:09Z)
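MADG's margin-based discrepancy is not detailed in this summary; as a generic reference point for the adversarial-training ingredient, methods in this family often train a domain classifier through a gradient-reversal layer so the feature extractor learns domain-invariant features. A minimal DANN-style sketch (explicitly not MADG's margin loss; all names are assumptions):

    import torch
    from torch import nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negated, scaled gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lamb * grad_out, None

    class DomainAdversarialHead(nn.Module):
        """Predicts the source domain from features passed through gradient reversal,
        pushing the feature extractor toward domain-indistinguishable features."""
        def __init__(self, dim: int, num_domains: int, lamb: float = 1.0):
            super().__init__()
            self.lamb = lamb
            self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_domains))

        def forward(self, features: torch.Tensor) -> torch.Tensor:
            return self.net(GradReverse.apply(features, self.lamb))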
- On Certifying and Improving Generalization to Unseen Domains [87.00662852876177]
Domain Generalization aims to learn models whose performance remains high on unseen domains encountered at test time.
It is challenging to evaluate DG algorithms comprehensively using only a few benchmark datasets.
We propose a universal certification framework that can efficiently certify the worst-case performance of any DG method.
arXiv Detail & Related papers (2022-06-24T16:29:43Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness prototype representations, the centroids of classes, to perform relational modeling in the embedding space; a minimal sketch of prototype computation follows this entry.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
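Prototype representations of the kind COMEN harnesses reduce to per-class centroids of the embeddings. A minimal sketch under assumed shapes (not COMEN's implementation):

    import torch

    def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
        """One prototype (centroid) per class: the mean embedding of that class.
        features: (n, dim) embeddings; labels: (n,) integer class ids.
        Classes absent from the batch get a zero prototype."""
        protos = torch.zeros(num_classes, features.size(1), dtype=features.dtype)
        protos.index_add_(0, labels, features)  # per-class sums of embeddings
        counts = torch.bincount(labels, minlength=num_classes).clamp(min=1)
        return protos / counts.unsqueeze(1).to(features.dtype)

Relational modeling in the embedding space can then compare sample embeddings against these centroids, e.g. via distances or attention.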
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain-agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class we randomly select one of the domains and hold it out for testing; a minimal sketch of this split follows this entry.
arXiv Detail & Related papers (2021-10-15T10:06:40Z)
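The classwise-DG protocol above is easy to reproduce: for every class, one randomly chosen source domain is held out, and that class's samples from that domain form the test set. A minimal sketch with assumed data structures (not the paper's code):

    import random

    def classwise_dg_split(samples, classes, domains, seed=0):
        """samples: iterable of (x, class_id, domain_id) triples.
        For each class, hold out one randomly chosen domain: that class's
        samples from that domain go to test, everything else to train."""
        rng = random.Random(seed)
        held_out = {c: rng.choice(domains) for c in classes}
        train = [s for s in samples if s[2] != held_out[s[1]]]
        test = [s for s in samples if s[2] == held_out[s[1]]]
        return train, test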
- Domain Generalisation with Domain Augmented Supervised Contrastive Learning (Student Abstract) [17.865068872754293]
This project proposes a new method that combines data augmentation with domain distance minimisation, addressing the problems associated with data augmentation alone and providing a guarantee on learning performance.
Empirically, our method outperforms baseline results on DG benchmarks.
arXiv Detail & Related papers (2020-12-27T16:50:40Z)
- Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA).
Existing DA methods tackle domain shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance compared to both DA and CI methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)