Domain-Irrelevant Representation Learning for Unsupervised Domain
Generalization
- URL: http://arxiv.org/abs/2107.06219v1
- Date: Tue, 13 Jul 2021 16:20:50 GMT
- Title: Domain-Irrelevant Representation Learning for Unsupervised Domain
Generalization
- Authors: Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, Haoxin
Liu
- Abstract summary: Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains.
Since unlabeled data are far more accessible, we explore how unsupervised learning can help deep models generalize across domains.
We propose a Domain-Irrelevant Unsupervised Learning (DIUL) method to cope with the significant and misleading heterogeneity within unlabeled data.
- Score: 22.980607134596077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) aims to help models trained on a set of source
domains generalize better on unseen target domains. The performance of current
DG methods relies largely on sufficient labeled data, which, however, is usually
costly or unavailable. Since unlabeled data are far more accessible, we seek to
explore how unsupervised learning can help deep models generalize across
domains. Specifically, we study a novel generalization problem called
unsupervised domain generalization, which aims to learn generalizable models
with unlabeled data. Furthermore, we propose a Domain-Irrelevant Unsupervised
Learning (DIUL) method to cope with the significant and misleading
heterogeneity within unlabeled data and severe distribution shifts between
source and target data. Surprisingly, we observe that DIUL can not only
counterbalance the scarcity of labeled data but also further strengthen the
generalization ability of models when the labeled data are sufficient. As a
pretraining approach, DIUL proves superior to the ImageNet pretraining protocol
even when the available data are unlabeled and far smaller in quantity than
ImageNet. Extensive experiments clearly demonstrate the effectiveness of our
method compared with state-of-the-art unsupervised learning counterparts.
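The abstract does not spell out DIUL's training objective, but the two-stage protocol it evaluates (unsupervised pretraining on unlabeled multi-domain data, then supervised fine-tuning on scarce labeled data) can be sketched as follows. This is a minimal sketch: the contrastive loss, backbone, and class count below are placeholder assumptions, not the paper's actual choices.

```python
# Two-stage protocol: unsupervised pretraining, then supervised fine-tuning.
# The NT-Xent loss and linear backbone are stand-ins, not DIUL's objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss between two augmented views (placeholder)."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2B, d)
    sim = (z @ z.t() / tau).masked_fill(                   # pairwise similarities,
        torch.eye(2 * B, dtype=torch.bool), float('-inf')) # self-pairs excluded
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in backbone

# Stage 1: unsupervised pretraining on unlabeled, multi-domain images.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(64, 3, 32, 32)                             # unlabeled batch
v1, v2 = x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)
opt.zero_grad()
nt_xent(encoder(v1), encoder(v2)).backward()
opt.step()

# Stage 2: supervised fine-tuning on the (possibly scarce) labeled source domains.
head = nn.Linear(128, 7)                                   # e.g. 7 classes, as in PACS
labels = torch.randint(0, 7, (64,))                        # placeholder labels
clf_loss = F.cross_entropy(head(encoder(x)), labels)
```

Since stage 1 never touches labels, any downstream gain over training from scratch is attributable to the pretraining, which is how the abstract frames the comparison against ImageNet pretraining.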
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoder (DisMAE) aims to discover disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
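As a rough illustration of that dual-branch design, the sketch below pairs a semantic encoder with a deliberately small variation encoder and reconstructs the input from their concatenated codes. DisMAE's masking strategy, disentangling losses, and exact architecture are omitted, and all dimensions are assumptions.

```python
# Asymmetric dual-branch autoencoder: a semantic branch plus a lightweight
# variation branch, trained here with a plain reconstruction loss only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchAE(nn.Module):
    def __init__(self, dim=3 * 32 * 32, d_sem=128, d_var=16):
        super().__init__()
        self.semantic = nn.Sequential(nn.Flatten(), nn.Linear(dim, 256),
                                      nn.ReLU(), nn.Linear(256, d_sem))
        # The variation branch is intentionally small ("lightweight").
        self.variation = nn.Sequential(nn.Flatten(), nn.Linear(dim, d_var))
        self.decoder = nn.Linear(d_sem + d_var, dim)

    def forward(self, x):
        z_sem, z_var = self.semantic(x), self.variation(x)
        return self.decoder(torch.cat([z_sem, z_var], dim=1)), z_sem, z_var

model = DualBranchAE()
x = torch.randn(8, 3, 32, 32)
recon, z_sem, z_var = model(x)
loss = F.mse_loss(recon, x.flatten(1))   # reconstruction term only
```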
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- Overcoming Data Inequality across Domains with Semi-Supervised Domain Generalization [4.921899151930171]
We propose a novel algorithm, ProUD, which can effectively learn domain-invariant features via domain-aware prototypes.
Our experiments on three different benchmark datasets demonstrate the effectiveness of ProUD.
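As a minimal sketch of the domain-aware-prototype idea (not ProUD's full algorithm), per-(domain, class) feature means can be averaged over domains so that domain-specific variation cancels out; shapes and counts below are illustrative.

```python
# Domain-aware prototypes: average per-domain class means across domains to
# obtain class prototypes that suppress domain-specific variation.
import torch

def domain_prototypes(feats, labels, domains, n_classes, n_domains):
    protos = torch.zeros(n_domains, n_classes, feats.size(1))
    for dom in range(n_domains):
        for c in range(n_classes):
            mask = (domains == dom) & (labels == c)
            if mask.any():                    # empty cells stay zero (sketch only)
                protos[dom, c] = feats[mask].mean(0)
    return protos.mean(0)                     # (n_classes, d)

feats = torch.randn(100, 64)                  # backbone features
labels = torch.randint(0, 5, (100,))
domains = torch.randint(0, 3, (100,))
protos = domain_prototypes(feats, labels, domains, n_classes=5, n_domains=3)
pred = torch.cdist(feats[:1], protos).argmin(dim=1)   # nearest-prototype prediction
```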
arXiv Detail & Related papers (2024-03-08T10:49:37Z)
- Improving Pseudo-labelling and Enhancing Robustness for Semi-Supervised Domain Generalization [7.9776163947539755]
We study the problem of Semi-Supervised Domain Generalization (SSDG), which is crucial for real-world applications like automated healthcare.
We propose a new SSDG approach that utilizes novel uncertainty-guided pseudo-labelling with model averaging.
Our uncertainty-guided pseudo-labelling (UPL) uses model uncertainty to improve pseudo-labelling selection, addressing poor model calibration under multi-source unlabelled data.
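A hedged sketch of such an uncertainty gate: ensemble (or weight-average) several models, then keep only low-entropy pseudo-labels. The entropy threshold and the exact uncertainty measure UPL uses are assumptions here.

```python
# Uncertainty-gated pseudo-labelling with prediction averaging across models.
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_labels(models, x, max_entropy=0.5):
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    keep = entropy < max_entropy              # low uncertainty -> trust the label
    return probs.argmax(dim=1)[keep], keep

models = [torch.nn.Linear(64, 5) for _ in range(3)]   # stand-in classifiers
x_unlabeled = torch.randn(32, 64)
labels, keep = pseudo_labels(models, x_unlabeled)
```

Only the retained samples would then enter the supervised loss, which is what makes the selection step matter under poorly calibrated models.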
arXiv Detail & Related papers (2024-01-25T05:55:44Z)
- When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically identify a property of a model that correlates strongly with its generalization, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
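The frequency-space half of this idea can be sketched as below: perturb the amplitude spectrum at a chosen set of frequencies, then invert the FFT. SADA selects the targeted frequencies adversarially from the model's sensitivity, a step omitted here; the fixed low-frequency mask is only a placeholder.

```python
# Amplitude-spectrum perturbation at selected frequencies (sketch, not SADA's
# adversarial selection procedure).
import torch

def spectral_perturb(x, mask, eps=0.1):
    """x: (B, C, H, W) images; mask: (H, W) bool, True at frequencies to perturb."""
    freq = torch.fft.fft2(x)
    amp, phase = freq.abs(), freq.angle()
    amp = amp * (1 + eps * torch.randn_like(amp) * mask)   # jitter masked bands
    return torch.fft.ifft2(torch.polar(amp, phase)).real

x = torch.rand(4, 3, 32, 32)
mask = torch.zeros(32, 32, dtype=torch.bool)
mask[:4, :4] = True           # low-frequency corner of the spectrum (assumption)
x_aug = spectral_perturb(x, mask)
```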
arXiv Detail & Related papers (2022-12-01T20:15:15Z)
- On-Device Domain Generalization [93.79736882489982]
Domain generalization is critical to on-device machine learning applications.
We find that knowledge distillation is a strong candidate for solving the problem.
We propose a simple idea called out-of-distribution knowledge distillation (OKD), which aims to teach the student how the teacher handles (synthetic) out-of-distribution data.
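A minimal sketch of that objective: the student matches the teacher's predictive distribution on synthetic out-of-distribution inputs. How OKD synthesizes those inputs is not described in this summary, so random noise stands in as a placeholder.

```python
# Out-of-distribution knowledge distillation: KL between teacher and student
# predictions on (placeholder) synthetic OOD data.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(64, 5)        # pretrained teacher (stand-in)
student = torch.nn.Linear(64, 5)
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x_ood = torch.randn(32, 64)             # placeholder "synthetic OOD" batch
with torch.no_grad():
    t_logp = F.log_softmax(teacher(x_ood), dim=1)
s_logp = F.log_softmax(student(x_ood), dim=1)
loss = F.kl_div(s_logp, t_logp, log_target=True, reduction='batchmean')
opt.zero_grad(); loss.backward(); opt.step()
```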
arXiv Detail & Related papers (2022-09-15T17:59:31Z)
- On Certifying and Improving Generalization to Unseen Domains [87.00662852876177]
Domain Generalization aims to learn models whose performance remains high on unseen domains encountered at test-time.
It is challenging to evaluate DG algorithms comprehensively using a few benchmark datasets.
We propose a universal certification framework that can efficiently certify the worst-case performance of any DG method.
arXiv Detail & Related papers (2022-06-24T16:29:43Z)
- Towards Data-Free Domain Generalization [12.269045654957765]
How can knowledge contained in models trained on different source data domains be merged into a single model that generalizes well to unseen target domains?
Prior domain generalization methods typically rely on using source domain data, making them unsuitable for private decentralized data.
We propose DEKAN, an approach that extracts and fuses domain-specific knowledge from the available teacher models into a student model robust to domain shift.
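A rough, data-free stand-in for the fusion step: distill the averaged predictions of several domain-specific teachers into one student on synthesized inputs. DEKAN's actual knowledge extraction and fusion are more involved; the noise batch below is purely a placeholder for generated data.

```python
# Fusing domain-specific teachers into a student without any source data.
import torch
import torch.nn.functional as F

teachers = [torch.nn.Linear(64, 5) for _ in range(3)]   # one per source domain
student = torch.nn.Linear(64, 5)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x_syn = torch.randn(32, 64)             # placeholder for synthesized inputs
with torch.no_grad():
    t_prob = torch.stack([F.softmax(t(x_syn), dim=1) for t in teachers]).mean(0)
loss = F.kl_div(F.log_softmax(student(x_syn), dim=1), t_prob,
                reduction='batchmean')
opt.zero_grad(); loss.backward(); opt.step()
```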
arXiv Detail & Related papers (2021-10-09T11:44:05Z)
- Domain-Specific Bias Filtering for Single Labeled Domain Generalization [19.679447374738498]
Domain generalization utilizes multiple labeled source datasets to train a generalizable model for unseen target domains.
Due to expensive annotation costs, the requirement of labeling all the source data is hard to meet in real-world applications.
We propose a novel method called Domain-Specific Bias Filtering (DSBF), which filters out the trained model's domain-specific bias using the unlabeled source data.
arXiv Detail & Related papers (2021-10-02T05:08:01Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
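For intuition, latent-domain discovery can be approximated offline by clustering backbone features into pseudo-domains; the paper itself learns domain assignments inside the network, so the plain k-means below is a simplification, not the authors' method.

```python
# Offline stand-in for latent-domain discovery: cluster features into
# pseudo-domains that can drive domain-specific layers downstream.
import numpy as np
from sklearn.cluster import KMeans

feats = np.random.randn(200, 64)        # features from a pretrained backbone
pseudo_domains = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
```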
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.