Domain Generalization via Semi-supervised Meta Learning
- URL: http://arxiv.org/abs/2009.12658v2
- Date: Wed, 30 Sep 2020 07:47:30 GMT
- Title: Domain Generalization via Semi-supervised Meta Learning
- Authors: Hossein Sharifi-Noghabi, Hossein Asghari, Nazanin Mehrasa, Martin Ester
- Abstract summary: We propose the first method of domain generalization to leverage unlabeled samples.
It is trained by a meta learning approach to mimic the distribution shift between the input source domains and unseen target domains.
Experimental results on benchmark datasets indicate that DGSML outperforms state-of-the-art domain generalization and semi-supervised learning methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of domain generalization is to learn from multiple source domains to
generalize to unseen target domains under distribution discrepancy. Current
state-of-the-art methods in this area are fully supervised, but for many
real-world problems it is hardly possible to obtain enough labeled samples. In
this paper, we propose the first domain generalization method to leverage
unlabeled samples, called DGSML, which combines meta learning's episodic
training with semi-supervised learning. DGSML employs an entropy-based
pseudo-labeling approach to assign labels to unlabeled samples and then
utilizes a novel discrepancy loss to ensure that class centroids before and
after labeling unlabeled samples are close to each other. To learn a
domain-invariant representation, it also utilizes a novel alignment loss to
ensure that the distance between pairs of class centroids, computed after
adding the unlabeled samples, is preserved across different domains. DGSML is
trained by a meta learning approach to mimic the distribution shift between the
input source domains and unseen target domains. Experimental results on
benchmark datasets indicate that DGSML outperforms state-of-the-art domain
generalization and semi-supervised learning methods.
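A minimal sketch of the three ingredients named in the abstract, assuming a PyTorch setup; the function names, the entropy threshold rule, and the use of mean squared error for both losses are illustrative guesses, not the authors' reference implementation:

```python
import math
import torch
import torch.nn.functional as F

def entropy_pseudo_labels(logits, max_entropy_ratio=0.5):
    """Assign pseudo-labels, keeping only low-entropy (confident) predictions."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    # Assumed rule: keep samples below a fraction of the maximum entropy log(C).
    keep = entropy < max_entropy_ratio * math.log(logits.size(1))
    return probs.argmax(dim=1), keep

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class (zeros for classes absent from the batch)."""
    centroids = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = features[mask].mean(dim=0)
    return centroids

def discrepancy_loss(centroids_labeled, centroids_with_pseudo):
    """Centroids before vs. after adding pseudo-labeled samples should stay close."""
    return F.mse_loss(centroids_with_pseudo, centroids_labeled)

def alignment_loss(centroids_domain_a, centroids_domain_b):
    """Pairwise centroid distances should be preserved across two source domains."""
    return F.mse_loss(torch.cdist(centroids_domain_a, centroids_domain_a),
                      torch.cdist(centroids_domain_b, centroids_domain_b))
```

In the episodic setup, the source domains would additionally be split at each iteration into meta-train and meta-test domains, so that the held-out meta-test domain simulates the shift to an unseen target domain.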
Related papers
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization [26.255457629490135]
We propose a novel framework that combines domain-aware labels with a dual classifier to produce high-quality pseudo-labels.
To predict accurate pseudo-labels under domain shift, a domain-aware pseudo-labeling module is developed.
Also, because generalization and pseudo-labeling pursue inconsistent goals, we employ a dual classifier to perform pseudo-labeling and domain generalization independently during training.
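As a rough illustration of the dual-classifier idea, the sketch below uses a shared backbone with two independent linear heads, one reserved for pseudo-labeling and one for the generalization objective; the architecture and dimensions are assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class DualClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.pseudo_head = nn.Linear(feat_dim, num_classes)  # pseudo-labeling only
        self.main_head = nn.Linear(feat_dim, num_classes)    # generalization only

    def forward(self, x):
        feats = self.backbone(x)
        return self.pseudo_head(feats), self.main_head(feats)

# Toy usage with a stand-in backbone.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
model = DualClassifier(backbone, feat_dim=128, num_classes=10)
pseudo_logits, main_logits = model(torch.randn(4, 3, 32, 32))
with torch.no_grad():
    pseudo_labels = pseudo_logits.argmax(dim=1)  # supervise main_head with these
```

Keeping two heads confines the pseudo-labeling objective to its own classifier, so the two goals interact only through the shared backbone.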
arXiv Detail & Related papers (2021-10-10T15:17:27Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
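A minimal cross-domain InfoNCE-style sketch of this idea: target features are pulled toward source features sharing their (pseudo) class and pushed away from the rest. The class-based pairing rule and the temperature are assumptions about the method:

```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive(target_feats, target_pseudo, source_feats,
                             source_labels, temperature=0.1):
    t = F.normalize(target_feats, dim=1)          # anchors from target domain
    s = F.normalize(source_feats, dim=1)          # candidates from source domain
    logits = t @ s.T / temperature                # (n_target, n_source)
    pos_mask = (target_pseudo[:, None] == source_labels[None, :]).float()
    log_prob = F.log_softmax(logits, dim=1)
    # Average log-probability over all same-class source positives per anchor.
    pos_count = pos_mask.sum(dim=1).clamp_min(1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    has_pos = pos_mask.sum(dim=1) > 0             # skip anchors with no positive
    return loss[has_pos].mean()

# Toy usage with random features and labels:
loss = cross_domain_contrastive(torch.randn(8, 64), torch.randint(0, 5, (8,)),
                                torch.randn(16, 64), torch.randint(0, 5, (16,)))
```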
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a Cross-domain Gradient Discrepancy Minimization (CGDM) method that explicitly minimizes the discrepancy between the gradients generated by source samples and target samples.
To compute the gradient signal of target samples, we further obtain target pseudo-labels through clustering-based self-supervised learning.
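The gradient-discrepancy idea can be sketched as below, assuming PyTorch; measuring disagreement with cosine distance between flattened gradients is an assumption, and the argmax pseudo-labels here are a stand-in for the clustering-based ones used by the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(32, 4)
xs, ys = torch.randn(8, 32), torch.randint(0, 4, (8,))   # labeled source batch
xt = torch.randn(8, 32)                                   # unlabeled target batch
yt = model(xt).argmax(dim=1).detach()                     # stand-in pseudo-labels

loss_s = F.cross_entropy(model(xs), ys)
loss_t = F.cross_entropy(model(xt), yt)
params = list(model.parameters())
# create_graph=True keeps the graph so the discrepancy is itself differentiable.
gs = torch.autograd.grad(loss_s, params, create_graph=True)
gt = torch.autograd.grad(loss_t, params, create_graph=True)
gs_flat = torch.cat([g.flatten() for g in gs])
gt_flat = torch.cat([g.flatten() for g in gt])
grad_discrepancy = 1 - F.cosine_similarity(gs_flat, gt_flat, dim=0)
total = loss_s + loss_t + grad_discrepancy   # second-order term in the backward
total.backward()
```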
arXiv Detail & Related papers (2021-06-08T07:35:40Z)
- Unsupervised domain adaptation via double classifiers based on high confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of knowledge transfer from a labeled source domain to an unlabeled target domain.
Many domain adaptation (DA) methods use centroids to align the local distributions of different domains, that is, to align the same class across domains.
This work rethinks what alignment between different domains means and studies how to achieve true alignment between them.
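A minimal sketch of high-confidence pseudo-label selection with double classifiers: a target sample is accepted only when both heads agree and both are confident. The agreement-plus-threshold rule is one plausible reading of the summary, not the paper's exact criterion:

```python
import torch
import torch.nn.functional as F

def select_pseudo_labels(logits1, logits2, threshold=0.9):
    """Return predicted labels and a mask of samples both heads accept."""
    p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)
    conf1, pred1 = p1.max(dim=1)
    conf2, pred2 = p2.max(dim=1)
    keep = (pred1 == pred2) & (conf1 > threshold) & (conf2 > threshold)
    return pred1, keep

# Toy usage with two classifier heads' logits for 16 target samples:
logits1, logits2 = torch.randn(16, 10) * 5, torch.randn(16, 10) * 5
labels, keep = select_pseudo_labels(logits1, logits2)
print(f"accepted {keep.sum().item()} / {keep.numel()} target samples")
```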
arXiv Detail & Related papers (2021-05-11T00:51:31Z)
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
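One way to picture the clustering intuition: the few labeled target samples act as class anchors, and unlabeled target features are attracted to the anchor of their predicted class. The anchor construction and the attraction loss below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def anchor_attraction(unlabeled_feats, unlabeled_preds, labeled_feats, labeled_targets):
    # One anchor per class, averaged over the few labeled target samples.
    num_classes = int(labeled_targets.max().item()) + 1
    anchors = torch.stack([labeled_feats[labeled_targets == c].mean(dim=0)
                           for c in range(num_classes)])
    # Pull each unlabeled feature toward the anchor of its predicted class.
    return F.mse_loss(unlabeled_feats, anchors[unlabeled_preds])

# Toy usage; assumes at least one labeled target sample per class.
labeled_feats, labeled_targets = torch.randn(10, 64), torch.arange(10) % 5
loss = anchor_attraction(torch.randn(32, 64), torch.randint(0, 5, (32,)),
                         labeled_feats, labeled_targets)
```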
arXiv Detail & Related papers (2021-04-19T16:07:32Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
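A crude sketch of latent-domain discovery, assuming scikit-learn: cluster backbone features into k pseudo domains and reuse the cluster assignments as domain labels for a downstream adaptation method. KMeans here is a stand-in; the paper discovers domains inside the architecture itself:

```python
import numpy as np
from sklearn.cluster import KMeans

features = np.random.randn(500, 128)   # stand-in for backbone image features
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
latent_domain = kmeans.labels_         # pseudo domain label per image
print(np.bincount(latent_domain))      # size of each discovered latent domain
```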
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches address domain generalization by learning domain-invariant representations across the source domains, but such representations fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee: the target error is upper bounded by the error of the target labeling process.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.