Domain Generalization via Optimal Transport with Metric Similarity Learning
- URL: http://arxiv.org/abs/2007.10573v2
- Date: Mon, 4 Apr 2022 22:31:36 GMT
- Title: Domain Generalization via Optimal Transport with Metric Similarity Learning
- Authors: Fan Zhou, Zhuqing Jiang, Changjian Shui, Boyu Wang and Brahim Chaib-draa
- Abstract summary: Generalizing knowledge to unseen domains, where data and labels are unavailable, is crucial for machine learning models.
We tackle the domain generalization problem to learn from multiple source domains and generalize to a target domain with unknown statistics.
- Score: 16.54463315552112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalizing knowledge to unseen domains, where data and labels are
unavailable, is crucial for machine learning models. We tackle the domain
generalization problem to learn from multiple source domains and generalize to
a target domain with unknown statistics. The crucial idea is to extract the
underlying invariant features across all the domains. Previous domain
generalization approaches mainly focused on learning invariant features and
stacking the features learned from each source domain to generalize to a new
target domain, while ignoring the label information; this leads to
indistinguishable features and an ambiguous classification boundary. One
possible solution is to constrain the label similarity when extracting the
invariant features and to exploit the label similarities for class-specific
cohesion and separation of features across domains. We therefore adopt optimal
transport with the Wasserstein distance, which can constrain the class-label
similarity, for adversarial training, and further deploy a metric learning
objective that leverages the label information to achieve a distinguishable
classification boundary. Empirical results show that the proposed method
outperforms most of the baselines, and ablation studies demonstrate the
effectiveness of each of its components.
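The abstract combines a Wasserstein-distance alignment term with a label-aware metric learning objective. The PyTorch sketch below is a rough, hypothetical illustration of those two ingredients, not the authors' implementation: it pairs an entropic Sinkhorn approximation of the optimal-transport cost with a simple contrastive loss for class cohesion and separation. The function names, ground cost, margin, and loss weighting are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def sinkhorn_wasserstein(x_s, x_t, eps=0.1, n_iters=50):
    # Entropic-regularized optimal transport (Sinkhorn) between two feature
    # batches, used as a generic stand-in for the Wasserstein alignment term;
    # the paper's exact ground cost and solver may differ.
    cost = torch.cdist(x_s, x_t, p=2) ** 2              # pairwise squared distances
    cost = cost / (cost.max() + 1e-8)                   # normalise for a stable Gibbs kernel
    a = torch.full((x_s.size(0),), 1.0 / x_s.size(0))   # uniform source weights
    b = torch.full((x_t.size(0),), 1.0 / x_t.size(0))   # uniform target weights
    K = torch.exp(-cost / eps)                           # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                             # Sinkhorn fixed-point updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)           # approximate transport plan
    return torch.sum(plan * cost)                        # transport (alignment) cost

def metric_similarity_loss(feats, labels, margin=1.0):
    # Contrastive-style objective: pull same-class features together and push
    # different classes apart by at least `margin` (an assumed, simplified
    # version of the metric learning term described in the abstract).
    # Assumes the batch contains at least one same-class pair.
    dist = torch.cdist(feats, feats, p=2)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    pos = dist[same & off_diag].mean()                   # intra-class cohesion
    neg = F.relu(margin - dist[~same]).mean()            # inter-class separation
    return pos + neg

# Toy usage: feature batches from two source domains with shared class labels.
x_a, y_a = torch.randn(32, 64), torch.randint(0, 5, (32,))
x_b = torch.randn(32, 64)
loss = sinkhorn_wasserstein(x_a, x_b) + 0.5 * metric_similarity_loss(x_a, y_a)
```

In a full pipeline, both terms would presumably be computed on features from a shared encoder over pairs of source domains and combined with the classification loss, as the abstract describes.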
Related papers
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z) - Unsupervised Domain Adaptation for Point Cloud Semantic Segmentation via Graph Matching [14.876681993079062]
We propose a graph-based framework to explore the local-level feature alignment between the two domains.
We also formulate a category-guided contrastive loss to guide the segmentation model to learn discriminative features on the target domain.
arXiv Detail & Related papers (2022-08-09T02:30:15Z) - Adaptive Methods for Aggregated Domain Generalization [26.215904177457997]
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
arXiv Detail & Related papers (2021-12-09T08:57:01Z) - Domain Adaptation for Sentiment Analysis Using Increased Intraclass Separation [31.410122245232373]
Cross-domain sentiment analysis methods have received significant attention.
We introduce a new domain adaptation method which induces large margins between different classes in an embedding space.
This embedding space is trained to be domain-agnostic by matching the data distributions across the domains.
arXiv Detail & Related papers (2021-07-04T11:39:12Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Adaptive Domain-Specific Normalization for Generalizable Person Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Domain Generalization via Semi-supervised Meta Learning [7.722498348924133]
We propose the first method of domain generalization to leverage unlabeled samples.
It is trained by a meta learning approach to mimic the distribution shift between the input source domains and unseen target domains.
Experimental results on benchmark datasets indicate that the proposed method outperforms state-of-the-art domain generalization and semi-supervised learning methods.
arXiv Detail & Related papers (2020-09-26T18:05:04Z) - Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z) - Improving Domain-Adapted Sentiment Classification by Deep Adversarial Mutual Learning [51.742040588834996]
Domain-adapted sentiment classification refers to training on a labeled source domain to well infer document-level sentiment on an unlabeled target domain.
We propose a novel deep adversarial mutual learning approach involving two groups of feature extractors, domain discriminators, sentiment classifiers, and label probers.
arXiv Detail & Related papers (2020-02-01T01:22:44Z)