Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation
- URL: http://arxiv.org/abs/2010.04647v3
- Date: Sun, 4 Apr 2021 18:10:56 GMT
- Title: Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation
- Authors: Bo Li, Yezhen Wang, Shanghang Zhang, Dongsheng Li, Trevor Darrell,
Kurt Keutzer, Han Zhao
- Abstract summary: We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
- Score: 109.73983088432364
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The success of supervised learning hinges on the assumption that the training
and test data come from the same underlying distribution, which is often not
valid in practice due to potential distribution shift. In light of this, most
existing methods for unsupervised domain adaptation focus on achieving
domain-invariant representations and small source domain error. However, recent
works have shown that this is not sufficient to guarantee good generalization
on the target domain, and in fact, is provably detrimental under label
distribution shift. Furthermore, in many real-world applications it is often
feasible to obtain a small amount of labeled data from the target domain and
use them to facilitate model training with source data. Inspired by the above
observations, in this paper we propose the first method that aims to
simultaneously learn invariant representations and risks under the setting of
semi-supervised domain adaptation (Semi-DA). First, we provide a finite sample
bound for both classification and regression problems under Semi-DA. The bound
suggests a principled way to obtain target generalization, i.e. by aligning
both the marginal and conditional distributions across domains in feature
space. Motivated by this, we then introduce the LIRR algorithm for jointly
Learning Invariant Representations and Risks. Finally, extensive experiments
are conducted on both classification and regression tasks, which demonstrate
that LIRR consistently achieves state-of-the-art performance and significant
improvements over methods that only learn invariant representations or
invariant risks.
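As a rough illustration of the objective the bound suggests, the following PyTorch sketch combines a supervised risk on the labeled source and target data with penalties aligning the marginal and class-conditional feature distributions. The moment-matching losses and the names `marginal_alignment`, `conditional_alignment`, and `lirr_style_loss` are illustrative stand-ins, not the authors' exact LIRR objective.

```python
import torch
import torch.nn.functional as F

def marginal_alignment(feat_src, feat_tgt):
    # First-moment matching between the two marginal feature distributions.
    return (feat_src.mean(dim=0) - feat_tgt.mean(dim=0)).pow(2).sum()

def conditional_alignment(feat_src, y_src, feat_tgt, y_tgt, num_classes):
    # Match class-conditional feature means using the few labeled target samples.
    loss, matched = 0.0, 0
    for c in range(num_classes):
        src_c, tgt_c = feat_src[y_src == c], feat_tgt[y_tgt == c]
        if len(src_c) > 0 and len(tgt_c) > 0:
            loss = loss + (src_c.mean(dim=0) - tgt_c.mean(dim=0)).pow(2).sum()
            matched += 1
    return loss / max(matched, 1)

def lirr_style_loss(encoder, classifier, x_src, y_src, x_tgt_lab, y_tgt_lab,
                    x_tgt_unlab, num_classes, lam=0.1):
    f_src = encoder(x_src)
    f_tgt_lab, f_tgt_unlab = encoder(x_tgt_lab), encoder(x_tgt_unlab)
    # Invariant risk: supervised loss on source plus the few labeled target points.
    sup = (F.cross_entropy(classifier(f_src), y_src)
           + F.cross_entropy(classifier(f_tgt_lab), y_tgt_lab))
    f_tgt = torch.cat([f_tgt_lab, f_tgt_unlab], dim=0)
    # Invariant representation: align marginal and conditional distributions.
    align = (marginal_alignment(f_src, f_tgt)
             + conditional_alignment(f_src, y_src, f_tgt_lab, y_tgt_lab, num_classes))
    return sup + lam * align
```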
Related papers
- Domain Adaptation via Rebalanced Sub-domain Alignment [22.68115322836635]
Unsupervised domain adaptation (UDA) is a technique used to transfer knowledge from a labeled source domain to a related unlabeled target domain.
Many UDA methods have shown success in the past, but they often assume that the source and target domains have identical class label distributions.
We propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
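A minimal sketch of what reweighting the source classification error can look like in practice, assuming per-class importance weights (estimated target over source class frequencies) are already available; this is a generic instance, not the paper's sub-domain alignment procedure.

```python
import torch
import torch.nn.functional as F

def reweighted_source_loss(logits, y_src, src_class_freq, tgt_class_freq_est):
    # Per-class importance weights: estimated target frequency / source frequency.
    w = tgt_class_freq_est / src_class_freq.clamp_min(1e-8)
    per_example = F.cross_entropy(logits, y_src, reduction="none")
    return (w[y_src] * per_example).mean()
```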
arXiv Detail & Related papers (2023-02-03T21:30:40Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the best of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
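A plain Gaussian-kernel MMD between two feature batches is sketched below; the paper's memory-bank machinery is omitted, and the biased batch estimate here is only illustrative.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    # Squared MMD with an RBF kernel; a biased batch estimate (diagonals included).
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```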
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
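A loose sketch of penalizing the KL divergence between class posteriors computed on two domains; the paper's constrained maximum cross-domain likelihood formulation is more involved than this batch-averaged version.

```python
import torch
import torch.nn.functional as F

def posterior_kl(logits_dom_a, logits_dom_b, eps=1e-8):
    # Batch-averaged class posteriors for each domain.
    p_a = F.softmax(logits_dom_a, dim=1).mean(dim=0).clamp_min(eps)
    p_b = F.softmax(logits_dom_b, dim=1).mean(dim=0).clamp_min(eps)
    # KL(p_a || p_b); symmetrize if preferred.
    return (p_a * (p_a / p_b).log()).sum()
```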
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We then propose two strategies: gap estimation methods that guide the selection of a better hypothesis for the target, and direct gap minimization that adapts model parameters using online target samples.
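One common way to adapt parameters with online unlabeled target samples is entropy minimization on the model's own predictions; the sketch below shows that generic recipe, which is not necessarily the gap-minimization update the paper proposes.

```python
import torch
import torch.nn.functional as F

def entropy_adapt_step(model, optimizer, x_target):
    # One online update: minimize prediction entropy on an unlabeled target batch.
    optimizer.zero_grad()
    probs = F.softmax(model(x_target), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```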
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution differs.
Recently, Source-Free Domain Adaptation (SFDA), which tackles the domain adaptation problem without access to source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework that greatly reduces flawed model predictions using a soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
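A compact sketch of the soft pseudo-label idea, plus an illustrative curriculum ramp for the domain-loss weighting; the paper's two-stage details are simplified away and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def soft_pseudo_label_loss(student_logits, teacher_logits, temperature=2.0):
    # Soft targets: the teacher's full (temperature-smoothed) class distribution.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1).detach()
    return -(soft_targets * F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()

def curriculum_weight(step, total_steps):
    # Illustrative linear ramp shifting weight toward the target-domain loss.
    return min(1.0, step / max(total_steps, 1))
```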
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
- Mapping conditional distributions for domain adaptation under generalized target shift [0.0]
We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
Recent approaches learn domain-invariant representations, yet they have practical limitations and rely on strong assumptions that may not hold in practice.
In this paper, we explore a novel and general approach to align pretrained representations, which circumvents existing drawbacks.
arXiv Detail & Related papers (2021-10-26T14:25:07Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
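As a simplified stand-in for the paper's sample-based KL estimate, the sketch below fits a diagonal Gaussian to each domain's minibatch of representations and evaluates their closed-form KL; the actual method works with a probabilistic encoder rather than raw batch statistics.

```python
import torch

def diag_gaussian_kl(feat_src, feat_tgt, eps=1e-6):
    # Fit diagonal Gaussians to each minibatch of features.
    mu_s, var_s = feat_src.mean(dim=0), feat_src.var(dim=0) + eps
    mu_t, var_t = feat_tgt.mean(dim=0), feat_tgt.var(dim=0) + eps
    # Closed-form KL(N(mu_s, var_s) || N(mu_t, var_t)) for diagonal covariances.
    return 0.5 * (var_s / var_t + (mu_s - mu_t).pow(2) / var_t
                  - 1.0 + (var_t / var_s).log()).sum()
```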
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment [27.671964294233756]
In this study, we focus on the unsupervised domain adaptation problem where an approximate inference model is to be learned from a labeled data domain.
The success of unsupervised domain adaptation largely relies on the cross-domain feature alignment.
We introduce a Gaussian-guided latent alignment approach to align the latent feature distributions of the two domains under the guidance of the prior distribution.
In such an indirect way, the distributions over the samples from the two domains will be constructed on a common feature space, i.e., the space of the prior.
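A toy version of pulling both domains toward a shared standard-normal prior: each batch of latent features pays a closed-form KL penalty to N(0, I). This moment-based proxy is illustrative, not the paper's exact alignment loss.

```python
import torch

def prior_alignment(feat, eps=1e-6):
    # Closed-form KL(N(mu, var) || N(0, I)) of the batch-fit Gaussian to the prior.
    mu, var = feat.mean(dim=0), feat.var(dim=0) + eps
    return 0.5 * (var + mu.pow(2) - 1.0 - var.log()).sum()

def gaussian_guided_loss(feat_src, feat_tgt):
    # Both domains are pulled toward the same prior, hence indirectly aligned.
    return prior_alignment(feat_src) + prior_alignment(feat_tgt)
```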
arXiv Detail & Related papers (2020-06-23T05:33:54Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)