Domain Conditional Predictors for Domain Adaptation
- URL: http://arxiv.org/abs/2106.13899v1
- Date: Fri, 25 Jun 2021 22:15:54 GMT
- Title: Domain Conditional Predictors for Domain Adaptation
- Authors: Joao Monteiro, Xavier Gibert, Jianqiao Feng, Vincent Dumoulin,
Dar-Shyang Lee
- Abstract summary: We consider a conditional modeling approach in which predictions, in addition to being dependent on the input data, use information relative to the underlying data-generating distribution.
We argue that such an approach is more generally applicable than current domain adaptation methods.
- Score: 3.951376400628575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning guarantees often rely on assumptions of i.i.d. data, which will
likely be violated in practice once predictors are deployed to perform
real-world tasks. Domain adaptation approaches thus appeared as a useful
framework yielding extra flexibility in that distinct train and test data
distributions are supported, provided that other assumptions are satisfied such
as covariate shift, which requires the conditional distribution of labels given
the inputs to be independent of the underlying data distribution. Several approaches were
introduced in order to induce generalization across varying train and test data
sources, and those often rely on the general idea of domain-invariance, in such
a way that the data-generating distributions are to be disregarded by the
prediction model. In this contribution, we tackle the problem of generalizing
across data sources by approaching it from the opposite direction: we consider
a conditional modeling approach in which predictions, in addition to being
dependent on the input data, use information relative to the underlying
data-generating distribution. That is, the model has an explicit mechanism
to adapt to changing environments and/or new data sources. We argue that such
an approach is more generally applicable than current domain adaptation methods
since it does not require extra assumptions such as covariate shift and further
yields simpler training algorithms that avoid a common source of training
instabilities caused by minimax formulations, often employed in
domain-invariant methods.
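To make the contrast concrete: covariate shift assumes the label distribution p(y|x) stays fixed while p(x) changes across domains, whereas the conditional approach models p(y | x, domain) directly. The abstract does not spell out the architecture, so the following is only a minimal PyTorch sketch under assumed choices: the domain enters through a learned embedding concatenated to the features, and a hypothetical auxiliary domain head infers a soft domain code when the source of a test point is unknown.

```python
import torch
import torch.nn as nn

class DomainConditionalClassifier(nn.Module):
    """Illustrative sketch of a domain-conditional predictor.
    The conditioning mechanism (concatenating a learned domain
    embedding) is an assumption, not the paper's exact design."""

    def __init__(self, in_dim, n_classes, n_domains, domain_dim=16):
        super().__init__()
        self.domain_embedding = nn.Embedding(n_domains, domain_dim)
        self.feature_net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        # The task head sees features AND a domain code, so it models
        # p(y | x, domain) instead of enforcing domain invariance.
        self.task_head = nn.Linear(64 + domain_dim, n_classes)
        # Hypothetical auxiliary head: infers the domain from features,
        # giving the model a mechanism to adapt to new data sources.
        self.domain_head = nn.Linear(64, n_domains)

    def forward(self, x, domain_id=None):
        h = self.feature_net(x)
        if domain_id is None:
            # Unknown source at test time: use a soft, inferred domain code.
            soft = torch.softmax(self.domain_head(h), dim=-1)
            d = soft @ self.domain_embedding.weight
        else:
            d = self.domain_embedding(domain_id)
        return self.task_head(torch.cat([h, d], dim=-1))
```

Because the domain code is an input to the predictor rather than a nuisance to be removed, training reduces to ordinary maximum likelihood on task and domain labels, with no adversarial minimax game.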
Related papers
- Causality-oriented robustness: exploiting general additive interventions [3.871660145364189]
In this paper, we focus on causality-oriented robustness and propose Distributional Robustness via Invariant Gradients (DRIG).
In a linear setting, we prove that DRIG yields predictions that are robust among a data-dependent class of distribution shifts.
We extend our approach to the semi-supervised domain adaptation setting to further improve prediction performance.
arXiv Detail & Related papers (2023-07-18T16:22:50Z)
- Predicting Out-of-Domain Generalization with Neighborhood Invariance [59.05399533508682]
We propose a measure of a classifier's output invariance in a local transformation neighborhood.
Our measure is simple to calculate, does not depend on the test point's true label, and can be applied even in out-of-domain (OOD) settings.
In experiments on benchmarks in image classification, sentiment analysis, and natural language inference, we demonstrate a strong and robust correlation between our measure and actual OOD generalization.
(A minimal sketch of such an invariance score appears after this list.)
arXiv Detail & Related papers (2022-07-05T14:55:16Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution is different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
arXiv Detail & Related papers (2021-08-31T03:17:54Z)
- Adaptive Conformal Inference Under Distribution Shift [0.0]
We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion.
Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black box method.
We test our method, adaptive conformal inference, on two real-world datasets and find that its predictions are robust to visible and significant distribution shifts.
(A minimal sketch of the online update appears after this list.)
arXiv Detail & Related papers (2021-06-01T01:37:32Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- On generalization in moment-based domain adaptation [1.8047694351309205]
Domain adaptation algorithms are designed to minimize the misclassification risk of a discriminative model for a target domain with little training data.
Standard approaches measure the adaptation discrepancy using distances between the empirical probability distributions of the source and target domains.
(A minimal sketch of a moment-based discrepancy appears after this list.)
arXiv Detail & Related papers (2020-02-19T16:05:27Z)
- Few-shot Domain Adaptation by Causal Mechanism Transfer [107.08605582020866]
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target domain data and many labeled source domain data are available.
Many of the current DA methods base their transfer assumptions on either parametrized distribution shift or apparent distribution similarities.
We propose mechanism transfer, a meta-distributional scenario in which a data generating mechanism is invariant among domains.
arXiv Detail & Related papers (2020-02-10T02:16:53Z)
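For the neighborhood-invariance measure above, here is a minimal sketch assuming the local transformation neighborhood is realized by ordinary data augmentations and invariance is scored as prediction agreement; the paper's exact transformations and scoring may differ.

```python
import numpy as np

def neighborhood_invariance(model, x, transforms, n_samples=20, seed=0):
    """Score how stable a classifier's prediction is within a local
    transformation neighborhood of x. No true label is required, so
    the score is computable on out-of-domain test points.

    model:      callable mapping an input to class probabilities
    transforms: list of callables, each returning a perturbed copy of x
    """
    rng = np.random.default_rng(seed)
    base_pred = np.argmax(model(x))
    agree = 0
    for _ in range(n_samples):
        t = transforms[rng.integers(len(transforms))]
        agree += int(np.argmax(model(t(x))) == base_pred)
    return agree / n_samples  # 1.0 means a fully invariant neighborhood
```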
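For adaptive conformal inference, a minimal sketch of the online wrapper, assuming split-conformal nonconformity scores and the simple step-size update alpha_{t+1} = alpha_t + gamma * (alpha - err_t); details such as the finite-sample quantile correction are simplified here.

```python
import numpy as np

def adaptive_conformal(cal_scores, stream, alpha=0.1, gamma=0.005):
    """Online prediction sets whose working level alpha_t adapts to
    unknown distribution shift, wrapping any black-box score function.

    cal_scores: nonconformity scores from a held-out calibration set
    stream:     iterable of (score_fn, candidates, y_true), where
                score_fn(y) is the nonconformity score of label y
    """
    alpha_t = alpha
    hits = []
    for score_fn, candidates, y_true in stream:
        level = min(max(1.0 - alpha_t, 0.0), 1.0)
        q = np.quantile(cal_scores, level)           # current cutoff
        pred_set = {y for y in candidates if score_fn(y) <= q}
        err = 0.0 if y_true in pred_set else 1.0
        alpha_t += gamma * (alpha - err)             # widen after a miss
        hits.append(1.0 - err)
    return float(np.mean(hits))                      # realized coverage
```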
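For the moment-based line of work, an illustrative discrepancy comparing the first two empirical moments of source and target features; the actual moment distances used in that literature (e.g., central moment discrepancy) are more refined.

```python
import numpy as np

def moment_distance(source, target):
    """Toy adaptation discrepancy: distance between the first two
    empirical moments of source and target feature matrices
    (rows are samples, columns are features)."""
    mean_gap = np.linalg.norm(source.mean(axis=0) - target.mean(axis=0))
    cov_gap = np.linalg.norm(np.cov(source, rowvar=False)
                             - np.cov(target, rowvar=False))
    return mean_gap + cov_gap
```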