Adversarial Weighting for Domain Adaptation in Regression
- URL: http://arxiv.org/abs/2006.08251v4
- Date: Wed, 15 Sep 2021 12:10:33 GMT
- Title: Adversarial Weighting for Domain Adaptation in Regression
- Authors: Antoine de Mathelin, Guillaume Richard, Francois Deheeger, Mathilde Mougeot, Nicolas Vayatis
- Abstract summary: We present a novel instance-based approach to handle regression tasks in the context of supervised domain adaptation.
We develop an adversarial network algorithm which learns both the source weighting scheme and the task in one feed-forward gradient descent.
- Score: 4.34858896385326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel instance-based approach to handle regression tasks in the
context of supervised domain adaptation under an assumption of covariate shift.
The approach developed in this paper is based on the assumption that the task
on the target domain can be efficiently learned by adequately reweighting the
source instances during the training phase. We introduce a novel formulation of the
optimization objective for domain adaptation which relies on a discrepancy
distance characterizing the difference between domains according to a specific
task and a class of hypotheses. To solve this problem, we develop an
adversarial network algorithm which learns both the source weighting scheme and
the task in one feed-forward gradient descent. We provide numerical evidence of
the relevance of the method on public data sets for regression domain
adaptation through reproducible experiments.
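To make the idea concrete, below is a minimal, hypothetical sketch (not the authors' reference implementation) of adversarial source-instance reweighting for supervised regression domain adaptation: trainable softmax weights over the source sample, a task regressor fitted on the weighted source loss plus the labeled target loss, and an adversarial regressor that tries to maximize the gap between its weighted source error and its target error, a gap the weights and task then shrink. The MLP architectures, the trade-off parameter lam, and the alternating update scheme are illustrative assumptions; the paper itself learns the weighting scheme and the task in one feed-forward gradient descent.

```python
# Hypothetical sketch of adversarial instance reweighting for regression DA.
# Assumed inputs: 2-D float tensors Xs, Xt and 1-D float tensors ys, yt
# (source data and a few labeled target points, covariate-shift setting).
import torch
import torch.nn as nn


def mlp(in_dim: int) -> nn.Module:
    """Small regressor; the architecture is an illustrative choice."""
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))


def adversarial_reweighting(Xs, ys, Xt, yt, epochs=500, lam=1.0, lr=1e-3):
    task = mlp(Xs.shape[1])     # task regressor
    critic = mlp(Xs.shape[1])   # adversarial regressor used to measure discrepancy
    logits = torch.zeros(len(Xs), requires_grad=True)  # source weights = softmax(logits)
    per_sample_mse = nn.MSELoss(reduction="none")

    opt_min = torch.optim.Adam(list(task.parameters()) + [logits], lr=lr)
    opt_max = torch.optim.Adam(critic.parameters(), lr=lr)

    def weighted_gap():
        """Gap between the critic's weighted source error and its target error."""
        w = torch.softmax(logits, dim=0)
        src_err = (w * per_sample_mse(critic(Xs).squeeze(-1), ys)).sum()
        tgt_err = per_sample_mse(critic(Xt).squeeze(-1), yt).mean()
        return src_err - tgt_err

    for _ in range(epochs):
        # Adversary step: the critic tries to maximize the discrepancy term.
        opt_max.zero_grad()
        (-weighted_gap()).backward()
        opt_max.step()

        # Main step: the task regressor and the weights jointly minimize the
        # weighted source loss, the target loss, and the discrepancy term.
        opt_min.zero_grad()
        w = torch.softmax(logits, dim=0)
        task_loss = (w * per_sample_mse(task(Xs).squeeze(-1), ys)).sum() \
            + per_sample_mse(task(Xt).squeeze(-1), yt).mean()
        (task_loss + lam * weighted_gap().abs()).backward()
        opt_min.step()

    return task, torch.softmax(logits, dim=0).detach()
```

In this sketch, source instances whose weights shrink toward zero are those the adversary can use to separate the weighted source from the target; the returned task regressor is the adapted model and the returned weights expose the learned source weighting scheme.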
Related papers
- Stratified Domain Adaptation: A Progressive Self-Training Approach for Scene Text Recognition [1.2878987353423252]
Unsupervised domain adaptation (UDA) has become increasingly prevalent in scene text recognition (STR)
We introduce the Stratified Domain Adaptation (StrDA) approach, which examines the gradual escalation of the domain gap for the learning process.
We propose a novel method for employing domain discriminators to estimate the out-of-distribution and domain discriminative levels of data samples.
arXiv Detail & Related papers (2024-10-13T16:40:48Z)
- Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation [9.875170018805768]
Unsupervised Domain Adaptation (UDA) endeavors to adjust models trained on a source domain to perform well on a target domain without requiring additional annotations.
We propose a novel auxiliary task called Guidance Training.
This task facilitates the effective utilization of cross-domain mixed sampling techniques while mitigating distribution shifts from the real world.
We demonstrate the efficacy of our approach by integrating it with existing methods, consistently improving performance.
arXiv Detail & Related papers (2024-03-22T07:12:48Z)
- Adversarial Bi-Regressor Network for Domain Adaptive Regression [52.5168835502987]
It is essential to learn a cross-domain regressor to mitigate the domain shift.
This paper proposes a novel method, Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model.
arXiv Detail & Related papers (2022-09-20T18:38:28Z)
- Domain Adaptation from Scratch [24.612696638386623]
We present a new learning setup, "domain adaptation from scratch", which we believe to be crucial for extending the reach of NLP to sensitive domains.
In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain.
Our study compares several approaches for this challenging setup, ranging from data selection and domain adaptation algorithms to active learning paradigms.
arXiv Detail & Related papers (2022-09-02T05:55:09Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Domain Adaptation with Incomplete Target Domains [61.68950959231601]
We propose an Incomplete Data Imputation based Adversarial Network (IDIAN) model to address this new domain adaptation challenge.
In the proposed model, we design a data imputation module to fill the missing feature values based on the partial observations in the target domain.
We conduct experiments on both cross-domain benchmark tasks and a real world adaptation task with imperfect target domains.
arXiv Detail & Related papers (2020-12-03T00:07:40Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA)
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.