Adversarial Bi-Regressor Network for Domain Adaptive Regression
- URL: http://arxiv.org/abs/2209.09943v2
- Date: Wed, 17 Jul 2024 18:11:10 GMT
- Title: Adversarial Bi-Regressor Network for Domain Adaptive Regression
- Authors: Haifeng Xia, Pu Perry Wang, Toshiaki Koike-Akino, Ye Wang, Philip Orlik, Zhengming Ding
- Abstract summary: It is essential to learn a cross-domain regressor to mitigate the domain shift.
This paper proposes a novel method, Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model.
- Score: 52.5168835502987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) aims to transfer the knowledge of a well-labeled source domain to facilitate unlabeled target learning. When turning to specific tasks such as indoor (Wi-Fi) localization, it is essential to learn a cross-domain regressor to mitigate the domain shift. This paper proposes a novel method, Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model. Specifically, a discrepant bi-regressor architecture is developed to maximize the difference between the two regressors and thereby discover uncertain target instances far from the source distribution; an adversarial training mechanism between the feature extractor and the dual regressors then produces domain-invariant representations. To further bridge a large domain gap, a domain-specific augmentation module is designed to synthesize two intermediate domains, one source-similar and one target-similar, to gradually eliminate the original domain mismatch. Empirical studies on two cross-domain regression benchmarks illustrate the power of our method in solving the domain adaptive regression (DAR) problem.
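The bi-regressor adversarial scheme described in the abstract can be sketched as a two-step minimax loop. The code below is an illustrative toy implementation, not the authors' released code: the network sizes, optimizers, and synthetic source/target batches are all assumptions, and the domain-specific augmentation module is omitted. Step A updates the two regressors to fit source labels while maximizing their disagreement on target inputs; step B updates the shared feature extractor to minimize that disagreement, pushing target features toward the source distribution.

```python
# Toy sketch of the discrepant bi-regressor + adversarial training idea
# (an illustration under assumed shapes/hyperparameters, not ABRNet itself).
import torch
import torch.nn as nn

torch.manual_seed(0)

feat = nn.Sequential(nn.Linear(8, 16), nn.ReLU())   # shared feature extractor F
reg1 = nn.Linear(16, 1)                             # regressor R1
reg2 = nn.Linear(16, 1)                             # regressor R2

opt_f = torch.optim.SGD(feat.parameters(), lr=0.05)
opt_r = torch.optim.SGD(list(reg1.parameters()) + list(reg2.parameters()), lr=0.05)
mse = nn.MSELoss()

def discrepancy(z):
    """Mean absolute gap between the two regressors' predictions."""
    return (reg1(z) - reg2(z)).abs().mean()

xs, ys = torch.randn(32, 8), torch.randn(32, 1)     # labeled source batch
xt = torch.randn(32, 8) + 2.0                       # unlabeled, shifted target batch

for _ in range(50):
    # Step A: freeze F (detach), update R1/R2 --
    # fit the source labels while maximizing target disagreement.
    zs, zt = feat(xs).detach(), feat(xt).detach()
    loss_r = mse(reg1(zs), ys) + mse(reg2(zs), ys) - discrepancy(zt)
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()

    # Step B: freeze R1/R2 (only opt_f steps), update F --
    # minimize target disagreement to make features domain-invariant.
    loss_f = discrepancy(feat(xt))
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()

final_gap = discrepancy(feat(xt)).item()
```

Target instances on which R1 and R2 disagree are exactly those far from the source distribution, which is why maximizing then minimizing this gap acts as the adversarial signal in place of an explicit domain discriminator.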
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and hard to adapt to the target domain.
We propose a contrastive adversarial training (CAT) approach that leverages the labeled source domain samples to reinforce and regulate the feature generation for the target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z) - Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID [58.46907388691056]
We argue that bridging the source and target domains can be utilized to tackle the UDA re-ID task.
We propose an Intermediate Domain Module (IDM) to generate intermediate domains' representations on-the-fly.
Our proposed method outperforms the state of the art by a large margin on all the common UDA re-ID tasks.
arXiv Detail & Related papers (2021-08-05T07:19:46Z) - Adversarial Consistent Learning on Partial Domain Adaptation of
PlantCLEF 2020 Challenge [26.016647703500883]
We develop adversarial consistent learning ($ACL$) in a unified deep architecture for partial domain adaptation.
It consists of source domain classification loss, adversarial learning loss, and feature consistency loss.
We find the shared categories of two domains via down-weighting the irrelevant categories in the source domain.
arXiv Detail & Related papers (2020-09-19T19:57:41Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z) - Unsupervised Domain Adaptation with Progressive Domain Augmentation [34.887690018011675]
We propose a novel unsupervised domain adaptation method based on progressive domain augmentation.
The proposed method generates virtual intermediate domains, progressively augmenting the source domain to bridge the source-target domain divergence.
We conduct experiments on multiple domain adaptation tasks, and the results show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-04-03T18:45:39Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates learning on an unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.