Cross-Region Domain Adaptation for Class-level Alignment
- URL: http://arxiv.org/abs/2109.06422v1
- Date: Tue, 14 Sep 2021 04:13:35 GMT
- Title: Cross-Region Domain Adaptation for Class-level Alignment
- Authors: Zhijie Wang, Xing Liu, Masanori Suganuma, Takayuki Okatani
- Abstract summary: We propose a method that applies adversarial training to align two feature distributions in the target domain.
It uses a self-training framework to split each image into two regions, which form the two distributions to align in the feature space.
We term this approach cross-region adaptation (CRA) to distinguish it from previous methods that align distributions across different domains.
- Score: 32.586107376036075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation requires a lot of training data, which necessitates
costly annotation. There have been many studies on unsupervised domain
adaptation (UDA) from one domain to another, e.g., from computer graphics to
real images. However, there is still a gap in accuracy between UDA and
supervised training on native domain data. It is arguably attributable to
class-level misalignment between the source and target domain data. To cope
with this, we propose a method that applies adversarial training to align two
feature distributions in the target domain. It uses a self-training framework
to split each image into two regions (i.e., trusted and untrusted), which form
the two distributions to align in the feature space. We term this approach
cross-region adaptation (CRA) to distinguish it from previous methods that
align distributions across different domains, which we call cross-domain
adaptation (CDA). CRA can be applied after any CDA method. Experimental results
show that it consistently improves the accuracy of the CDA method it is
combined with, establishing a new state of the art.
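As a rough illustration of the CRA idea described in the abstract (a minimal sketch, not the authors' implementation), the code below splits target-image pixels into trusted and untrusted regions by pseudo-label confidence and trains a small discriminator adversarially so that features from the two regions become indistinguishable. The threshold value, the RegionDiscriminator architecture, and all function names are assumptions made for illustration.

```python
# Hypothetical sketch of cross-region adaptation (CRA): pseudo-labels from a
# self-training framework split each target image into a trusted region
# (high-confidence pixels) and an untrusted region, and a per-pixel
# discriminator is trained adversarially to align the two feature distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionDiscriminator(nn.Module):
    """Classifies per-pixel features as trusted (1) or untrusted (0)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)  # (N, 1, H, W) logits


def split_regions(logits: torch.Tensor, threshold: float = 0.9):
    """Split pixels into trusted / untrusted masks by pseudo-label confidence."""
    probs = F.softmax(logits, dim=1)
    confidence, _ = probs.max(dim=1)     # (N, H, W) max class probability
    trusted = confidence >= threshold    # boolean mask of confident pixels
    return trusted, ~trusted


def cra_adversarial_losses(feats, logits, discriminator, threshold=0.9):
    """Return (discriminator loss, alignment loss) for one target batch.

    Assumes `logits` share the spatial size of `feats` (interpolate otherwise).
    """
    trusted, untrusted = split_regions(logits, threshold)
    region_label = trusted.unsqueeze(1).float()   # 1 = trusted, 0 = untrusted

    # Discriminator step: learn to tell the two regions apart (features detached).
    d_out = discriminator(feats.detach())
    d_loss = F.binary_cross_entropy_with_logits(d_out, region_label)

    # Alignment ("generator") step: push untrusted-region features toward the
    # trusted distribution; exclude discriminator params from this update.
    g_out = discriminator(feats)
    untrusted_mask = untrusted.unsqueeze(1).float()
    g_loss = (F.binary_cross_entropy_with_logits(
        g_out, torch.ones_like(g_out), reduction="none") * untrusted_mask
    ).sum() / untrusted_mask.sum().clamp(min=1)
    return d_loss, g_loss
```

In this reading, the alignment loss is added on top of whatever CDA objective the segmentation network is already trained with, which matches the abstract's claim that CRA can be applied after any CDA method.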
Related papers
- Delving into the Continuous Domain Adaptation [12.906272389564593]
Existing domain adaptation methods assume that domain discrepancies are caused by a few discrete attributes and variations.
We argue that this is unrealistic, as real-world datasets cannot plausibly be characterized by a few discrete attributes.
We propose to investigate a new problem, namely Continuous Domain Adaptation.
arXiv Detail & Related papers (2022-08-28T02:32:25Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Unsupervised domain adaptation via double classifiers based on high confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of knowledge transfer from a labeled source domain to an unlabeled target domain.
Many domain adaptation (DA) methods use centroids to align the local distributions of different domains, that is, to align corresponding classes.
This work rethinks what alignment between domains actually means and studies how to achieve such alignment.
arXiv Detail & Related papers (2021-05-11T00:51:31Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different data distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)