Effective Label Propagation for Discriminative Semi-Supervised Domain
Adaptation
- URL: http://arxiv.org/abs/2012.02621v1
- Date: Fri, 4 Dec 2020 14:28:19 GMT
- Title: Effective Label Propagation for Discriminative Semi-Supervised Domain
Adaptation
- Authors: Zhiyong Huang, Kekai Sheng, Weiming Dong, Xing Mei, Chongyang Ma,
Feiyue Huang, Dengwen Zhou, Changsheng Xu
- Abstract summary: Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
- Score: 76.41664929948607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised domain adaptation (SSDA) methods have demonstrated great
potential in large-scale image classification tasks when massive labeled data
are available in the source domain but very few labeled samples are provided in
the target domain. Existing solutions usually focus on feature alignment
between the two domains while paying little attention to the discrimination
capability of learned representations in the target domain. In this paper, we
present a novel and effective method, namely Effective Label Propagation (ELP),
to tackle this problem by using effective inter-domain and intra-domain
semantic information propagation. For inter-domain propagation, we propose a
new cycle discrepancy loss to encourage consistency of semantic information
between the two domains. For intra-domain propagation, we propose an effective
self-training strategy to mitigate the noises in pseudo-labeled target domain
data and improve the feature discriminability in the target domain. As a
general method, our ELP can be easily applied to various domain adaptation
approaches and can facilitate their feature discrimination in the target
domain. Experiments on Office-Home and DomainNet benchmarks show ELP
consistently improves the classification accuracy of mainstream SSDA methods by
2%~3%. ELP also improves the performance of UDA methods (81.5% vs. 86.1%) in
experiments on the VisDA-2017 benchmark. Our
source code and pre-trained models will be released soon.
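The abstract names the two components but gives no formulas, so the following PyTorch sketch is only one plausible reading of them: the soft label-propagation "cycle", the function names, and the confidence threshold are all assumptions, not the authors' implementation.

```python
# Minimal sketch of the two ELP ingredients named in the abstract.
# The exact loss forms are NOT given above; everything below is assumed.
import torch
import torch.nn.functional as F

def cycle_discrepancy_loss(src_feat, src_labels, tgt_feat, num_classes):
    """Inter-domain propagation (assumed form): source labels are propagated
    to the target via feature similarity and back again; the cycle should
    recover the original source labels."""
    sim_ts = F.softmax(tgt_feat @ src_feat.t(), dim=1)    # (Nt, Ns)
    sim_st = F.softmax(src_feat @ tgt_feat.t(), dim=1)    # (Ns, Nt)
    one_hot = F.one_hot(src_labels, num_classes).float()  # (Ns, C)
    tgt_soft = sim_ts @ one_hot                           # target soft labels
    src_back = sim_st @ tgt_soft                          # propagate back
    return F.nll_loss(torch.log(src_back + 1e-8), src_labels)

def confident_pseudo_labels(tgt_logits, threshold=0.95):
    """Intra-domain propagation (assumed form): keep only high-confidence
    pseudo-labels to mitigate noise before self-training on the target."""
    probs = F.softmax(tgt_logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf.ge(threshold)
    return pseudo[mask], mask
```

In a training loop, both terms would be added to the usual supervised losses on the labeled source and target samples; the gains reported above come from the authors' full method, not from this sketch.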
Related papers
- Improving Domain Adaptation Through Class Aware Frequency Transformation [15.70058524548143]
Most Unsupervised Domain Adaptation (UDA) algorithms focus on reducing the global domain shift between labelled source and unlabelled target domains.
We propose a novel approach, Class Aware Frequency Transformation (CAFT), based on a traditional image processing technique.
CAFT utilizes pseudo-label-based, class-consistent low-frequency swapping to improve the overall performance of existing UDA algorithms (see the sketch below).
arXiv Detail & Related papers (2024-07-28T18:16:41Z)
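The entry above only names the operation; the NumPy sketch below shows a generic low-frequency amplitude swap in the Fourier domain, in the spirit of Fourier-based domain adaptation. The band radius and the pairing of images by pseudo-label class are assumptions, not CAFT's published procedure.

```python
# Generic low-frequency swap between a target image and a source image.
# In a CAFT-like pipeline, src_img would be chosen so that its (pseudo)
# class matches tgt_img's, keeping the swap class-consistent.
import numpy as np

def low_freq_swap(tgt_img, src_img, radius=0.1):
    """Replace the low-frequency amplitude of tgt_img with src_img's.
    Images are float arrays of shape (H, W, C); radius is the assumed
    fraction of the spectrum treated as 'low frequency'."""
    out = np.empty_like(tgt_img)
    h, w = tgt_img.shape[:2]
    ch, cw = h // 2, w // 2
    rh, rw = int(radius * h), int(radius * w)
    for c in range(tgt_img.shape[2]):
        ft = np.fft.fftshift(np.fft.fft2(tgt_img[..., c]))
        fs = np.fft.fftshift(np.fft.fft2(src_img[..., c]))
        amp_t, pha_t = np.abs(ft), np.angle(ft)
        amp_s = np.abs(fs)
        # Swap only the centered low-frequency band of the amplitude.
        amp_t[ch - rh:ch + rh, cw - rw:cw + rw] = \
            amp_s[ch - rh:ch + rh, cw - rw:cw + rw]
        ft_new = amp_t * np.exp(1j * pha_t)
        out[..., c] = np.real(np.fft.ifft2(np.fft.ifftshift(ft_new)))
    return out
```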
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method that processes low-confidence samples (see the sketch below).
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
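Only the high-level idea is given above; a minimal instance-level contrastive (InfoNCE) loss restricted to low-confidence target samples might look like the following. The threshold, temperature, and two-augmented-view setup are assumptions, and the paper's exact method may differ.

```python
# Sketch: contrastive loss applied only to target samples whose
# predictions are low-confidence. Threshold and temperature are assumed.
import torch
import torch.nn.functional as F

def low_conf_contrastive(logits, feat_q, feat_k, threshold=0.8, tau=0.07):
    """feat_q / feat_k: L2-normalized features of two augmented views."""
    conf = F.softmax(logits, dim=1).max(dim=1).values
    mask = conf < threshold                 # keep low-confidence samples
    if mask.sum() < 2:
        return logits.new_zeros(())         # nothing to contrast
    q, k = feat_q[mask], feat_k[mask]
    sim = q @ k.t() / tau                   # (M, M) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(sim, labels)     # InfoNCE over the two views
```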
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose a Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all evaluated datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features and reduce the domain discrepancy between training and testing sets (see the sketch below).
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
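The summary above leaves the loss unspecified; one common way to instantiate cross-domain contrastive alignment is a class-conditional InfoNCE over source/target pairs, sketched below. The use of target pseudo-labels and the temperature are assumptions.

```python
# Sketch of a class-conditional cross-domain contrastive loss: source and
# target features sharing a (pseudo) class attract, all others repel.
import torch
import torch.nn.functional as F

def cross_domain_nce(src_feat, src_labels, tgt_feat, tgt_pseudo, tau=0.1):
    """src_labels / tgt_pseudo are class indices; features are projected
    onto the unit sphere before comparison."""
    s = F.normalize(src_feat, dim=1)
    t = F.normalize(tgt_feat, dim=1)
    sim = s @ t.t() / tau                                  # (Ns, Nt)
    pos = src_labels.unsqueeze(1).eq(tgt_pseudo.unsqueeze(0)).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    valid = pos.sum(dim=1) > 0          # source rows with >= 1 positive
    if not valid.any():
        return sim.new_zeros(())
    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return loss[valid].mean()
```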
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Re-energizing Domain Discriminator with Sample Relabeling for Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align features and reduce the domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)