Confidence-based Visual Dispersal for Few-shot Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2309.15575v2
- Date: Fri, 29 Sep 2023 17:11:30 GMT
- Title: Confidence-based Visual Dispersal for Few-shot Unsupervised Domain
Adaptation
- Authors: Yizhe Xiong, Hui Chen, Zijia Lin, Sicheng Zhao, Guiguang Ding
- Abstract summary: Unsupervised domain adaptation aims to transfer knowledge from a fully-labeled source domain to an unlabeled target domain.
We propose a Confidence-based Visual Dispersal Transfer learning method (C-VisDiT) for FUDA.
We conduct extensive experiments on Office-31, Office-Home, VisDA-C, and DomainNet benchmark datasets and the results demonstrate that the proposed C-VisDiT significantly outperforms state-of-the-art FUDA methods.
- Score: 39.112032738643656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation aims to transfer knowledge from a
fully-labeled source domain to an unlabeled target domain. However, in
real-world scenarios, providing abundant labeled data even in the source domain
can be infeasible due to the difficulty and high expense of annotation. To
address this issue, recent works consider the Few-shot Unsupervised Domain
Adaptation (FUDA) where only a few source samples are labeled, and conduct
knowledge transfer via self-supervised learning methods. Yet existing methods
generally overlook that the sparse label setting hinders learning reliable
source knowledge for transfer. Additionally, the difference in learning
difficulty across target samples is ignored, leaving hard target samples poorly
classified. To tackle both deficiencies, in this paper, we propose a novel
Confidence-based Visual Dispersal Transfer learning method (C-VisDiT) for FUDA.
Specifically, C-VisDiT consists of a cross-domain visual dispersal strategy
that transfers only high-confidence source knowledge for model adaptation and
an intra-domain visual dispersal strategy that guides the learning of hard
target samples with easy ones. We conduct extensive experiments on Office-31,
Office-Home, VisDA-C, and DomainNet benchmark datasets and the results
demonstrate that the proposed C-VisDiT significantly outperforms
state-of-the-art FUDA methods. Our code is available at
https://github.com/Bostoncake/C-VisDiT.
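To make the two strategies more concrete, below is a minimal, hypothetical sketch of confidence-based splitting and easy-to-hard guidance. The threshold, the feature-interpolation scheme, and all function names are illustrative assumptions, not the authors' implementation; the official repository above is authoritative.
```python
# Hypothetical sketch of confidence-based dispersal, NOT the official
# C-VisDiT code. Threshold and mixing coefficient are assumed values.
import torch
import torch.nn.functional as F

def split_by_confidence(logits: torch.Tensor, threshold: float = 0.9):
    """Split sample indices into high- and low-confidence groups."""
    conf, _ = F.softmax(logits, dim=1).max(dim=1)   # per-sample max probability
    high = torch.nonzero(conf >= threshold).squeeze(1)
    low = torch.nonzero(conf < threshold).squeeze(1)
    return high, low

def disperse_hard_with_easy(feats: torch.Tensor, high: torch.Tensor,
                            low: torch.Tensor, alpha: float = 0.7):
    """Pull each hard (low-confidence) sample toward its nearest easy
    (high-confidence) neighbor in feature space -- one possible reading
    of intra-domain visual dispersal."""
    easy, hard = feats[high], feats[low]
    sim = F.normalize(hard, dim=1) @ F.normalize(easy, dim=1).T
    nearest = sim.argmax(dim=1)          # closest easy sample per hard sample
    return alpha * hard + (1 - alpha) * easy[nearest]
```
In this reading, only the high-confidence split of the few labeled source samples would drive cross-domain transfer, while interpolation toward easy neighbors regularizes hard target samples.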
Related papers
- Unsupervised Adaptation of Polyp Segmentation Models via Coarse-to-Fine Self-Supervision [16.027843524655516]
We study a practical problem of Source-Free Domain Adaptation (SFDA), which eliminates the reliance on annotated source data.
Current SFDA methods focus on extracting domain knowledge from the source-trained model but neglect the intrinsic structure of the target domain.
We propose a new SFDA framework, called Region-to-Pixel Adaptation Network (RPANet), which learns region-level and pixel-level discriminative representations through coarse-to-fine self-supervision.
arXiv Detail & Related papers (2023-08-13T02:37:08Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (a toy MMD loss is sketched after this entry).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
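The sketch referenced above: a biased RBF-kernel estimate of squared Maximum Mean Discrepancy between two feature batches. DaC's memory-bank bookkeeping is omitted, and the kernel bandwidth is an assumed hyperparameter.
```python
# Toy RBF-kernel MMD^2 between feature batches x (n, d) and y (m, d);
# sigma is an assumed bandwidth, and the memory bank from DaC is omitted.
import torch

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    def k(a, b):  # k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```
Minimizing this quantity between source-like and target-specific batches pushes their empirical feature distributions together.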
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method that processes low-confidence samples (a toy contrastive loss is sketched after this entry).
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
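The sketch referenced above: a toy InfoNCE-style contrastive loss that could be applied with a low-confidence target sample as the anchor. How anchors, positives, and negatives are actually paired in the paper, and the temperature value, are assumptions here.
```python
# Toy InfoNCE loss with a single anchor; pairing and temperature are assumed.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             negatives: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """anchor/positive: (d,) feature vectors, negatives: (n, d)."""
    a = F.normalize(anchor, dim=0)
    pos = torch.dot(a, F.normalize(positive, dim=0)) / tau
    neg = (F.normalize(negatives, dim=1) @ a) / tau
    logits = torch.cat([pos.unsqueeze(0), neg])     # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```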
- Improving Transferability of Domain Adaptation Networks Through Domain Alignment Layers [1.3766148734487902]
Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain by transferring weak knowledge from a bag of source models.
We propose to embed a Multi-Source version of DomaIn Alignment Layers (MS-DIAL) at different levels of the predictor (a domain-specific normalization sketch follows this entry).
Our approach can improve state-of-the-art MSDA methods, yielding relative gains of up to +30.64% on their classification accuracies.
arXiv Detail & Related papers (2021-09-06T18:41:19Z)
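As a rough illustration of the "domain alignment layer" idea, the sketch below keeps one set of batch-normalization statistics per domain so that each domain's features are normalized toward a shared distribution; the multi-source weighting specific to MS-DIAL is not modeled, and all names are assumptions.
```python
# Domain-specific batch normalization in the spirit of DIAL-style
# alignment layers; a simplification, not the MS-DIAL implementation.
import torch
import torch.nn as nn

class DomainAlignmentBN(nn.Module):
    def __init__(self, num_features: int, num_domains: int = 2):
        super().__init__()
        # separate running statistics (and affine params) per domain
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_domains))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        return self.bns[domain](x)
```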
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective, without any adversarial alignment, for learning a good target classifier (the rotation pretext task is sketched after this entry).
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
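The rotation pretext task referenced above fits in a few lines: rotate every image by 0, 90, 180, and 270 degrees and train a 4-way classifier head to predict the rotation, as sketched below. Function names and batching are assumptions.
```python
# Build a rotation-prediction batch: each image appears 4 times, once per
# rotation, labeled 0-3. A 4-way head is then trained on these labels.
import torch

def make_rotation_batch(images: torch.Tensor):
    """images: (B, C, H, W) -> rotated (4B, C, H, W), labels (4B,)."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels
```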
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a classification model trained over the source data, rather than the source data itself, is available.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)