Efficient Pre-trained Features and Recurrent Pseudo-Labeling in Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2104.13486v1
- Date: Tue, 27 Apr 2021 21:35:28 GMT
- Title: Efficient Pre-trained Features and Recurrent Pseudo-Labeling in Unsupervised Domain Adaptation
- Authors: Youshan Zhang and Brian D. Davison
- Abstract summary: We show how to efficiently select the best pre-trained features from seventeen well-known ImageNet models in unsupervised DA problems.
We propose a recurrent pseudo-labeling model using the best pre-trained features (termed PRPL) to improve classification performance.
- Score: 6.942003070153651
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Domain adaptation (DA) mitigates the domain shift problem when transferring
knowledge from one annotated domain to another similar but different unlabeled
domain. However, existing models often utilize one of the ImageNet models as
the backbone without exploring others, and fine-tuning or retraining the
backbone ImageNet model is also time-consuming. Moreover, pseudo-labeling has
been used to improve performance in the target domain, but how to generate
confident pseudo labels and explicitly align domain distributions has not been
well addressed. In this paper, we show how to efficiently select the
best pre-trained features from seventeen well-known ImageNet models in
unsupervised DA problems. In addition, we propose a recurrent pseudo-labeling
model using the best pre-trained features (termed PRPL) to improve
classification performance. To show the effectiveness of PRPL, we evaluate it
on three benchmark datasets, Office+Caltech-10, Office-31, and Office-Home.
Extensive experiments show that our model reduces computation time and boosts
the mean accuracy to 98.1%, 92.4%, and 81.2%, respectively, substantially
outperforming the state of the art.
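The abstract describes two stages: rank frozen pre-trained backbones by how well their features classify the source domain, then iteratively re-train on confident target pseudo labels. The sketch below illustrates both stages; the backbone shortlist, the logistic-regression classifier, the confidence threshold, and the number of rounds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Illustrative shortlist; the paper ranks seventeen ImageNet models.
BACKBONES = {
    "resnet50": models.resnet50,
    "resnet101": models.resnet101,
    "resnet152": models.resnet152,
}

@torch.no_grad()
def extract_features(backbone_fn, loader, device="cpu"):
    """One frozen forward pass per image; the backbone is never fine-tuned."""
    net = backbone_fn(weights="DEFAULT").to(device).eval()
    net.fc = nn.Identity()  # keep pooled features, drop the ImageNet head
    feats, labels = [], []
    for x, y in loader:
        feats.append(net(x.to(device)).cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

def recurrent_pseudo_labeling(Xs, ys, Xt, rounds=5, thresh=0.9):
    """Re-train on source features plus confident target pseudo labels."""
    clf = LogisticRegression(max_iter=1000)
    X, y = Xs, ys
    for _ in range(rounds):
        clf.fit(X, y)
        probs = clf.predict_proba(Xt)
        conf = probs.max(axis=1)
        pseudo = clf.classes_[probs.argmax(axis=1)]
        keep = conf >= thresh  # only confident target samples join training
        X = np.concatenate([Xs, Xt[keep]])
        y = np.concatenate([ys, pseudo[keep]])
    return clf
```

Backbone selection then reduces to scoring each candidate's frozen features with a quick source-only classifier and keeping the best performer, which avoids any fine-tuning cost.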
Related papers
- Hybrid diffusion models: combining supervised and generative pretraining for label-efficient fine-tuning of segmentation models [55.2480439325792]
We propose a new pretext task, which is to simultaneously perform image denoising and mask prediction on the first domain.
We show that fine-tuning a model pretrained using this approach leads to better results than fine-tuning a similar model trained using either supervised or unsupervised pretraining.
arXiv Detail & Related papers (2024-08-06T20:19:06Z)
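The pretext task above combines a generative and a supervised signal in one objective. A minimal sketch, assuming a hypothetical two-headed model and an illustrative weighting `alpha`:

```python
import torch
import torch.nn.functional as F

def hybrid_pretext_loss(model, img, mask, noise_std=0.1, alpha=0.5):
    """Joint denoising + mask-prediction objective (illustrative weighting)."""
    noisy = img + noise_std * torch.randn_like(img)  # corrupt the input
    denoised, mask_logits = model(noisy)             # hypothetical two heads
    l_denoise = F.mse_loss(denoised, img)            # generative term
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask)  # supervised term
    return alpha * l_denoise + (1.0 - alpha) * l_mask
```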
- Unsupervised Domain Adaptation for Semantic Segmentation with Pseudo Label Self-Refinement [9.69089112870202]
We propose an auxiliary pseudo-label refinement network (PRN) that refines pseudo labels online and localizes the pixels whose predicted labels are likely to be noisy.
We evaluate our approach on benchmark datasets with three different domain shifts, and our approach consistently performs significantly better than the previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-25T20:31:07Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
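One common way to realize the regularization described above is to pair the pseudo-label cross-entropy with a Shannon-entropy penalty on the predictions; the formulation and weight `lam` below are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, pseudo_labels, lam=0.1):
    """Cross-entropy on pseudo-labels plus an entropy penalty on all points."""
    ce = F.cross_entropy(logits, pseudo_labels)
    p = logits.softmax(dim=1)
    entropy = -(p * (p + 1e-8).log()).sum(dim=1).mean()  # sharpen predictions
    return ce + lam * entropy
```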
- Domain-knowledge Inspired Pseudo Supervision (DIPS) for Unsupervised Image-to-Image Translation Models to Support Cross-Domain Classification [16.4151067682813]
This paper introduces a new method called Domain-knowledge Inspired Pseudo Supervision (DIPS).
DIPS uses domain-informed Gaussian Mixture Models to generate pseudo annotations to enable the use of traditional supervised metrics.
It proves its effectiveness by outperforming various GAN evaluation metrics, including FID, when selecting the optimal saved checkpoint model.
arXiv Detail & Related papers (2023-03-18T02:42:18Z)
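The mechanism above, pseudo annotations generated from Gaussian Mixture Models, can be sketched with scikit-learn; fitting one mixture per class on labeled features is an assumption about the setup, not the authors' exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_pseudo_annotate(src_feats, src_labels, tgt_feats):
    """Fit one GMM per class on labeled features, label targets by likelihood."""
    classes = np.unique(src_labels)
    gmms = [GaussianMixture(n_components=1, random_state=0)
            .fit(src_feats[src_labels == c]) for c in classes]
    scores = np.stack([g.score_samples(tgt_feats) for g in gmms], axis=1)
    return classes[scores.argmax(axis=1)]  # pseudo annotation per target sample
```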
- Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient labeled data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning by extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach which leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
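Combining the two supervision signals above can be sketched as a classification loss on confident self-generated pseudo-labels plus a masked-reconstruction loss on raw pixels; the two-headed model, masking scheme, and weighting below are assumptions, not MUST's exact objective.

```python
import torch
import torch.nn.functional as F

def must_style_loss(model, img, mask_ratio=0.5, thresh=0.7, beta=1.0):
    """Pseudo-label classification plus masked image reconstruction (sketch)."""
    logits, _ = model(img)                  # hypothetical two-headed model
    conf, pseudo = logits.softmax(dim=1).max(dim=1)
    keep = conf > thresh                    # only confident pseudo-labels train
    l_cls = F.cross_entropy(logits[keep], pseudo[keep]) if keep.any() else 0.0

    # Mask random patches of the raw image and reconstruct them.
    mask = (torch.rand_like(img[:, :1]) < mask_ratio).float()
    _, recon = model(img * (1 - mask))
    l_mim = F.mse_loss(recon * mask, img * mask)
    return l_cls + beta * l_mim
```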
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data to utilize its distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
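A simple denoising rule in this spirit keeps a prediction only when the frozen source model is confident and two augmented views agree; the threshold and the agreement test are assumptions, not the paper's exact criteria.

```python
import torch

@torch.no_grad()
def denoise_pseudo_labels(probs_weak, probs_strong, thresh=0.9):
    """Keep pseudo labels where the source model is confident and consistent."""
    conf, label = probs_weak.max(dim=1)          # per-pixel confidence + label
    agree = label == probs_strong.argmax(dim=1)  # two augmented views agree
    mask = (conf > thresh) & agree               # train only on surviving pixels
    return label, mask
```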
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We prove that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- Data Augmentation with norm-VAE for Unsupervised Domain Adaptation [26.889303784575805]
We learn a unified classifier for both domains within a high-dimensional homogeneous feature space without explicit domain adaptation.
We employ the effective Selective Pseudo-Labelling (SPL) techniques to take advantage of the unlabelled samples in the target domain.
We propose a novel generative model norm-VAE to generate synthetic features for the target domain as a data augmentation strategy.
arXiv Detail & Related papers (2020-12-01T21:41:08Z)
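The augmentation idea above, sampling synthetic target-domain features from a VAE, can be sketched with a plain feature-space VAE; norm-VAE's specific modification to the reparameterization is not reproduced here, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FeatureVAE(nn.Module):
    """Plain VAE over feature vectors, used to sample synthetic features."""
    def __init__(self, dim, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar
```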
- Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation [6.320141734801679]
We propose a novel approach of exploiting scale-invariance property of semantic segmentation model for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, the semantic labeling of objects and stuff (given context) should be unchanged regardless of their scale.
We show that this constraint is violated on target-domain images and hence can be used to transfer labels between differently scaled patches.
arXiv Detail & Related papers (2020-07-28T19:40:45Z)
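The scale-invariance assumption above turns into a consistency loss: predictions on a downscaled image should match the downscaled predictions on the original. A minimal sketch, where the scale factor and the KL form are assumptions:

```python
import torch.nn.functional as F

def scale_consistency_loss(model, img, scale=0.5):
    """Penalize disagreement between predictions at two input scales."""
    p_full = model(img)  # (N, C, H, W) segmentation logits
    small = F.interpolate(img, scale_factor=scale, mode="bilinear",
                          align_corners=False)
    p_small = model(small)
    target = F.interpolate(p_full.softmax(dim=1), size=p_small.shape[-2:],
                           mode="bilinear", align_corners=False)
    return F.kl_div(p_small.log_softmax(dim=1), target, reduction="batchmean")
```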
- PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized Embedding Models [20.62501560076402]
PERL: A representation learning model that extends contextualized word embedding models such as BERT with pivot-based fine-tuning.
PERL outperforms strong baselines across 22 sentiment classification domain adaptation setups.
It yields effective reduced-size models and increases model stability.
arXiv Detail & Related papers (2020-06-16T11:14:06Z)