S4T: Source-free domain adaptation for semantic segmentation via
self-supervised selective self-training
- URL: http://arxiv.org/abs/2107.10140v1
- Date: Wed, 21 Jul 2021 15:18:01 GMT
- Title: S4T: Source-free domain adaptation for semantic segmentation via
self-supervised selective self-training
- Authors: Viraj Prabhu, Shivam Khare, Deeksha Kartik, Judy Hoffman
- Abstract summary: We focus on source-free domain adaptation for semantic segmentation, wherein a source model must adapt itself to a new target domain given only unlabeled target data.
We propose Self-Supervised Selective Self-Training (S4T), a source-free adaptation algorithm that first uses the model's pixel-level predictive consistency across diverse views of each target image along with model confidence to classify pixel predictions as either reliable or unreliable.
S4T matches or improves upon the state-of-the-art in source-free adaptation on 3 standard benchmarks for semantic segmentation within a single epoch of adaptation.
- Score: 14.086066389856173
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most modern approaches for domain adaptive semantic segmentation rely on
continued access to source data during adaptation, which may be infeasible due
to computational or privacy constraints. We focus on source-free domain
adaptation for semantic segmentation, wherein a source model must adapt itself
to a new target domain given only unlabeled target data. We propose
Self-Supervised Selective Self-Training (S4T), a source-free adaptation
algorithm that first uses the model's pixel-level predictive consistency across
diverse views of each target image along with model confidence to classify
pixel predictions as either reliable or unreliable. Next, the model is
self-trained, using predicted pseudolabels for reliable predictions and
pseudolabels inferred via a selective interpolation strategy for unreliable
ones. S4T matches or improves upon the state-of-the-art in source-free
adaptation on 3 standard benchmarks for semantic segmentation within a single
epoch of adaptation.
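The two-step recipe in the abstract (classify pixel predictions as reliable via cross-view consistency plus confidence, then self-train on pseudolabels) can be sketched in a minimal NumPy form. This is only an illustration of the reliability-masking idea, not the paper's implementation: the view-alignment step, the confidence threshold `conf_thresh`, the `IGNORE` index, and the fallback of simply masking unreliable pixels (where the paper instead infers labels via its selective interpolation strategy) are all assumptions made here for brevity.

```python
import numpy as np

IGNORE = 255  # hypothetical ignore index for unreliable pixels

def s4t_pseudolabels(view_probs, conf_thresh=0.9):
    """Sketch of S4T-style reliability masking.

    view_probs: array of shape (V, C, H, W) -- softmax outputs of the
    source model over V augmented views of one target image, assumed
    already aligned back to a common reference frame.
    """
    preds = view_probs.argmax(axis=1)                  # (V, H, W) per-view class maps
    consistent = (preds == preds[0]).all(axis=0)       # pixels where all views agree
    mean_probs = view_probs.mean(axis=0)               # (C, H, W) view-averaged prediction
    confident = mean_probs.max(axis=0) >= conf_thresh  # high-confidence pixels
    reliable = consistent & confident                  # reliable = consistent AND confident
    labels = mean_probs.argmax(axis=0)                 # pseudolabel = averaged prediction
    labels[~reliable] = IGNORE  # the paper instead infers these via selective interpolation
    return labels, reliable
```

In a self-training loop, the returned `labels` would serve as segmentation targets (with `IGNORE` excluded from the loss), so only pixels that are both cross-view consistent and confident drive the update.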
Related papers
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with
Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
arXiv Detail & Related papers (2023-08-06T03:28:34Z)
- Contrastive Model Adaptation for Cross-Condition Robustness in Semantic
Segmentation [58.17907376475596]
We investigate normal-to-adverse condition model adaptation for semantic segmentation.
Our method -- CMA -- leverages such image pairs to learn condition-invariant features via contrastive learning.
We achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks.
arXiv Detail & Related papers (2023-03-09T11:48:29Z)
- Dual Moving Average Pseudo-Labeling for Source-Free Inductive Domain
Adaptation [45.024029784248825]
Unsupervised domain adaptation reduces the reliance on data annotation in deep learning by adapting knowledge from a source to a target domain.
For privacy and efficiency concerns, source-free domain adaptation extends unsupervised domain adaptation by adapting a pre-trained source model to an unlabeled target domain.
We propose a new semi-supervised fine-tuning method named Dual Moving Average Pseudo-Labeling (DMAPL) for source-free inductive domain adaptation.
arXiv Detail & Related papers (2022-12-15T23:20:13Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised
Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to the source domain data to utilize its distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis
Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a trained classification model is available, instead of access to the source data.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Domain Adaptation Using Class Similarity for Robust Speech Recognition [24.951852740214413]
This paper proposes a novel adaptation method for deep neural network (DNN) acoustic model using class similarity.
Experiments showed that our approach outperforms fine-tuning using one-hot labels on both accent and noise adaptation task.
arXiv Detail & Related papers (2020-11-05T12:26:43Z)
- Learning from Scale-Invariant Examples for Domain Adaptation in Semantic
Segmentation [6.320141734801679]
We propose a novel approach of exploiting scale-invariance property of semantic segmentation model for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, the semantic labeling of an object or region should remain unchanged regardless of its scale (given context).
We show that this constraint is violated over images of the target domain, and hence can be exploited to transfer labels between differently scaled patches.
arXiv Detail & Related papers (2020-07-28T19:40:45Z) - Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain
Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
arXiv Detail & Related papers (2020-03-08T12:37:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.