Context-Aware Pseudo-Label Refinement for Source-Free Domain Adaptive
Fundus Image Segmentation
- URL: http://arxiv.org/abs/2308.07731v1
- Date: Tue, 15 Aug 2023 12:11:33 GMT
- Title: Context-Aware Pseudo-Label Refinement for Source-Free Domain Adaptive
Fundus Image Segmentation
- Authors: Zheang Huai, Xinpeng Ding, Yi Li, and Xiaomeng Li
- Abstract summary: Source-free unsupervised domain adaptation (SF-UDA) aims at adapting a model trained on the source side to align the target distribution with only the source model and unlabeled target data.
We propose a context-aware pseudo-label refinement method for SF-UDA.
Experiments on cross-domain fundus images indicate that our approach yields state-of-the-art results.
- Score: 15.175385504917125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the domain adaptation problem, source data may be unavailable to the
target client side due to privacy or intellectual property issues. Source-free
unsupervised domain adaptation (SF-UDA) aims at adapting a model trained on the
source side to align the target distribution with only the source model and
unlabeled target data. The source model usually produces noisy and
context-inconsistent pseudo-labels on the target domain, i.e., neighbouring
regions that have a similar visual appearance are annotated with different
pseudo-labels. This observation motivates us to refine pseudo-labels with
context relations. Another observation is that features of the same class tend
to form a cluster despite the domain gap, which implies context relations can
be readily calculated from feature distances. To this end, we propose a
context-aware pseudo-label refinement method for SF-UDA. Specifically, a
context-similarity learning module is developed to learn context relations.
Next, pseudo-labels are revised using the learned context
relations. Further, we propose calibrating the revised pseudo-labels to
compensate for erroneous revisions caused by inaccurate context relations.
Additionally, we adopt a pixel-level and class-level denoising scheme to select
reliable pseudo-labels for domain adaptation. Experiments on cross-domain
fundus images indicate that our approach yields state-of-the-art results.
Code is available at https://github.com/xmed-lab/CPR.
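
The abstract outlines a pipeline of context-similarity learning, pseudo-label revision, calibration, and pixel-/class-level denoising. The PyTorch snippet below is a minimal sketch of the general idea, not the authors' implementation (see the linked repository for that): it derives context relations from cosine feature similarity, revises the source model's soft predictions with them, blends the result with the original predictions as a stand-in for calibration, and keeps only confident pixels. The function name, the temperature, the 50/50 blend, and the confidence threshold are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above), not the authors' CPR code.
import torch
import torch.nn.functional as F

def refine_pseudo_labels(features, probs, temperature=0.1, threshold=0.75):
    """features: (N, C) per-pixel features; probs: (N, K) source-model softmax outputs."""
    # Context relations: features of the same class tend to cluster despite
    # the domain gap, so cosine similarity between pixel features serves as
    # a proxy for context similarity.
    feats = F.normalize(features, dim=1)
    affinity = torch.softmax(feats @ feats.t() / temperature, dim=1)   # (N, N)

    # Pseudo-label revision: aggregate predictions from context-similar pixels.
    revised = affinity @ probs                                         # (N, K)

    # Calibration (illustrative): blend revised and original predictions to
    # compensate for erroneous revisions caused by inaccurate relations.
    calibrated = 0.5 * revised + 0.5 * probs

    # Pixel-level denoising: keep only confident pseudo-labels for adaptation.
    confidence, labels = calibrated.max(dim=1)
    reliable = confidence > threshold
    return labels, reliable
```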
Related papers
- De-Confusing Pseudo-Labels in Source-Free Domain Adaptation [14.954662088592762]
Source-free domain adaptation aims to adapt a source-trained model to an unlabeled target domain without access to the source data.
We introduce a novel noise-learning approach tailored to address noise distribution in domain adaptation settings.
arXiv Detail & Related papers (2024-01-03T10:07:11Z) - Overcoming Label Noise for Source-free Unsupervised Video Domain
Adaptation [39.71690595469969]
We present a self-training based source-free video domain adaptation approach.
We use the source pre-trained model to generate pseudo-labels for the target domain samples.
We further enhance the adaptation performance by implementing a teacher-student framework (a generic self-training sketch of this scheme appears after this list).
arXiv Detail & Related papers (2023-11-30T14:06:27Z) - Local-Global Pseudo-label Correction for Source-free Domain Adaptive
Medical Image Segmentation [5.466962214217334]
Domain shift is a commonly encountered issue in medical imaging solutions.
Concerns regarding patient privacy and potential degradation of image quality have led to an increased focus on source-free domain adaptation.
We propose a novel approach called the local-global pseudo-label correction (LGDA) method for source-free domain adaptive medical image segmentation.
arXiv Detail & Related papers (2023-08-28T05:29:59Z) - Contrastive Model Adaptation for Cross-Condition Robustness in Semantic
Segmentation [58.17907376475596]
We investigate normal-to-adverse condition model adaptation for semantic segmentation.
Our method -- CMA -- leverages paired normal- and adverse-condition images to learn condition-invariant features via contrastive learning.
We achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks.
arXiv Detail & Related papers (2023-03-09T11:48:29Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the best of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z) - Source-Free Domain Adaptive Fundus Image Segmentation with Denoised
Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data to utilize its distribution information for alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much a few labeled target samples can further help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Learning from Scale-Invariant Examples for Domain Adaptation in Semantic
Segmentation [6.320141734801679]
We propose a novel approach of exploiting scale-invariance property of semantic segmentation model for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, the semantic labeling of objects and stuff should remain unchanged regardless of their scale (given the context).
We show that this constraint is violated on images of the target domain, and hence can be used to transfer labels between differently scaled patches.
arXiv Detail & Related papers (2020-07-28T19:40:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.