Source-Free Domain Adaptive Fundus Image Segmentation with
Class-Balanced Mean Teacher
- URL: http://arxiv.org/abs/2307.09973v1
- Date: Fri, 14 Jul 2023 09:26:19 GMT
- Title: Source-Free Domain Adaptive Fundus Image Segmentation with
Class-Balanced Mean Teacher
- Authors: Longxiang Tang, Kai Li, Chunming He, Yulun Zhang, Xiu Li
- Abstract summary: This paper studies source-free domain adaptive fundus image segmentation.
It aims to adapt a pretrained fundus segmentation model to a target domain using unlabeled images.
- Score: 37.72463382440212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies source-free domain adaptive fundus image segmentation
which aims to adapt a pretrained fundus segmentation model to a target domain
using unlabeled images. This is a challenging task because it is highly risky
to adapt a model only using unlabeled data. Most existing methods tackle this
task mainly by designing techniques to carefully generate pseudo labels from
the model's predictions and use the pseudo labels to train the model. While
often obtaining positive adaptation effects, these methods suffer from two major
issues. First, they tend to be fairly unstable: incorrect pseudo labels that
emerge abruptly can have a catastrophic impact on the model. Second, they
fail to consider the severe class imbalance of fundus images where the
foreground (e.g., cup) region is usually very small. This paper aims to address
these two issues by proposing the Class-Balanced Mean Teacher (CBMT) model.
CBMT addresses the instability issue by proposing a weak-strong augmented mean
teacher learning scheme where only the teacher model generates pseudo labels
from weakly augmented images to train a student model that takes strongly
augmented images as input. The teacher is updated as the moving average of the
instantly trained student, which could be noisy. This prevents the teacher
model from being abruptly impacted by incorrect pseudo-labels. For the class
imbalance issue, CBMT proposes a novel loss calibration approach to highlight
foreground classes according to global statistics. Experiments show that CBMT
well addresses these two issues and outperforms existing methods on multiple
benchmarks.
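The two mechanisms the abstract describes — a teacher updated as a moving average of the student, and a loss calibration that re-weights classes from global statistics — can be sketched in plain Python. This is a minimal, illustrative sketch under simplifying assumptions: the function names, the inverse-frequency weighting scheme, and the toy label map are hypothetical, not the paper's exact formulation.

```python
from collections import Counter

def ema_update(teacher_w, student_w, momentum=0.99):
    """Teacher parameters track the student as an exponential moving average.
    Averaging over many student updates keeps a single batch of incorrect
    pseudo labels from abruptly corrupting the teacher."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_w, student_w)]

def class_balanced_weights(pseudo_labels, n_classes, eps=1e-6):
    """Inverse-frequency class weights computed from global label statistics,
    so a small foreground region (e.g. the optic cup) is emphasized relative
    to the dominant background. Normalized so the weights average to 1."""
    counts = Counter(pseudo_labels)
    total = max(len(pseudo_labels), 1)
    raw = [1.0 / (counts.get(c, 0) / total + eps) for c in range(n_classes)]
    scale = n_classes / sum(raw)
    return [w * scale for w in raw]

# Toy run: the teacher drifts only slightly toward a (possibly noisy) student.
teacher = ema_update([1.0, 1.0], [0.0, 0.0])           # -> [0.99, 0.99]

# A fundus-like label map: 95% background (class 0), 5% cup (class 1).
labels = [0] * 95 + [1] * 5
weights = class_balanced_weights(labels, n_classes=2)  # cup weight >> background
```

In a full training loop, the teacher would produce pseudo labels from weakly augmented images, the student would be trained on strongly augmented copies with the class-balanced loss, and `ema_update` would be applied after each student step.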
Related papers
- Focus on Your Target: A Dual Teacher-Student Framework for
Domain-adaptive Semantic Segmentation [210.46684938698485]
We study unsupervised domain adaptation (UDA) for semantic segmentation.
We find that, by decreasing/increasing the proportion of training samples from the target domain, the 'learning ability' is strengthened/weakened.
We propose a novel dual teacher-student (DTS) framework and equip it with a bidirectional learning strategy.
arXiv Detail & Related papers (2023-03-16T05:04:10Z)
- Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Barely-Supervised Learning: Semi-Supervised Learning with very few
labeled images [16.905389887406894]
We analyze in depth the behavior of a state-of-the-art semi-supervised method, FixMatch, which relies on a weakly-augmented version of an image to obtain a supervision signal.
We show that it frequently fails in barely-supervised scenarios, due to a lack of training signal when no pseudo-label can be predicted with high confidence.
We propose a method to leverage self-supervised methods that provides training signal in the absence of confident pseudo-labels.
arXiv Detail & Related papers (2021-12-22T16:29:10Z)
- An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human
Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We learn two networks to mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images.
arXiv Detail & Related papers (2020-11-25T03:29:52Z)
- Background Splitting: Finding Rare Classes in a Sea of Background [55.03789745276442]
We focus on the real-world problem of training accurate deep models for image classification of a small number of rare categories.
In these scenarios, almost all images belong to the background category in the dataset (>95% of the dataset is background).
We demonstrate that both standard fine-tuning approaches and state-of-the-art approaches for training on imbalanced datasets do not produce accurate deep models in the presence of this extreme imbalance.
arXiv Detail & Related papers (2020-08-28T23:05:15Z)
- Temporal Self-Ensembling Teacher for Semi-Supervised Object Detection [9.64328205496046]
This paper focuses on Semi-Supervised Object Detection (SSOD).
The teacher model serves a dual role as a teacher and a student.
The class imbalance issue in SSOD hinders an efficient knowledge transfer from teacher to student.
arXiv Detail & Related papers (2020-07-13T01:17:25Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and the fine-tuned model to update the labels on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- Extreme Consistency: Overcoming Annotation Scarcity and Domain Shifts [2.707399740070757]
Supervised learning has proved effective for medical image analysis.
However, it can utilize only the small labeled portion of the data, failing to leverage the large amounts of unlabeled data that are often available in medical image datasets.
arXiv Detail & Related papers (2020-04-15T15:32:01Z)
- Continual Local Replacement for Few-shot Learning [13.956960291580938]
The goal of few-shot learning is to learn a model that can recognize novel classes based on one or a few training examples.
It is challenging mainly due to two aspects: (1) it lacks a good feature representation of novel classes; (2) a few labeled examples cannot accurately represent the true data distribution.
A novel continual local replacement strategy is proposed to address the data deficiency problem.
arXiv Detail & Related papers (2020-01-23T04:26:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.