Extreme Consistency: Overcoming Annotation Scarcity and Domain Shifts
- URL: http://arxiv.org/abs/2004.11966v1
- Date: Wed, 15 Apr 2020 15:32:01 GMT
- Title: Extreme Consistency: Overcoming Annotation Scarcity and Domain Shifts
- Authors: Gaurav Fotedar, Nima Tajbakhsh, Shilpa Ananth, and Xiaowei Ding
- Abstract summary: Supervised learning has proved effective for medical image analysis,
but it can utilize only the small labeled portion of the data,
failing to leverage the large amounts of unlabeled data often available in medical image datasets.
- Score: 2.707399740070757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised learning has proved effective for medical image analysis. However,
it can utilize only the small labeled portion of data; it fails to leverage the
large amounts of unlabeled data that is often available in medical image
datasets. Supervised models are further handicapped by domain shifts, when the
labeled dataset, despite being large enough, fails to cover different protocols
or ethnicities. In this paper, we introduce \emph{extreme consistency}, which
overcomes the above limitations, by maximally leveraging unlabeled data from
the same or a different domain in a teacher-student semi-supervised paradigm.
Extreme consistency is the process of sending an extreme transformation of a
given image to the student network and then constraining its prediction to be
consistent with the teacher network's prediction for the untransformed image.
The extreme nature of our consistency loss distinguishes our method from
related works that yield suboptimal performance by exercising only mild
prediction consistency. Our method is 1) auto-didactic, as it requires no extra
expert annotations; 2) versatile, as it handles both domain shift and limited
annotation problems; 3) generic, as it is readily applicable to classification,
segmentation, and detection tasks; and 4) simple to implement, as it requires
no adversarial training. We evaluate our method for the tasks of lesion and
retinal vessel segmentation in skin and fundus images. Our experiments
demonstrate a significant performance gain over both modern supervised networks
and recent semi-supervised models. This performance is attributed to the strong
regularization enforced by extreme consistency, which enables the student
network to learn how to handle extreme variants of both labeled and unlabeled
images. This enhances the network's ability to tackle the inevitable same- and
cross-domain data variability during inference.
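The consistency mechanism described in the abstract can be sketched in a few lines. The toy thresholding "networks", the flip-plus-intensity-shift transformation, and the mean-squared consistency loss below are illustrative assumptions for a segmentation-style setting, not the paper's actual architecture or loss.

```python
import numpy as np

def extreme_transform(img):
    """An 'extreme' augmentation: horizontal flip plus a strong intensity shift."""
    return np.fliplr(img) * 1.5 + 0.3

def undo_spatial(pred):
    """Undo only the spatial part of the transform so prediction maps align pixel-wise."""
    return np.fliplr(pred)

def consistency_loss(img, teacher, student):
    t_pred = teacher(img)                        # teacher sees the clean image
    s_pred = student(extreme_transform(img))     # student sees the extreme variant
    return float(np.mean((t_pred - undo_spatial(s_pred)) ** 2))

# Toy per-pixel "networks": threshold at the image mean. Because the intensity
# shift is monotone, a perfectly consistent student reproduces the teacher's
# map after spatial re-alignment, yielding zero loss.
model = lambda x: (x > x.mean()).astype(float)
img = np.arange(16, dtype=float).reshape(4, 4)
print(consistency_loss(img, teacher=model, student=model))  # → 0.0
```

In training, this loss would be minimized for the student on unlabeled images while the teacher's weights are held fixed (or updated as a moving average), which is what constrains the student's prediction on the extreme variant to agree with the teacher's prediction on the untransformed image.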
Related papers
- Cross-head mutual Mean-Teaching for semi-supervised medical image
segmentation [6.738522094694818]
Semi-supervised medical image segmentation (SSMIS) has witnessed substantial advancements by leveraging limited labeled data and abundant unlabeled data.
Existing state-of-the-art (SOTA) methods encounter challenges in accurately predicting labels for the unlabeled data.
We propose a novel Cross-head mutual mean-teaching Network (CMMT-Net) incorporating strong-weak data augmentation.
arXiv Detail & Related papers (2023-10-08T09:13:04Z) - An End-to-End Framework For Universal Lesion Detection With Missing
Annotations [24.902835211573628]
We present a novel end-to-end framework for mining unlabeled lesions while simultaneously training the detector.
Our framework follows the teacher-student paradigm. High-confidence predictions are combined with partially-labeled ground truth for training the student model.
arXiv Detail & Related papers (2023-03-27T09:16:10Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - MIPR:Automatic Annotation of Medical Images with Pixel Rearrangement [7.39560318487728]
We propose a novel approach that addresses the lack of annotated data from another angle, called medical image pixel rearrangement (MIPR for short).
The MIPR combines image-editing and pseudo-label technology to obtain labeled data.
Experiments on the ISIC18 show that data annotated by our method yields segmentation performance equal to or even better than that of doctors' annotations.
arXiv Detail & Related papers (2022-04-22T05:54:14Z) - Adversarial Dual-Student with Differentiable Spatial Warping for
Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves the performance and state-of-the-art results are achieved on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z) - Semi-weakly Supervised Contrastive Representation Learning for Retinal
Fundus Images [0.2538209532048867]
We propose a semi-weakly supervised contrastive learning framework for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
arXiv Detail & Related papers (2021-08-04T15:50:09Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Delving into Inter-Image Invariance for Unsupervised Visual
Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z) - Graph Neural Networks for Unsupervised Domain Adaptation of
Histopathological Image Analytics [22.04114134677181]
We present a novel method for the unsupervised domain adaptation for histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
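The alternating scheme ATSO describes can be illustrated structurally. The 1-D threshold "model" and the toy `fine_tune` routine below are stand-ins for a real segmentation network and its training step, not ATSO's actual code; the point is the control flow, in which a subset is never trained on pseudo-labels it generated for itself.

```python
def make_model(threshold):
    """Toy classifier: label 1 if the input exceeds the threshold."""
    return lambda x: 1 if x > threshold else 0

def fine_tune(xs, ys):
    """Toy update: place the threshold between the two label groups."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    boundary = (min(pos) + max(neg)) / 2 if pos and neg else 0.5
    return make_model(boundary)

unlabeled = [0.1, 0.2, 0.8, 0.9]
subset_a, subset_b = unlabeled[::2], unlabeled[1::2]  # the two partitions

model = make_model(0.5)                   # initial model from the labeled data
labels_a = [model(x) for x in subset_a]   # initial pseudo-labels for subset A
for _ in range(3):
    # Fine-tune on subset A's pseudo-labels, then use the *updated* model to
    # refresh subset B's labels; the roles swap each half-round, which is the
    # asynchrony that keeps a model from reinforcing its own labels.
    model = fine_tune(subset_a, labels_a)
    labels_b = [model(x) for x in subset_b]
    model = fine_tune(subset_b, labels_b)
    labels_a = [model(x) for x in subset_a]
```

On this toy data the alternation converges quickly; in the real method, each half-round is a full fine-tuning pass of the segmentation network.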
arXiv Detail & Related papers (2020-06-24T04:05:12Z) - Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.