Bootstrapping Semi-supervised Medical Image Segmentation with
Anatomical-aware Contrastive Distillation
- URL: http://arxiv.org/abs/2206.02307v1
- Date: Mon, 6 Jun 2022 01:30:03 GMT
- Title: Bootstrapping Semi-supervised Medical Image Segmentation with
Anatomical-aware Contrastive Distillation
- Authors: Chenyu You, Weicheng Dai, Lawrence Staib, James S. Duncan
- Abstract summary: We present ACTION, an Anatomical-aware ConTrastive dIstillatiON framework, for semi-supervised medical image segmentation.
We first develop an iterative contrastive distillation algorithm that softly labels the negatives instead of imposing binary supervision between positive and negative pairs.
We also capture, from a randomly chosen negative set, features that are more semantically similar than the positives, in order to enforce diversity in the sampled data.
- Score: 10.877450596327407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning has shown great promise in addressing annotation
scarcity for medical image segmentation. Existing approaches typically assume a
balanced class distribution for both labeled and unlabeled medical images. In
reality, however, medical image data are commonly imbalanced (i.e., multi-class
label imbalance), which leads models to produce blurry contours and to mislabel
rare objects. Moreover, it remains unclear whether all negative samples are
equally negative. In this work, we present ACTION, an Anatomical-aware
ConTrastive dIstillatiON framework for semi-supervised medical image
segmentation. Specifically, we first develop an iterative contrastive
distillation algorithm that softly labels the negatives instead of imposing
binary supervision between positive and negative pairs. We also capture, from a
randomly chosen negative set, features that are more semantically similar than
the positives, in order to enforce diversity in the sampled data. Second, we
raise a more important question: can imbalanced samples really be handled in a
way that yields better performance? The key innovation in ACTION is therefore
to learn global semantic relationships across the entire dataset and local
anatomical features among neighbouring pixels, with minimal additional memory
footprint. During training, we introduce anatomical contrast by actively
sampling a sparse set of hard negative pixels, which yields smoother
segmentation boundaries and more accurate predictions. Extensive experiments on
two benchmark datasets under different unlabeled-data settings show that ACTION
performs comparably to or better than current state-of-the-art supervised and
semi-supervised methods. Our code and models will be publicly available.
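To make the two ideas above concrete, here is a minimal, hedged sketch in PyTorch of (a) soft-labeled contrastive distillation and (b) anatomical contrast with a sparse hard-negative pixel set. It illustrates the described mechanisms only and is not the authors' released implementation; all tensor names, shapes, and hyperparameters are assumptions.

import torch
import torch.nn.functional as F

def soft_contrastive_distillation(student_q, teacher_q, bank, temperature=0.1):
    # Soft labels for negatives: the student's similarity distribution over a
    # memory bank is trained to match the teacher's softened distribution,
    # instead of a binary positive/negative target.
    s_logits = student_q @ bank.t() / temperature        # (B, K)
    t_logits = teacher_q @ bank.t() / temperature        # (B, K)
    s_logp = F.log_softmax(s_logits, dim=1)
    t_prob = F.softmax(t_logits, dim=1).detach()         # teacher is not updated
    return F.kl_div(s_logp, t_prob, reduction="batchmean")

def anatomical_contrast(anchor, pixel_feats, pixel_labels, anchor_label,
                        num_hard=256, temperature=0.07):
    # Anatomical contrast with a sparse hard-negative set: among pixels whose
    # (pseudo-)label differs from the anchor's, keep only the most similar
    # ones and contrast them against same-class pixels (a standard InfoNCE
    # form; the exact selection rule here is an assumption).
    sim = pixel_feats @ anchor                  # (N,), features assumed L2-normalized
    neg_sim = sim[pixel_labels != anchor_label]
    pos_sim = sim[pixel_labels == anchor_label]
    k = min(num_hard, neg_sim.numel())
    hard_neg, _ = neg_sim.topk(k)               # hardest negatives only
    logits = torch.cat([pos_sim, hard_neg]) / temperature
    log_prob = logits - torch.logsumexp(logits, dim=0)
    return -log_prob[: pos_sim.numel()].mean()  # attract same-class pixels

In such a sketch, the distillation term would carry the global, dataset-level relationships while the hard-negative term carries the local anatomical contrast; in practice the two losses would be combined with weighting coefficients.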
Related papers
- Sample-Specific Debiasing for Better Image-Text Models [6.301766237907306]
Self-supervised representation learning on image-text data facilitates crucial medical applications, such as image classification, visual grounding, and cross-modal retrieval.
One common approach involves contrasting semantically similar (positive) and dissimilar (negative) pairs of data points.
Drawing negative samples uniformly from the training data set introduces false negatives, i.e., samples that are treated as dissimilar but belong to the same class.
In healthcare data, the underlying class distribution is nonuniform, implying that false negatives occur at a highly variable rate.
arXiv Detail & Related papers (2023-04-25T22:23:41Z) - ACTION++: Improving Semi-supervised Medical Image Segmentation with
Adaptive Anatomical Contrast [10.259713750306458]
We present ACTION++, an improved contrastive learning framework with adaptive anatomical contrast for semi-supervised medical segmentation.
We argue that blindly adopting a constant temperature $\tau$ in the contrastive loss on long-tailed medical data is not optimal.
We show that ACTION++ achieves state-of-the-art across two semi-supervised settings.
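As a hedged illustration of the temperature point (not ACTION++'s actual scheduler), the sketch below varies the temperature over training with a simple cosine schedule inside a standard InfoNCE loss; the schedule and its bounds are assumptions.

import math
import torch
import torch.nn.functional as F

def scheduled_tau(step, total_steps, tau_min=0.05, tau_max=0.2):
    # One simple way to vary the temperature over training (cosine anneal
    # from tau_max down to tau_min); purely illustrative.
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return tau_min + (tau_max - tau_min) * cos

def info_nce(query, positive, negatives, tau):
    # query, positive: (B, D); negatives: (K, D); all assumed L2-normalized.
    pos = (query * positive).sum(dim=1, keepdim=True)    # (B, 1)
    neg = query @ negatives.t()                          # (B, K)
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)

A smaller temperature penalizes hard negatives more sharply, so letting it change as the feature space matures is the lever an adaptive scheme adjusts.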
arXiv Detail & Related papers (2023-04-05T18:33:18Z) - Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
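The variance-reduction idea can be illustrated generically: when the mean per-pixel loss is estimated from a small pixel sample, stratifying the sample by class gives an unbiased estimate with lower variance than uniform sampling. The sketch below is a generic demonstration, not ARCO's actual construction.

import torch

def uniform_estimate(pixel_loss, n):
    # Plain Monte Carlo estimate of the mean pixel loss from n random pixels.
    idx = torch.randint(0, pixel_loss.numel(), (n,))
    return pixel_loss[idx].mean()

def stratified_estimate(pixel_loss, labels, n):
    # Stratified estimate: sample within each class, then weight each
    # stratum mean by its true class proportion.
    classes = labels.unique()
    per_class = max(1, n // len(classes))
    estimate = torch.zeros(())
    for c in classes:
        members = (labels == c).nonzero(as_tuple=True)[0]
        pick = members[torch.randint(0, members.numel(), (per_class,))]
        weight = members.numel() / pixel_loss.numel()
        estimate = estimate + weight * pixel_loss[pick].mean()
    return estimate

Running both estimators repeatedly on a class-imbalanced loss map typically shows a visibly smaller spread for the stratified version, which is the sense in which variance reduction helps pixel/voxel-level tasks.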
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - Min-Max Similarity: A Contrastive Learning Based Semi-Supervised
Learning Network for Surgical Tools Segmentation [0.0]
We propose a semi-supervised segmentation network based on contrastive learning.
In contrast to the previous state-of-the-art, we introduce a contrastive learning form of dual-view training.
Our proposed method outperforms state-of-the-art semi-supervised and fully supervised segmentation algorithms consistently.
arXiv Detail & Related papers (2022-03-29T01:40:26Z) - Semi-supervised Semantic Segmentation with Directional Context-aware
Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to accomplish the consistency in a pixel-to-pixel manner.
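One hedged reading of the pixel-to-pixel directional consistency, as a simplified sketch that omits the negative term of the full DC Loss (the confidence source and masking are assumptions): the lower-confidence view of each pixel is pulled toward the higher-confidence view, which is detached so only one direction receives gradient.

import torch
import torch.nn.functional as F

def directional_consistency(f1, f2, conf1, conf2):
    # f1, f2: (N, D) features of the same pixels under two views;
    # conf1, conf2: (N,) per-pixel confidences (e.g. max softmax score).
    f1 = F.normalize(f1, dim=1)
    f2 = F.normalize(f2, dim=1)
    pull_1_to_2 = conf2 > conf1                    # view 2 is more trusted here
    loss_12 = 1 - (f1[pull_1_to_2] * f2[pull_1_to_2].detach()).sum(dim=1)
    loss_21 = 1 - (f2[~pull_1_to_2] * f1[~pull_1_to_2].detach()).sum(dim=1)
    return torch.cat([loss_12, loss_21]).mean()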
arXiv Detail & Related papers (2021-06-27T03:42:40Z) - Contrastive Attraction and Contrastive Repulsion for Representation
Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z) - Delving into Inter-Image Invariance for Unsupervised Visual
Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z) - ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
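The alternation described for ATSO can be sketched as a simple training loop. The helper callables fine_tune and pseudo_label are placeholders supplied by the caller, not an actual ATSO API; the fixed two-way split and list-based data handling are assumptions.

def asynchronous_teacher_student(model, labeled, unlabeled,
                                 fine_tune, pseudo_label, rounds=4):
    # Split the unlabeled pool into two fixed halves.
    half = len(unlabeled) // 2
    subset_a, subset_b = list(unlabeled[:half]), list(unlabeled[half:])
    labels_a = pseudo_label(model, subset_a)
    labels_b = pseudo_label(model, subset_b)
    for _ in range(rounds):
        # Fine-tune on one half (plus the labeled data) ...
        model = fine_tune(model, labeled + list(zip(subset_a, labels_a)))
        # ... and refresh pseudo-labels only on the other half, so a model
        # never relabels the subset it has just been trained on.
        labels_b = pseudo_label(model, subset_b)
        # Swap roles for the next round.
        subset_a, subset_b = subset_b, subset_a
        labels_a, labels_b = labels_b, labels_a
    return model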
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.