Bootstrapping Semi-supervised Medical Image Segmentation with
Anatomical-aware Contrastive Distillation
- URL: http://arxiv.org/abs/2206.02307v1
- Date: Mon, 6 Jun 2022 01:30:03 GMT
- Authors: Chenyu You, Weicheng Dai, Lawrence Staib, James S. Duncan
- Abstract summary: We present ACTION, an Anatomical-aware ConTrastive dIstillatiON framework, for semi-supervised medical image segmentation.
We first develop an iterative contrastive distillation algorithm that softly labels the negatives rather than imposing binary supervision between positive and negative pairs.
We also capture features from the randomly chosen negative set that are more semantically similar to the positives, enforcing the diversity of the sampled data.
- Score: 10.877450596327407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning has shown great promise over annotation scarcity
problems in the context of medical image segmentation. Existing approaches
typically assume a balanced class distribution for both labeled and unlabeled
medical images. However, real-world medical image data are commonly imbalanced
(i.e., multi-class label imbalance), which naturally yields blurry contours and
frequently mislabeled rare objects. Moreover, it remains unclear whether
all negative samples are equally negative. In this work, we present ACTION, an
Anatomical-aware ConTrastive dIstillatiON framework, for semi-supervised
medical image segmentation. Specifically, we first develop an iterative
contrastive distillation algorithm that softly labels the negatives rather than
imposing binary supervision between positive and negative pairs. We also
capture features from the randomly chosen negative set that are more
semantically similar to the positives, to enforce the diversity of the sampled
data. Second, we raise a
more important question: Can we really handle imbalanced samples to yield
better performance? Hence, the key innovation in ACTION is to learn global
semantic relationships across the entire dataset and local anatomical features
among neighbouring pixels with minimal additional memory footprint. During
the training, we introduce anatomical contrast by actively sampling a sparse
set of hard negative pixels, which can generate smoother segmentation
boundaries and more accurate predictions. Extensive experiments across two
benchmark datasets and different unlabeled settings show that ACTION performs
comparably to or better than the current state-of-the-art supervised and
semi-supervised methods. Our code and models will be publicly available.
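The two ideas the abstract highlights — softly labeling negatives with a teacher's similarity distribution instead of binary positive/negative supervision, and actively sampling a sparse set of hard negatives — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; all function names, the cosine-similarity choice, and the temperature value are illustrative assumptions:

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax over a 1-D array of logits.
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def cosine(a, B):
    # Cosine similarity of vector `a` against each row of bank `B`.
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B @ a

def soft_contrastive_loss(student_anchor, candidates,
                          teacher_anchor, teacher_candidates, tau=0.1):
    """Cross-entropy between the student's similarity distribution and the
    teacher's soft targets over the same candidate set. The teacher's
    (temperature-scaled) similarities replace the usual one-hot
    positive/negative labels, so "less negative" candidates receive
    correspondingly higher target mass. Names and tau are illustrative."""
    student_logits = cosine(student_anchor, candidates) / tau
    teacher_logits = cosine(teacher_anchor, teacher_candidates) / tau
    target = np.exp(log_softmax(teacher_logits))   # soft labels, not 0/1
    return -(target * log_softmax(student_logits)).sum()

def hard_negatives(anchor, negatives, k):
    # Sparse set of hard negatives: indices of the k most anchor-similar rows.
    return np.argsort(-cosine(anchor, negatives))[:k]
```

By Gibbs' inequality the loss is minimized when the student's distribution matches the teacher's, which is what makes the soft targets act as distillation supervision; the `hard_negatives` helper sketches how a sparse, actively chosen negative set could be drawn per anchor.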