One-shot Weakly-Supervised Segmentation in Medical Images
- URL: http://arxiv.org/abs/2111.10773v1
- Date: Sun, 21 Nov 2021 09:14:13 GMT
- Title: One-shot Weakly-Supervised Segmentation in Medical Images
- Authors: Wenhui Lei, Qi Su, Ran Gu, Na Wang, Xinglong Liu, Guotai Wang, Xiaofan
Zhang, Shaoting Zhang
- Abstract summary: We present an innovative framework for 3D medical image segmentation with one-shot and weakly-supervised settings.
A propagation-reconstruction network is proposed to project scribbles from annotated volume to unlabeled 3D images.
A dual-level feature denoising module is designed to refine the scribbles based on anatomical- and pixel-level features.
- Score: 12.184590794655517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks usually require a large number of accurate
annotations to achieve outstanding performance in medical image segmentation.
One-shot segmentation and weakly-supervised learning are promising research
directions that lower labeling effort by learning a new class from only one
annotated image and utilizing coarse labels instead, respectively. Previous
works usually fail to leverage the anatomical structure and suffer from class
imbalance and low contrast problems. Hence, we present an innovative framework
for 3D medical image segmentation with one-shot and weakly-supervised settings.
First, a propagation-reconstruction network is proposed to project scribbles
from the annotated volume to unlabeled 3D images, based on the assumption that
anatomical patterns in different human bodies are similar. Then a dual-level
feature denoising module is designed to refine the scribbles based on
anatomical- and pixel-level features. After expanding the scribbles into pseudo
masks, we can train a segmentation model for the new class with a noisy-label
training strategy. Experiments on one abdomen and one head-and-neck CT dataset
show that the proposed method obtains significant improvements over
state-of-the-art methods and performs robustly even under severe class
imbalance and low contrast.
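Neither the abstract nor this listing includes code, but the scribble-denoising step can be illustrated with a minimal, hypothetical sketch: propagated scribble voxels are kept only when their features are close (by cosine similarity) to a prototype computed from the annotated support scribble. The function name, tensor layout, and similarity threshold below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): keep a propagated scribble voxel
# only if its feature vector is similar enough to the prototype computed from
# the support (annotated) scribble. Threshold and names are assumptions.
import numpy as np

def denoise_scribble(query_feats, query_scribble,
                     support_feats, support_scribble, thresh=0.7):
    """query_feats/support_feats: (C, D, H, W) feature volumes;
    *_scribble: boolean (D, H, W) masks of scribble voxels."""
    # Pixel-level prototype from the annotated (support) scribble.
    proto = support_feats[:, support_scribble].mean(axis=1)            # (C,)
    proto /= np.linalg.norm(proto) + 1e-8

    feats = query_feats[:, query_scribble]                             # (C, N)
    feats = feats / (np.linalg.norm(feats, axis=0, keepdims=True) + 1e-8)
    sim = proto @ feats                                                 # cosine similarity, (N,)

    refined = np.zeros_like(query_scribble)
    refined[query_scribble] = sim > thresh                              # drop dissimilar voxels
    return refined
```

The paper's module also uses anatomical-level features for refinement; this sketch only shows the pixel-level filtering.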
Related papers
- OneSeg: Self-learning and One-shot Learning based Single-slice Annotation for 3D Medical Image Segmentation [36.50258132379276]
We propose a self-learning and one-shot learning based framework for 3D medical image segmentation by annotating only one slice of each 3D image.
Our approach takes two steps: (1) self-learning of a reconstruction network to learn semantic correspondence among 2D slices within 3D images, and (2) representative selection of single slices for one-shot manual annotation.
Our new framework achieves comparable performance with less than 1% annotated data compared with fully supervised methods.
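As a rough illustration of the representative-selection step, the sketch below clusters per-slice embeddings and picks the slice closest to each cluster centre for manual annotation; the use of k-means, the embedding input, and all names are assumptions, not the authors' procedure.

```python
# Hypothetical sketch of representative-slice selection: embed every 2D slice,
# cluster the embeddings, and pick the slice nearest each cluster centre for
# manual annotation. The embedding function and number of clusters are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def select_slices(slice_embeddings, n_annotations=1):
    """slice_embeddings: (n_slices, dim) array, one embedding per 2D slice."""
    km = KMeans(n_clusters=n_annotations, n_init=10).fit(slice_embeddings)
    chosen = []
    for c in km.cluster_centers_:
        dists = np.linalg.norm(slice_embeddings - c, axis=1)
        chosen.append(int(dists.argmin()))      # index of the most central slice
    return sorted(set(chosen))
```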
arXiv Detail & Related papers (2023-09-24T15:35:58Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
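A minimal sketch of the disruption idea, assuming local masking means zeroing random patches and taking Gaussian noise as one example of a low-level perturbation (the paper's exact perturbations and settings may differ):

```python
# Hypothetical sketch of the "disruption" step: zero out random local patches of
# a 3D volume and add a low-level perturbation (Gaussian noise here); a network
# would then be trained to reconstruct the original volume. Patch size, mask
# ratio, and noise level are assumptions, not the paper's settings.
import numpy as np

def disrupt(volume, patch=8, mask_ratio=0.3, noise_std=0.05,
            rng=np.random.default_rng()):
    """volume: (D, H, W) array with D, H, W divisible by `patch`."""
    out = volume + rng.normal(0.0, noise_std, volume.shape)   # low-level perturbation
    d, h, w = (s // patch for s in volume.shape)
    for zi in range(d):
        for yi in range(h):
            for xi in range(w):
                if rng.random() < mask_ratio:                  # local masking
                    out[zi*patch:(zi+1)*patch,
                        yi*patch:(yi+1)*patch,
                        xi*patch:(xi+1)*patch] = 0.0
    return out
```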
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
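The dual-task layout can be pictured as a shared encoder feeding two independent heads, one for segmentation and one for inpainting; the layer sizes and 2D setting below are placeholders rather than the paper's architecture:

```python
# Hypothetical PyTorch sketch of the dual-task layout: one shared encoder feeding
# two independent decoders, one for segmentation and one for inpainting the
# masked lesion region. Layer sizes are placeholders, not the paper's design.
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, in_ch=1, feat=32, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.seg_decoder = nn.Conv2d(feat, n_classes, 1)   # segmentation head
        self.inpaint_decoder = nn.Conv2d(feat, in_ch, 1)   # reconstruction head

    def forward(self, x):
        z = self.encoder(x)                                 # shared features
        return self.seg_decoder(z), self.inpaint_decoder(z)
```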
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on a dataset collected from a local hospital as well as on public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- Anomaly Detection-Inspired Few-Shot Medical Image Segmentation Through Self-Supervision With Supervoxels [23.021720656733088]
We propose a novel anomaly detection-inspired approach to few-shot medical image segmentation.
We use a single foreground prototype to compute anomaly scores for all query pixels.
The segmentation is then performed by thresholding these anomaly scores using a learned threshold.
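A minimal sketch of this prototype-based anomaly scoring, assuming cosine similarity as the score and treating the threshold as a fixed constant rather than the learned one described above:

```python
# Hypothetical sketch of prototype-based anomaly scoring: one foreground
# prototype is pooled from the support features, every query pixel gets an
# anomaly score (negative cosine similarity here), and pixels below a threshold
# are labelled foreground. The threshold value is an assumption.
import numpy as np

def segment_by_anomaly(support_feats, support_mask, query_feats, threshold=-0.5):
    """support_feats/query_feats: (C, H, W); support_mask: boolean (H, W)."""
    proto = support_feats[:, support_mask].mean(axis=1)
    proto /= np.linalg.norm(proto) + 1e-8
    q = query_feats / (np.linalg.norm(query_feats, axis=0, keepdims=True) + 1e-8)
    anomaly = -(proto[:, None, None] * q).sum(axis=0)        # (H, W) anomaly scores
    return anomaly < threshold                                # foreground where score is low
```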
arXiv Detail & Related papers (2022-03-03T22:36:39Z)
- Weakly Supervised Volumetric Segmentation via Self-taught Shape Denoising Model [27.013224147257198]
We propose a novel weakly-supervised segmentation strategy capable of better capturing 3D shape prior in both model prediction and learning.
Our main idea is to extract a self-taught shape representation by leveraging weak labels, and then integrate this representation into segmentation prediction for shape refinement.
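One way to picture such a shape prior is a small mask autoencoder that maps a possibly noisy segmentation prediction to a shape-refined one; the architecture and the way it is applied below are illustrative assumptions, not the paper's model:

```python
# Hypothetical PyTorch sketch of shape refinement: a small mask autoencoder,
# trained on weak labels to map corrupted masks back to plausible shapes, is
# applied to the segmentation network's soft prediction as a 3D shape prior.
import torch.nn as nn

shape_denoiser = nn.Sequential(                  # mask-in, mask-out autoencoder
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv3d(16, 1, 1), nn.Sigmoid())

def refine(prediction):
    """prediction: (N, 1, D, H, W) soft foreground probabilities."""
    return shape_denoiser(prediction)            # shape-refined probabilities
```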
arXiv Detail & Related papers (2021-04-27T10:03:45Z)
- Cascaded Robust Learning at Imperfect Labels for Chest X-ray Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from their peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
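The peer-learning idea is reminiscent of co-teaching; the sketch below updates each network only on the pixels its two peers currently assign low loss, which filters imperfect labels. The selection rule, keep ratio, and function names are assumptions, not the paper's exact scheme.

```python
# Hypothetical co-teaching-style sketch for three peer networks: each network is
# trained only on the pixels that its two peers currently find easy (low loss),
# which down-weights imperfectly labelled regions.
import torch
import torch.nn.functional as F

def peer_training_step(nets, optimizers, images, noisy_masks, keep_ratio=0.7):
    with torch.no_grad():
        losses = [F.cross_entropy(n(images), noisy_masks, reduction='none')
                  for n in nets]                               # per-pixel losses
    for i, (net, opt) in enumerate(zip(nets, optimizers)):
        # pixels the two peer networks agree are "clean" (lowest joint loss)
        peer_loss = sum(l for j, l in enumerate(losses) if j != i)
        k = max(1, int(keep_ratio * peer_loss.numel()))
        thresh = peer_loss.flatten().kthvalue(k).values
        keep = (peer_loss <= thresh).float()
        opt.zero_grad()
        pixel_loss = F.cross_entropy(net(images), noisy_masks, reduction='none')
        loss = (pixel_loss * keep).sum() / keep.sum().clamp(min=1)
        loss.backward()
        opt.step()
```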
arXiv Detail & Related papers (2021-04-05T15:50:16Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach, which requires fewer annotations than supervised learning methods.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.