Orthogonal Annotation Benefits Barely-supervised Medical Image
Segmentation
- URL: http://arxiv.org/abs/2303.13090v1
- Date: Thu, 23 Mar 2023 08:10:25 GMT
- Title: Orthogonal Annotation Benefits Barely-supervised Medical Image
Segmentation
- Authors: Heng Cai, Shumeng Li, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
- Abstract summary: Recent trends in semi-supervised learning have boosted the performance of 3D semi-supervised medical image segmentation.
The complementary views within a 3D volume and the intrinsic similarity among adjacent 3D slices inspire us to develop a novel annotation scheme.
We propose a dual-network paradigm named Dense-Sparse Co-training (DeSCO) that exploits dense pseudo labels in the early stage and sparse labels in the later stage.
- Score: 24.506059129303424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent trends in semi-supervised learning have significantly boosted the
performance of 3D semi-supervised medical image segmentation. Compared with 2D
images, 3D medical volumes contain information from different directions, e.g.,
the transverse, sagittal, and coronal planes, naturally providing complementary
views. These complementary views and the intrinsic similarity among adjacent 3D
slices inspire us to develop a novel annotation scheme and a corresponding
semi-supervised model for effective segmentation. Specifically, we first propose
orthogonal annotation, which labels only two orthogonal slices in a labeled
volume and thus significantly relieves the annotation burden. Then, we perform
registration to obtain initial pseudo labels for the sparsely labeled volumes.
Subsequently, by introducing unlabeled volumes, we propose a dual-network
paradigm named Dense-Sparse Co-training (DeSCO) that exploits dense pseudo
labels in the early stage and sparse labels in the later stage, while enforcing
consistent outputs from the two networks. Experimental results on three
benchmark datasets validate the effectiveness of our method in both
segmentation performance and annotation efficiency. For example, with only 10
annotated slices, our method reaches a Dice score of up to 86.93% on the KiTS19
dataset.
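The orthogonal annotation described above can be made concrete with a small sketch: given a 3D volume, only one transverse slice (fixed z) and one sagittal slice (fixed x) are marked as labeled. The function name, axis convention (D, H, W) = (z, y, x), and slice indices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def orthogonal_annotation_mask(shape, z_slice, x_slice):
    """Boolean supervision mask for a volume of shape (D, H, W):
    one transverse slice (fixed z) and one sagittal slice (fixed x)
    are labeled; all other voxels stay unlabeled.
    Axis convention (z, y, x) is an illustrative assumption."""
    mask = np.zeros(shape, dtype=bool)
    mask[z_slice, :, :] = True   # transverse plane: H*W voxels
    mask[:, :, x_slice] = True   # sagittal plane: D*H voxels
    return mask

mask = orthogonal_annotation_mask((64, 128, 128), z_slice=32, x_slice=64)
# The two planes overlap along one line of H voxels, so the labeled
# voxel count is H*W + D*H - H = 128*128 + 64*128 - 128.
print(mask.sum())  # 24448 of 1048576 voxels, i.e. ~2.3% of the volume
```

This makes the annotation saving explicit: two slices cover only a small fraction of the voxels a dense 3D label would require.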
Related papers
- Label-Efficient 3D Brain Segmentation via Complementary 2D Diffusion Models with Orthogonal Views [10.944692719150071]
We propose a novel 3D brain segmentation approach using complementary 2D diffusion models.
Our goal is to achieve reliable segmentation quality without requiring complete labels for each individual subject.
arXiv Detail & Related papers (2024-07-17T06:14:53Z)
- Leveraging Fixed and Dynamic Pseudo-labels for Semi-supervised Medical Image Segmentation [7.9449756510822915]
Semi-supervised medical image segmentation has gained growing interest due to its ability to utilize unannotated data.
The current state-of-the-art methods mostly rely on pseudo-labeling within a co-training framework.
We propose a novel approach where multiple pseudo-labels for the same unannotated image are used to learn from the unlabeled data.
arXiv Detail & Related papers (2024-05-12T11:30:01Z)
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636]
3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
We propose the first weakly-supervised 3D instance segmentation method that only requires categorical semantic labels as supervision.
By generating pseudo instance labels from categorical semantic labels, our approach can also assist existing methods in learning 3D instance segmentation at a reduced annotation cost.
arXiv Detail & Related papers (2023-08-03T12:30:52Z)
- 3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching between 3D and 2D Networks [26.29122638813974]
We propose a framework that can robustly learn from sparse annotation using the cross-teaching of both 3D and 2D networks.
Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods.
arXiv Detail & Related papers (2023-07-30T15:26:17Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
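An entropy-style regularizer of the kind this entry alludes to can be sketched in a few lines: penalizing high-entropy per-point class distributions pushes pseudo-labels toward confident predictions. This is a generic minimal sketch of entropy regularization, not the paper's actual loss; the function name and epsilon are illustrative.

```python
import numpy as np

def entropy_regularizer(probs, eps=1e-8):
    """Mean Shannon entropy of per-point class distributions (N, C).
    Minimizing this term encourages confident, low-entropy predictions,
    narrowing the gap to hard pseudo-labels."""
    p = np.clip(probs, eps, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

uniform = np.full((4, 3), 1 / 3)               # maximally uncertain points
peaked = np.array([[0.98, 0.01, 0.01]] * 4)    # confident points
print(entropy_regularizer(uniform) > entropy_regularizer(peaked))  # True
```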
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- You Only Need One Thing One Click: Self-Training for Weakly Supervised 3D Scene Understanding [107.06117227661204]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our model is also compatible with 3D instance segmentation when equipped with a point-clustering strategy.
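The graph-based label propagation mentioned above can be illustrated with a toy sketch: labels spread iteratively from clicked points to their neighbors, with annotated nodes clamped to their ground truth. The adjacency structure, iteration count, and clamping scheme here are illustrative assumptions, not the paper's propagation module.

```python
import numpy as np

def propagate_labels(adj, labels, labeled_mask, n_iter=50):
    """Iterative label propagation on a graph: each node repeatedly takes
    the degree-normalized average of its neighbors' class scores, while
    clicked (annotated) nodes are clamped back to their one-hot labels."""
    Y = labels.astype(float).copy()
    deg = adj.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Y = adj @ Y / np.maximum(deg, 1)
        Y[labeled_mask] = labels[labeled_mask]  # clamp annotated nodes
    return Y.argmax(axis=1)

# toy chain graph of 5 nodes: node 0 clicked as class 0, node 4 as class 1
adj = np.eye(5, k=1) + np.eye(5, k=-1)
labels = np.zeros((5, 2))
labels[0, 0] = 1
labels[4, 1] = 1
clicked = np.array([True, False, False, False, True])
print(propagate_labels(adj, labels, clicked))
```

Nodes near the class-0 click end up labeled 0 and nodes near the class-1 click end up labeled 1, mimicking how one click per object can supervise a whole region.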
arXiv Detail & Related papers (2023-03-26T13:57:00Z)
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855]
We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
We design a dual-branch network equipped with an active labeling strategy to maximize the value of a tiny fraction of labels.
Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
arXiv Detail & Related papers (2022-09-16T07:59:04Z)
- PA-Seg: Learning from Point Annotations for 3D Medical Image Segmentation using Contextual Regularization and Cross Knowledge Distillation [14.412073730567137]
We propose to annotate a segmentation target with only seven points in 3D medical images, and design a two-stage weakly supervised learning framework PA-Seg.
In the first stage, we employ the geodesic distance transform to expand the seed points and provide a stronger supervision signal.
In the second stage, we use predictions obtained by the model pre-trained in the first stage as pseudo labels.
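The seed-expansion idea in the first stage can be sketched with a simplified stand-in: grow each annotated point into a small region by thresholding the distance to the nearest seed. PA-Seg uses the geodesic distance (which follows image intensities); the Euclidean version below is only an illustrative approximation, and all names and parameters are assumptions.

```python
import numpy as np

def expand_seeds(seed_mask, radius):
    """Grow annotated seed voxels into larger supervision regions by
    including every voxel within `radius` of some seed. This Euclidean
    thresholding is a simplified stand-in for the geodesic distance
    transform used in PA-Seg."""
    coords = np.argwhere(seed_mask)                      # (K, 3) seed positions
    grid = np.indices(seed_mask.shape).reshape(3, -1).T  # (N, 3) all voxels
    # squared distance from every voxel to its nearest seed
    d2 = ((grid[:, None, :] - coords[None, :, :]) ** 2).sum(-1).min(1)
    return (d2 <= radius ** 2).reshape(seed_mask.shape)

seeds = np.zeros((32, 32, 32), dtype=bool)
seeds[16, 16, 16] = True                  # a single annotated point
expanded = expand_seeds(seeds, radius=3)
print(expanded.sum())  # a small ball of voxels around the seed
```

A geodesic variant would replace the Euclidean distance with a path cost that accumulates intensity differences, so the expanded region respects object boundaries.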
arXiv Detail & Related papers (2022-08-11T07:00:33Z)
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)
- Cascaded Robust Learning at Imperfect Labels for Chest X-ray Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, each of which can effectively learn useful information from its peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.