3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching
between 3D and 2D Networks
- URL: http://arxiv.org/abs/2307.16256v1
- Date: Sun, 30 Jul 2023 15:26:17 GMT
- Title: 3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching
between 3D and 2D Networks
- Authors: Heng Cai, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
- Abstract summary: We propose a framework that can robustly learn from sparse annotation using the cross-teaching of both 3D and 2D networks.
Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods.
- Score: 26.29122638813974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation typically necessitates a large and precisely
annotated dataset. However, obtaining pixel-wise annotation is a
labor-intensive task that requires significant effort from domain experts,
making it challenging to obtain in practical clinical scenarios. In such
situations, reducing the amount of annotation required is a more practical
approach. One feasible direction is sparse annotation, which involves
annotating only a few slices, and has several advantages over traditional weak
annotation methods such as bounding boxes and scribbles, as it preserves exact
boundaries. However, learning from sparse annotation is challenging due to the
scarcity of supervision signals. To address this issue, we propose a framework
that can robustly learn from sparse annotation using the cross-teaching of both
3D and 2D networks. Considering the characteristics of these networks, we
develop two pseudo-label selection strategies: hard-soft confidence
thresholding and consistent label fusion. Our experimental results on the MMWHS
dataset demonstrate that our method outperforms the state-of-the-art (SOTA)
semi-supervised segmentation methods. Moreover, our approach achieves results
that are comparable to the fully-supervised upper bound result.
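The two pseudo-label selection strategies named above can be illustrated with a minimal sketch. This is not the authors' implementation: the exact hard-soft thresholding rule and fusion scheme in the paper differ, and the function name, threshold value, and agreement rule below are assumptions made purely for illustration.

```python
# Illustrative sketch of confidence-based pseudo-label selection for
# cross-teaching: keep a voxel's pseudo label only when both the 2D and
# 3D networks are confident and predict the same class.

def select_pseudo_labels(probs_2d, probs_3d, tau=0.8):
    """probs_2d / probs_3d: per-voxel class-probability lists from the
    two networks. Returns (index, label) pairs kept for cross-teaching;
    voxels below the confidence threshold tau, or on which the two
    networks disagree, are discarded."""
    selected = []
    for i, (p2, p3) in enumerate(zip(probs_2d, probs_3d)):
        l2 = max(range(len(p2)), key=p2.__getitem__)  # argmax of 2D net
        l3 = max(range(len(p3)), key=p3.__getitem__)  # argmax of 3D net
        conf = min(p2[l2], p3[l3])      # both networks must be confident
        if l2 == l3 and conf >= tau:    # consistent-label check
            selected.append((i, l2))
    return selected

# Toy example: 3 voxels, 2 classes.
probs_2d = [[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]]
probs_3d = [[0.90, 0.10], [0.30, 0.70], [0.15, 0.85]]
print(select_pseudo_labels(probs_2d, probs_3d))  # → [(0, 0), (2, 1)]
```

In a cross-teaching setup, pairs selected this way from one network would supervise the other on unannotated slices, so the filter's role is to keep noisy pseudo labels out of the loss.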
Related papers
- Label-Efficient 3D Brain Segmentation via Complementary 2D Diffusion Models with Orthogonal Views [10.944692719150071] (arXiv, 2024-07-17)
  We propose a novel 3D brain segmentation approach using complementary 2D diffusion models.
  Our goal is to achieve reliable segmentation quality without requiring complete labels for each individual subject.
- 2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic Segmentation [92.17700318483745] (arXiv, 2023-11-27)
  We propose an image-guidance network (IGNet) which builds upon the idea of distilling high-level feature information from a domain-adapted, synthetically trained 2D semantic segmentation network.
  IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, boasting up to 98% relative performance to fully supervised training with only 8% labeled points.
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636] (arXiv, 2023-08-03)
  3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
  We propose the first weakly-supervised 3D instance segmentation method that requires only categorical semantic labels as supervision.
  By generating pseudo instance labels from categorical semantic labels, our approach can also help existing methods learn 3D instance segmentation at reduced annotation cost.
- SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings [12.79344668998054] (arXiv, 2023-07-23)
  We propose SwIPE (Segmentation with Implicit Patch Embeddings) to enable accurate local boundary delineation and global shape coherence.
  We show that SwIPE significantly improves over recent implicit approaches and outperforms state-of-the-art discrete methods with over 10x fewer parameters.
- Orthogonal Annotation Benefits Barely-supervised Medical Image Segmentation [24.506059129303424] (arXiv, 2023-03-23)
  Recent trends in semi-supervised learning have boosted the performance of 3D semi-supervised medical image segmentation.
  These views and the intrinsic similarity among adjacent 3D slices inspire us to develop a novel annotation scheme.
  We propose a dual-network paradigm named Dense-Sparse Co-training (DeSCO) that exploits dense pseudo labels in an early stage and sparse labels in a later stage.
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855] (arXiv, 2022-09-16)
  We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
  We design a dual-branch network equipped with an active labeling strategy to maximize the power of tiny parts of labels.
  Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
- Collaborative Propagation on Multiple Instance Graphs for 3D Instance Segmentation with Single-point Supervision [63.429704654271475] (arXiv, 2022-08-10)
  We propose RWSeg, a novel weakly supervised method that requires labeling only one point per object.
  With these sparse weak labels, we introduce a unified framework with two branches to propagate semantic and instance information.
  Specifically, we propose a Cross-graph Competing Random Walks (CRW) algorithm that encourages competition among different instance graphs.
- Hypernet-Ensemble Learning of Segmentation Probability for Medical Image Segmentation with Ambiguous Labels [8.841870931360585] (arXiv, 2021-12-13)
  Deep learning approaches are notoriously overconfident in their predictions, producing highly polarized label probabilities.
  This is often undesirable for applications with inherent label ambiguity, which exists even in human annotations.
  We propose novel methods to improve segmentation probability estimation without sacrificing performance in a real-world scenario.
- Grasp-Oriented Fine-grained Cloth Segmentation without Real Supervision [66.56535902642085] (arXiv, 2021-10-06)
  This paper tackles the problem of fine-grained region detection in deformed clothes using only a depth image.
  We define up to 6 semantic regions of varying extent, including edges on the neckline, sleeve cuffs, and hem, plus top and bottom grasping points.
  We introduce a U-Net-based network to segment and label these parts.
  We show that training our network solely with synthetic data and the proposed DA yields results competitive with models trained on real data.
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656] (arXiv, 2021-04-06)
  We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
  We iteratively conduct training and label propagation, facilitated by a graph propagation module.
  Our results are also comparable to those of the fully supervised counterparts.
- Shape-aware Semi-supervised 3D Semantic Segmentation for Medical Images [24.216869988183092] (arXiv, 2020-07-21)
  We propose a shape-aware semi-supervised segmentation strategy to leverage abundant unlabeled data and to enforce a geometric shape constraint on the segmentation output.
  We develop a multi-task deep network that jointly predicts the semantic segmentation and the signed distance map (SDM) of object surfaces.
  Experiments show that our method outperforms current state-of-the-art approaches with improved shape estimation.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.