Weakly Supervised Volumetric Segmentation via Self-taught Shape
Denoising Model
- URL: http://arxiv.org/abs/2104.13082v1
- Date: Tue, 27 Apr 2021 10:03:45 GMT
- Title: Weakly Supervised Volumetric Segmentation via Self-taught Shape
Denoising Model
- Authors: Qian He, Shuailin Li and Xuming He
- Abstract summary: We propose a novel weakly-supervised segmentation strategy capable of better capturing 3D shape prior in both model prediction and learning.
Our main idea is to extract a self-taught shape representation by leveraging weak labels, and then integrate this representation into segmentation prediction for shape refinement.
- Score: 27.013224147257198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Weakly supervised segmentation is an important problem in medical image
analysis due to the high cost of pixelwise annotation. Prior methods often
focus on weak labels for 2D images and exploit few of the structural cues in
volumetric medical images. To address this, we propose a novel
weakly-supervised segmentation strategy capable of better capturing 3D shape
prior in both model prediction and learning. Our main idea is to extract a
self-taught shape representation by leveraging weak labels, and then integrate
this representation into segmentation prediction for shape refinement. To this
end, we design a deep network consisting of a segmentation module and a shape
denoising module, which are trained by an iterative learning strategy.
Moreover, we introduce a weak annotation scheme with a hybrid label design for
volumetric images, which improves model learning without increasing the overall
annotation cost. The empirical experiments show that our approach outperforms
existing SOTA strategies on three organ segmentation benchmarks with
distinctive shape properties. Notably, we can achieve strong performance with
even 10% labeled slices, which is significantly superior to other methods.
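As a loose illustration of the shape-refinement idea (not the paper's actual network), composing a segmentation prediction with a denoising step can be sketched on a toy 2D slice, using a 3x3 median (majority) filter as a stand-in for the learned shape denoising module:

```python
import numpy as np
from scipy.ndimage import median_filter

# Clean reference shape: an 8x8 square inside a 12x12 slice.
clean = np.zeros((12, 12), dtype=bool)
clean[2:10, 2:10] = True

# Simulate a noisy segmentation prediction with isolated errors.
noisy = clean.copy()
noisy[0, 0] = True      # isolated false positive far from the shape
noisy[5, 5] = False     # isolated hole inside the shape

# Stand-in for the shape denoising module: 3x3 majority vote.
refined = median_filter(noisy, size=3)
```

The filter removes the isolated false positive and fills the interior hole; the paper's denoising module is learned rather than hand-crafted, but the composition of prediction and shape refinement follows the same pattern.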
Related papers
- Volumetric Medical Image Segmentation via Scribble Annotations and Shape
Priors [3.774643767869751]
We propose Scribble2D5, a scribble-based volumetric image segmentation method that tackles 3D anisotropic image segmentation.
To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend semantic information from scribbles.
Also, we propose an optional add-on component, which incorporates the shape prior information from unpaired segmentation masks to further improve model accuracy.
arXiv Detail & Related papers (2023-10-12T07:17:14Z) - SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings [12.79344668998054]
We propose SwIPE (Segmentation with Implicit Patch Embeddings) to enable accurate local boundary delineation and global shape coherence.
We show that SwIPE significantly improves over recent implicit approaches and outperforms state-of-the-art discrete methods with over 10x fewer parameters.
arXiv Detail & Related papers (2023-07-23T20:55:11Z) - Semi-Supervised Single-View 3D Reconstruction via Prototype Shape Priors [79.80916315953374]
We propose SSP3D, a semi-supervised framework for 3D reconstruction.
We introduce an attention-guided prototype shape prior module for guiding realistic object reconstruction.
Our approach also performs well when transferring to real-world Pix3D datasets under labeling ratios of 10%.
arXiv Detail & Related papers (2022-09-30T11:19:25Z) - PA-Seg: Learning from Point Annotations for 3D Medical Image
Segmentation using Contextual Regularization and Cross Knowledge Distillation [14.412073730567137]
We propose to annotate a segmentation target with only seven points in 3D medical images, and design a two-stage weakly supervised learning framework PA-Seg.
In the first stage, we employ geodesic distance transform to expand the seed points to provide more supervision signal.
In the second stage, we use predictions obtained by the model pre-trained in the first stage as pseudo labels.
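The first-stage seed expansion can be illustrated with a minimal geodesic distance transform. The Dijkstra-style sketch below is an assumption about the general technique (2D for brevity, with an illustrative intensity-difference cost), not the paper's implementation:

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=1.0):
    """Dijkstra-style geodesic distance transform on a 2D image.
    Step cost = 1 (unit grid step) + lam * |intensity difference|."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    pq = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + lam * abs(image[nr, nc] - image[r, c])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return dist
```

On a homogeneous region the distance reduces to the grid distance, while intensity edges inflate the cost, so thresholding the map expands seeds within the target while respecting boundaries.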
arXiv Detail & Related papers (2022-08-11T07:00:33Z) - One Sketch for All: One-Shot Personalized Sketch Segmentation [84.45203849671003]
We present the first one-shot personalized sketch segmentation method.
We aim to segment all sketches belonging to the same category, given a single exemplar sketch with part annotations.
We preserve the part semantics embedded in the exemplar and remain robust to input style and abstraction.
arXiv Detail & Related papers (2021-12-20T20:10:44Z) - One-shot Weakly-Supervised Segmentation in Medical Images [12.184590794655517]
We present an innovative framework for 3D medical image segmentation with one-shot and weakly-supervised settings.
A propagation-reconstruction network is proposed to project scribbles from annotated volume to unlabeled 3D images.
A dual-level feature denoising module is designed to refine the scribbles based on anatomical- and pixel-level features.
arXiv Detail & Related papers (2021-11-21T09:14:13Z) - One Thing One Click: A Self-Training Approach for Weakly Supervised 3D
Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
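A minimal sketch of graph-based label propagation in this spirit (Zhou-style label spreading on a toy affinity graph; the function name, parameters, and graph are illustrative, not the paper's module):

```python
import numpy as np

def propagate_labels(W, y_seed, alpha=0.9, iters=50):
    """Iterative label spreading on an affinity graph.
    W: (n, n) symmetric affinity matrix; y_seed: (n, c) one-hot rows for
    the few clicked/labeled nodes, zero rows for unlabeled nodes."""
    d = W.sum(axis=1, keepdims=True)
    S = W / np.maximum(d, 1e-12)          # row-normalized transition matrix
    Y = y_seed.astype(float).copy()
    for _ in range(iters):
        # Blend neighborhood averages with the fixed seed labels.
        Y = alpha * (S @ Y) + (1 - alpha) * y_seed
    return Y.argmax(axis=1)

# Toy graph: two fully connected triangles, one labeled node per object.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
y_seed = np.zeros((6, 2))
y_seed[0, 0] = 1.0      # one click on object 0
y_seed[3, 1] = 1.0      # one click on object 1
labels = propagate_labels(W, y_seed)
```

One labeled point per object is enough here: the seed labels spread through each connected cluster, mirroring the "one point per object" annotation budget.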
arXiv Detail & Related papers (2021-04-06T02:27:25Z) - Three Ways to Improve Semantic Segmentation with Self-Supervised Depth
Estimation [90.87105131054419]
We present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains.
arXiv Detail & Related papers (2020-12-19T21:18:03Z) - Weakly-supervised Learning For Catheter Segmentation in 3D Frustum
Ultrasound [74.22397862400177]
We propose a novel Frustum ultrasound based catheter segmentation method.
The proposed method achieved state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z) - Shape-aware Semi-supervised 3D Semantic Segmentation for Medical Images [24.216869988183092]
We propose a shape-aware semi-supervised segmentation strategy to leverage abundant unlabeled data and to enforce a geometric shape constraint on the segmentation output.
We develop a multi-task deep network that jointly predicts semantic segmentation and signed distance map (SDM) of object surfaces.
Experiments show that our method outperforms current state-of-the-art approaches with improved shape estimation.
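The signed distance map (SDM) used as an auxiliary regression target can be computed from a binary mask with Euclidean distance transforms. A minimal sketch, assuming the common sign convention of negative inside the object and positive outside (conventions vary across papers):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed distance map of a binary mask: negative inside the object,
    positive outside (assumed sign convention)."""
    outside = distance_transform_edt(~mask)  # distance from background to object
    inside = distance_transform_edt(mask)    # distance from object to background
    return outside - inside
```

Regressing such a map alongside the segmentation gives the network a dense, shape-sensitive target, since every voxel encodes its distance to the object surface.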
arXiv Detail & Related papers (2020-07-21T11:44:52Z) - Monocular Human Pose and Shape Reconstruction using Part Differentiable
Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all generated content) and is not responsible for any consequences of its use.