Volumetric Medical Image Segmentation via Scribble Annotations and Shape Priors
- URL: http://arxiv.org/abs/2310.08084v1
- Date: Thu, 12 Oct 2023 07:17:14 GMT
- Title: Volumetric Medical Image Segmentation via Scribble Annotations and Shape Priors
- Authors: Qiuhui Chen, Haiying Lyu, Xinyue Hu, Yong Lu, Yi Hong
- Abstract summary: We propose a scribble-based volumetric image segmentation method, Scribble2D5, which tackles 3D anisotropic image segmentation.
To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend semantic information from scribbles.
Also, we propose an optional add-on component, which incorporates the shape prior information from unpaired segmentation masks to further improve model accuracy.
- Score: 3.774643767869751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, weakly-supervised image segmentation using weak annotations like
scribbles has gained great attention in computer vision and medical image
analysis, since such annotations are much easier to obtain compared to
time-consuming and labor-intensive labeling at the pixel/voxel level. However,
due to a lack of structure supervision on regions of interest (ROIs), existing
scribble-based methods suffer from poor boundary localization. Furthermore,
most current methods are designed for 2D image segmentation, which do not fully
leverage the volumetric information if directly applied to each image slice. In
this paper, we propose a scribble-based volumetric image segmentation method,
Scribble2D5, which tackles 3D anisotropic image segmentation and aims to
improve its boundary prediction. To achieve this, we augment a 2.5D attention
UNet with a proposed label propagation module to extend semantic information
from scribbles, and use a combination of static and active boundary prediction
to learn the ROI's boundary and regularize its shape. Also, we propose an optional
add-on component, which incorporates the shape prior information from unpaired
segmentation masks to further improve model accuracy. Extensive experiments on
three public datasets and one private dataset demonstrate that our Scribble2D5
achieves state-of-the-art performance on volumetric image segmentation using
scribbles, and with shape priors when available.
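The core idea behind label propagation from scribbles can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual (learned) module: the function name, the intensity-tolerance rule, and the BFS flood fill are illustrative assumptions. Each scribble pixel seeds a breadth-first expansion that passes its class label to neighboring unlabeled pixels whose intensity stays within a tolerance of the seed, yielding a denser pseudo-label map from a few annotated strokes.

```python
from collections import deque

def propagate_scribbles(image, scribbles, tol=0.1):
    """Spread scribble class labels to similar-intensity neighbors via BFS.

    image:     2D list of floats (one slice of a volume)
    scribbles: 2D list of ints, 0 = unlabeled, >0 = class label
    tol:       max intensity difference to the seed pixel
    Returns a denser pseudo-label map with the same shape as `scribbles`.
    """
    h, w = len(image), len(image[0])
    labels = [row[:] for row in scribbles]
    # Seed the queue with every scribble pixel, remembering its intensity.
    queue = deque((r, c, image[r][c]) for r in range(h) for c in range(w)
                  if scribbles[r][c] > 0)
    while queue:
        r, c, seed_val = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and labels[nr][nc] == 0
                    and abs(image[nr][nc] - seed_val) <= tol):
                labels[nr][nc] = labels[r][c]   # inherit the class label
                queue.append((nr, nc, seed_val))
    return labels

# A 4x4 slice with a bright region (1.0) and dark background (0.0);
# one foreground scribble pixel (class 1) and one background pixel (class 2).
img = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
scr = [[0, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 2, 0],
       [0, 0, 0, 0]]
dense = propagate_scribbles(img, scr)
```

In this toy case every pixel ends up labeled: the bright 2x2 block receives class 1 and the rest class 2. In practice such propagated maps are noisy near boundaries, which is why the paper pairs propagation with explicit boundary prediction and shape regularization.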
Related papers
- Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models [57.37244894146089]
We propose Diff2Scene, which leverages frozen representations from text-image generative models, along with salient-aware and geometric-aware masks, for open-vocabulary 3D semantic segmentation and visual grounding tasks.
We show that it outperforms competitive baselines and achieves significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2024-07-18T16:20:56Z)
- 3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching between 3D and 2D Networks [26.29122638813974]
We propose a framework that can robustly learn from sparse annotation using the cross-teaching of both 3D and 2D networks.
Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods.
arXiv Detail & Related papers (2023-07-30T15:26:17Z) - SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings [12.79344668998054]
We propose SwIPE (Segmentation with Implicit Patch Embeddings) to enable accurate local boundary delineation and global shape coherence.
We show that SwIPE significantly improves over recent implicit approaches and outperforms state-of-the-art discrete methods with over 10x fewer parameters.
arXiv Detail & Related papers (2023-07-23T20:55:11Z)
- Piecewise Planar Hulls for Semi-Supervised Learning of 3D Shape and Pose from 2D Images [133.68032636906133]
We study the problem of estimating 3D shape and pose of an object in terms of keypoints, from a single 2D image.
The shape and pose are learned directly from images collected by categories and their partial 2D keypoint annotations.
arXiv Detail & Related papers (2022-11-14T16:18:11Z)
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855]
We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
We design a dual-branch network equipped with an active labeling strategy to make the most of a tiny fraction of labels.
Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
arXiv Detail & Related papers (2022-09-16T07:59:04Z)
- Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via Scribble Annotations [5.400947282838267]
Weakly-supervised image segmentation using weak annotations like scribbles has gained great attention.
Existing scribble-based methods suffer from poor boundary localization.
We propose Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction.
arXiv Detail & Related papers (2022-05-13T17:04:10Z)
- Weakly Supervised Volumetric Segmentation via Self-taught Shape Denoising Model [27.013224147257198]
We propose a novel weakly-supervised segmentation strategy capable of better capturing 3D shape prior in both model prediction and learning.
Our main idea is to extract a self-taught shape representation by leveraging weak labels, and then integrate this representation into segmentation prediction for shape refinement.
arXiv Detail & Related papers (2021-04-27T10:03:45Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel Frustum-ultrasound-based catheter segmentation method.
The proposed method achieves state-of-the-art performance while processing each volume in 0.25 seconds.
arXiv Detail & Related papers (2020-10-19T13:56:22Z) - Shape-aware Semi-supervised 3D Semantic Segmentation for Medical Images [24.216869988183092]
We propose a shape-aware semi-supervised segmentation strategy to leverage abundant unlabeled data and to enforce a geometric shape constraint on the segmentation output.
We develop a multi-task deep network that jointly predicts semantic segmentation and the signed distance map (SDM) of object surfaces.
Experiments show that our method outperforms current state-of-the-art approaches with improved shape estimation.
arXiv Detail & Related papers (2020-07-21T11:44:52Z)
- Refined Plane Segmentation for Cuboid-Shaped Objects by Leveraging Edge Detection [63.942632088208505]
We propose a post-processing algorithm to align the segmented plane masks with edges detected in the image.
This allows us to increase the accuracy of state-of-the-art approaches, while limiting ourselves to cuboid-shaped objects.
arXiv Detail & Related papers (2020-03-28T18:51:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.