3D Arterial Segmentation via Single 2D Projections and Depth Supervision
in Contrast-Enhanced CT Images
- URL: http://arxiv.org/abs/2309.08481v1
- Date: Fri, 15 Sep 2023 15:41:40 GMT
- Title: 3D Arterial Segmentation via Single 2D Projections and Depth Supervision
in Contrast-Enhanced CT Images
- Authors: Alina F. Dima, Veronika A. Zimmer, Martin J. Menten, Hongwei Bran Li,
Markus Graf, Tristan Lemke, Philipp Raffler, Robert Graf, Jan S. Kirschke,
Rickmer Braren, Daniel Rueckert
- Abstract summary: Training 3D deep networks requires large amounts of manual 3D annotation from experts.
We propose a novel method to segment the 3D peripancreatic arteries solely from one annotated 2D projection.
We demonstrate that by annotating a single, randomly chosen projection for each training sample, we obtain comparable performance to annotating multiple 2D projections.
- Score: 9.324710035242397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated segmentation of the blood vessels in 3D volumes is an essential
step for the quantitative diagnosis and treatment of many vascular diseases. 3D
vessel segmentation has been actively investigated, mostly with deep learning
approaches. However, training 3D deep networks requires large amounts of manual
3D annotation from experts, which is laborious to obtain. This is especially
true for 3D vessel segmentation, as vessels are sparse yet spread out over many
slices and appear disconnected when viewed in individual 2D slices.
In this work, we propose a novel method to segment the 3D peripancreatic
arteries solely from one annotated 2D projection per training image with depth
supervision. We perform extensive experiments on the segmentation of
peripancreatic arteries on 3D contrast-enhanced CT images and demonstrate how
well we capture the rich depth information from 2D projections. We demonstrate
that by annotating a single, randomly chosen projection for each training
sample, we obtain comparable performance to annotating multiple 2D projections,
thereby reducing the annotation effort. Furthermore, by mapping the 2D labels
to the 3D space using depth information and incorporating this into training,
we almost close the performance gap between 3D supervision and 2D supervision.
Our code is available at: https://github.com/alinafdima/3Dseg-mip-depth.
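To make the mechanism concrete, here is a minimal numpy sketch of the two geometric ingredients the abstract describes: the maximum intensity projection (MIP) that an annotator labels in 2D, and the mapping of those 2D labels back into 3D at the depth where each projected maximum occurs. This is an illustration of the idea under simplifying assumptions, not the authors' implementation (see the linked repository for that); the function names are ours.

```python
import numpy as np

def mip_with_depth(volume, axis=0):
    """Maximum intensity projection (MIP) of a 3D volume along one axis,
    plus the depth (argmax index) at which each projected maximum occurs."""
    return volume.max(axis=axis), volume.argmax(axis=axis)

def backproject_labels(label_2d, depth, vol_shape, axis=0):
    """Lift a 2D annotation drawn on the MIP back into the 3D volume by
    placing each labeled pixel at the depth of its intensity maximum."""
    label_3d = np.zeros(vol_shape, dtype=label_2d.dtype)
    coords = list(np.indices(label_2d.shape))  # pixel coordinates of the projection
    coords.insert(axis, depth)                 # re-insert the recovered depth axis
    label_3d[tuple(coords)] = label_2d
    return label_3d
```

The depth-lifted labels produced this way correspond to what the abstract calls mapping the 2D labels to the 3D space and incorporating them into training.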
Related papers
- Weakly Supervised Monocular 3D Detection with a Single-View Image [58.57978772009438]
Monocular 3D detection aims for precise 3D object localization from a single-view image.
We propose SKD-WM3D, a weakly supervised monocular 3D detection framework.
We show that SKD-WM3D clearly surpasses the state of the art and is even on par with many fully supervised methods.
arXiv Detail & Related papers (2024-02-29T13:26:47Z)
- 3D Vascular Segmentation Supervised by 2D Annotation of Maximum Intensity Projection [33.34240545722551]
Vascular structure segmentation plays a crucial role in medical analysis and clinical applications.
Existing weakly supervised methods have exhibited suboptimal performance when handling sparse vascular structures.
Here, we employ maximum intensity projection (MIP) to reduce the 3D volume to a 2D image for efficient annotation.
We introduce a weakly-supervised network that fuses 2D-3D deep features via MIP to further improve segmentation performance.
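One common way to let a 2D MIP annotation supervise a full 3D prediction is a projection loss: the predicted 3D probability map is max-projected along the annotated axis and compared against the 2D label. A minimal PyTorch sketch assuming a soft-Dice formulation (the loss actually used in the paper may differ):

```python
import torch

def mip_dice_loss(pred_3d, label_2d, dim=-1, eps=1e-6):
    """Soft Dice between the max-projection of a predicted 3D probability
    map (values in [0, 1]) and a 2D annotation of the image's MIP.
    Gradients flow through torch.max to the voxels attaining the maximum."""
    proj = pred_3d.max(dim=dim).values
    inter = (proj * label_2d).sum()
    denom = proj.sum() + label_2d.sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```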
arXiv Detail & Related papers (2024-02-19T13:24:46Z)
- Weakly Supervised 3D Object Detection via Multi-Level Visual Guidance [72.6809373191638]
We propose a framework to study how to leverage constraints between 2D and 3D domains without requiring any 3D labels.
First, we design a feature-level constraint to align LiDAR and image features based on object-aware regions.
Second, the output-level constraint is developed to enforce the overlap between 2D and projected 3D box estimations.
Third, the training-level constraint is utilized by producing accurate and consistent 3D pseudo-labels that align with the visual data.
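For intuition on the output-level constraint, the sketch below shows the usual recipe (our variable names, assuming box corners given in camera coordinates with positive depth): project the 3D box's corners through the camera intrinsics and score the overlap with a 2D detection.

```python
import numpy as np

def project_box(corners_3d, K):
    """Project the 8 corners (8, 3) of a 3D box, given intrinsics K (3, 3),
    and return the tight axis-aligned 2D box [x1, y1, x2, y2]."""
    uvw = corners_3d @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]  # perspective division (assumes z > 0)
    return np.array([uv[:, 0].min(), uv[:, 1].min(),
                     uv[:, 0].max(), uv[:, 1].max()])

def iou_2d(a, b):
    """Intersection-over-union of two boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)
```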
arXiv Detail & Related papers (2023-12-12T18:57:25Z)
- Multi-View Representation is What You Need for Point-Cloud Pre-Training [22.55455166875263]
This paper proposes a novel approach to point-cloud pre-training that learns 3D representations by leveraging pre-trained 2D networks.
We train the 3D feature extraction network with the help of the novel 2D knowledge transfer loss.
Experimental results demonstrate that our pre-trained model can be successfully transferred to various downstream tasks.
arXiv Detail & Related papers (2023-06-05T03:14:54Z)
- Joint Self-Supervised Image-Volume Representation Learning with Intra-Inter Contrastive Clustering [31.52291149830299]
Self-supervised learning (SSL) can overcome the lack of labeled training samples by learning feature representations from unlabeled data.
Most current SSL techniques in the medical field have been designed for either 2D images or 3D volumes.
We propose a novel framework for unsupervised joint learning on 2D and 3D data modalities.
arXiv Detail & Related papers (2022-12-04T18:57:44Z)
- MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent.
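A rough sketch of such a dense correspondence objective, assuming corresponding pixel pairs between the two rendered views have already been gathered (the paper's formulation may differ in detail):

```python
import torch
import torch.nn.functional as F

def dense_infonce(feat_a, feat_b, tau=0.07):
    """feat_a, feat_b: (N, C) embeddings of N pixel pairs, where row i of
    feat_a and row i of feat_b see the same 3D surface point from two
    rendered views. InfoNCE pulls matching pixels together and pushes
    all other pixels in the batch apart."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / tau                  # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)   # diagonal = positive pairs
```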
arXiv Detail & Related papers (2022-08-18T00:48:15Z)
- Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a single high-resolution 2D image (a "super image") by stitching the slices of the 3D volume side by side.
While attaining results equal, if not superior, to those of 3D networks using only 2D counterparts, model complexity is reduced by around threefold.
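The core rearrangement is straightforward; a minimal sketch (the grid layout is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def to_super_image(volume, cols=8):
    """Tile the D slices of a (D, H, W) volume into a single 2D montage,
    leaving unused grid cells empty."""
    d, h, w = volume.shape
    rows = -(-d // cols)  # ceiling division
    canvas = np.zeros((rows * h, cols * w), dtype=volume.dtype)
    for i in range(d):
        r, c = divmod(i, cols)
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = volume[i]
    return canvas
```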
arXiv Detail & Related papers (2022-05-05T09:59:03Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that leverages 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
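The first step is essentially feature mimicry. A hedged sketch, assuming the 3D teacher's features have already been projected into the layout of the 2D feature map:

```python
import torch.nn.functional as F

def distill_3d_to_2d(feat_2d, feat_3d_proj):
    """L2 mimic loss: push the 2D student's features toward the projected
    features of a frozen, pretrained 3D teacher (gradients stopped)."""
    return F.mse_loss(feat_2d, feat_3d_proj.detach())
```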
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Comparative Evaluation of 3D and 2D Deep Learning Techniques for Semantic Segmentation in CT Scans [0.0]
We propose a 3D stack-based deep learning technique for segmenting manifestations of consolidation and ground-glass opacities in 3D Computed Tomography (CT) scans.
We present a comparison based on the segmentation results, the contextual information retained, and the inference time between this 3D technique and a traditional 2D deep learning technique.
The 3D technique results in a 5X reduction in the inference time compared to the 2D technique.
arXiv Detail & Related papers (2021-01-19T13:23:43Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a 3D cylinder partition and 3D cylinder convolution based framework, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
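A minimal sketch of the cylindrical partition (grid resolution and ranges are illustrative defaults, not the paper's configuration):

```python
import numpy as np

def cylindrical_voxel_indices(points, grid=(480, 360, 32),
                              rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Map (N, 3) Cartesian LiDAR points to integer (rho, phi, z) voxel
    indices on a cylindrical grid, so cells grow with distance and better
    match the uneven point density of driving scenes."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)
    phi = np.arctan2(y, x)  # angle in [-pi, pi)
    r_idx = np.clip((rho / rho_max * grid[0]).astype(int), 0, grid[0] - 1)
    p_idx = np.clip(((phi + np.pi) / (2 * np.pi) * grid[1]).astype(int), 0, grid[1] - 1)
    z_idx = np.clip(((z - z_min) / (z_max - z_min) * grid[2]).astype(int), 0, grid[2] - 1)
    return np.stack([r_idx, p_idx, z_idx], axis=1)
```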
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- DSGN: Deep Stereo Geometry Network for 3D Object Detection [79.16397166985706]
There is a large performance gap between image-based and LiDAR-based 3D object detectors.
Our method, called Deep Stereo Geometry Network (DSGN), significantly reduces this gap.
For the first time, we provide a simple and effective one-stage stereo-based 3D detection pipeline.
arXiv Detail & Related papers (2020-01-10T11:44:37Z)