DEEPBEAS3D: Deep Learning and B-Spline Explicit Active Surfaces
- URL: http://arxiv.org/abs/2309.02335v1
- Date: Tue, 5 Sep 2023 15:54:35 GMT
- Title: DEEPBEAS3D: Deep Learning and B-Spline Explicit Active Surfaces
- Authors: Helena Williams and João Pedrosa and Muhammad Asad and Laura Cattani
and Tom Vercauteren and Jan Deprest and Jan D'hooge
- Abstract summary: We propose a novel 3D extension of an interactive segmentation framework that represents a segmentation from a convolutional neural network (CNN) as a B-spline explicit active surface (BEAS).
BEAS ensures segmentations are smooth in 3D space, increasing anatomical plausibility, while allowing the user to precisely edit the 3D surface.
Experimental results show that: 1) the proposed framework gives the user explicit control of the surface contour; 2) the perceived workload calculated via the NASA-TLX index was reduced by 30% compared to VOCAL; and 3) it required 70% (170 seconds) less user time than VOCAL.
- Score: 3.560949684583438
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning-based automatic segmentation methods have become
state-of-the-art. However, they are often not robust enough for direct clinical
application, as domain shifts between training and testing data affect their
performance. Failure in automatic segmentation can cause sub-optimal results
that require correction. To address these problems, we propose a novel 3D
extension of an interactive segmentation framework that represents a
segmentation from a convolutional neural network (CNN) as a B-spline explicit
active surface (BEAS). BEAS ensures segmentations are smooth in 3D space,
increasing anatomical plausibility, while allowing the user to precisely edit
the 3D surface. We apply this framework to the task of 3D segmentation of the
anal sphincter complex (AS) from transperineal ultrasound (TPUS) images, and
compare it to the clinical tool used in the pelvic floor disorder clinic (4D
View VOCAL, GE Healthcare; Zipf, Austria). Experimental results show that: 1)
the proposed framework gives the user explicit control of the surface contour;
2) the perceived workload calculated via the NASA-TLX index was reduced by 30%
compared to VOCAL; and 3) it required 70% (170 seconds) less user time than
VOCAL (p < 0.00001).
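As a rough illustration of the representation (a sketch of the general BEAS idea, not the authors' code), a B-spline explicit active surface describes the boundary explicitly as a radius function r(θ, φ) expanded on a B-spline basis, so a user edit amounts to a local change of a few coefficients. A minimal NumPy sketch, assuming a cubic basis on a uniform coefficient grid and ignoring the periodic boundary handling a spherical parameterization would need:

```python
import numpy as np

def cubic_bspline(t):
    # Cubic B-spline basis function; support on |t| < 2.
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = 2.0 / 3.0 - t[m1] ** 2 + t[m1] ** 3 / 2.0
    out[m2] = (2.0 - t[m2]) ** 3 / 6.0
    return out

def beas_radius(coeffs, thetas, phis, h=1.0):
    """Evaluate the explicit surface radius r(theta, phi) as a
    B-spline expansion over a (K, L) grid of coefficients.

    The coefficients are the shape parameters: editing one of them
    deforms the surface only within the local support of its basis
    function, which is what makes precise local edits possible.
    """
    K, L = coeffs.shape
    r = np.zeros((len(thetas), len(phis)))
    for k in range(K):
        for l in range(L):
            bk = cubic_bspline(thetas / h - k)   # (T,) basis weights
            bl = cubic_bspline(phis / h - l)     # (P,) basis weights
            r += coeffs[k, l] * np.outer(bk, bl)
    return r
```

Because cubic B-splines form a partition of unity, a grid of all-ones coefficients yields a unit-radius (spherical) surface wherever the basis fully covers the parameter range.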
Related papers
- PRISM Lite: A lightweight model for interactive 3D placenta segmentation in ultrasound [6.249772260759159]
Placenta volume measured from 3D ultrasound (3DUS) images is an important tool for tracking the growth trajectory and is associated with pregnancy outcomes.
Manual segmentation is the gold standard, but it is time-consuming and subjective.
We propose a lightweight interactive segmentation model aiming for clinical use to interactively segment the placenta from 3DUS images in real-time.
arXiv Detail & Related papers (2024-08-09T22:49:19Z)
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- Intraoperative 2D/3D Image Registration via Differentiable X-ray Rendering [5.617649111108429]
We present DiffPose, a self-supervised approach that leverages patient-specific simulation and differentiable physics-based rendering to achieve accurate 2D/3D registration without relying on manually labeled data.
DiffPose achieves sub-millimeter accuracy across surgical datasets at intraoperative speeds, improving upon existing unsupervised methods by an order of magnitude and even outperforming supervised baselines.
arXiv Detail & Related papers (2023-12-11T13:05:54Z)
- Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation [4.5206601127476445]
We propose a novel convolutional neural network (CNN) and self-supervised learning (SSL) method for label-efficient 3D-to-2D segmentation.
Results on different datasets demonstrate that the proposed CNN significantly improves the state of the art in scenarios with limited labeled data by up to 8% in Dice score.
arXiv Detail & Related papers (2023-07-06T14:16:25Z)
- UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation [93.88170217725805]
We propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks as well as efficiency in terms of parameters, compute cost, and inference speed.
The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features.
Our evaluations on five benchmarks (Synapse, BTCV, ACDC, BraTS, and Decathlon-Lung) reveal the effectiveness of our contributions in terms of both efficiency and accuracy.
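To make the paired-attention idea concrete, here is a loose NumPy sketch (an assumption-laden illustration, not the UNETR++ implementation): a spatial branch attends over voxel positions, a channel branch attends over feature channels, and the two branches share the query/key projections while keeping separate value projections.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def paired_attention(x, Wq, Wk, Wvs, Wvc):
    """Sketch of a paired spatial/channel attention block.

    x: (N, C) flattened voxel tokens; W*: (C, D) projection matrices.
    Q and K are shared between the two branches (the weight sharing
    that keeps the block parameter-efficient); each branch has its
    own value projection.
    """
    Q, K = x @ Wq, x @ Wk                 # shared projections, (N, D)
    Vs, Vc = x @ Wvs, x @ Wvc             # per-branch values, (N, D)
    n, d = Q.shape
    # Spatial branch: (N, N) attention map over voxel positions.
    spatial = softmax(Q @ K.T / np.sqrt(d), axis=-1) @ Vs
    # Channel branch: (D, D) attention map over feature channels,
    # built by reusing the same Q and K.
    channel = Vc @ softmax(Q.T @ K / np.sqrt(n), axis=-1)
    # Fuse the two branches by concatenation.
    return np.concatenate([spatial, channel], axis=-1)   # (N, 2D)
```

The spatial map costs O(N²) while the channel map costs only O(D²), which is why mixing the two is attractive for large 3D volumes.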
arXiv Detail & Related papers (2022-12-08T18:59:57Z)
- Interactive Segmentation via Deep Learning and B-Spline Explicit Active Surfaces [4.879071251245923]
A novel interactive CNN-based segmentation framework is proposed in this work.
The interactive element of the framework allows the user to precisely edit the contour in real-time.
This framework was applied to the task of 2D segmentation of the levator hiatus from 2D ultrasound images.
arXiv Detail & Related papers (2021-10-25T13:17:53Z)
- Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine Framework and Its Adversarial Examples [74.92488215859991]
We propose a novel 3D-based coarse-to-fine framework to efficiently tackle these challenges.
The proposed 3D framework outperforms its 2D counterparts by a large margin, since it can leverage the rich spatial information along all three axes.
We conduct experiments on three datasets: the NIH pancreas dataset, the JHMI pancreas dataset, and the JHMI pathological cyst dataset.
arXiv Detail & Related papers (2020-10-29T15:39:19Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel frustum-ultrasound-based catheter segmentation method.
The proposed method achieved state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
- Improving Point Cloud Semantic Segmentation by Learning 3D Object Detection [102.62963605429508]
Point cloud semantic segmentation plays an essential role in autonomous driving.
Current 3D semantic segmentation networks focus on convolutional architectures that perform well on well-represented classes.
We propose a novel Detection-Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task.
arXiv Detail & Related papers (2020-09-22T14:17:40Z)
- AutoHR: A Strong End-to-end Baseline for Remote Heart Rate Measurement with Neural Searching [76.4844593082362]
We investigate why existing end-to-end networks perform poorly in challenging conditions and establish a strong baseline for remote HR measurement with neural architecture search (NAS).
Comprehensive experiments are performed on three benchmark datasets on both intra-temporal and cross-dataset testing.
arXiv Detail & Related papers (2020-04-26T05:43:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.