Interactive Segmentation via Deep Learning and B-Spline Explicit Active Surfaces
- URL: http://arxiv.org/abs/2110.12939v1
- Date: Mon, 25 Oct 2021 13:17:53 GMT
- Title: Interactive Segmentation via Deep Learning and B-Spline Explicit Active Surfaces
- Authors: Helena Williams, João Pedrosa, Laura Cattani, Susanne Housmans, Tom Vercauteren, Jan Deprest, Jan D'hooge
- Abstract summary: A novel interactive CNN-based segmentation framework is proposed in this work.
The interactive element of the framework allows the user to precisely edit the contour in real-time.
This framework was applied to the task of 2D segmentation of the levator hiatus from 2D ultrasound images.
- Score: 4.879071251245923
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatic medical image segmentation via convolutional neural networks (CNNs)
has shown promising results. However, CNNs may not always be robust enough for
clinical use. Sub-optimal segmentations require clinicians to manually
delineate the target object, causing frustration. To address this problem, a
novel interactive CNN-based segmentation framework is proposed in this work.
The aim is to represent the CNN segmentation contour as B-splines by utilising
B-spline explicit active surfaces (BEAS). The interactive element of the
framework allows the user to precisely edit the contour in real-time, and by
utilising BEAS it ensures the final contour is smooth and anatomically
plausible. This framework was applied to the task of 2D segmentation of the
levator hiatus from 2D ultrasound (US) images, and compared to the current
clinical tools used in pelvic floor disorder clinics (4DView, GE Healthcare;
Zipf, Austria). Experimental results show that: 1) the proposed framework is
more robust than current state-of-the-art CNNs; 2) the perceived workload,
calculated via the NASA-TLX index, was reduced by more than half for the proposed
approach in comparison to current clinical tools; and 3) the proposed tool
requires at least 13 seconds less user time than the clinical tools, which was
significant (p=0.001).
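The core idea of representing a segmentation contour with B-splines can be illustrated with a minimal numpy-only sketch. This is not the authors' implementation: the noisy-circle contour below is an illustrative stand-in for a CNN-predicted boundary, and the uniform periodic cubic B-spline is a simplified proxy for BEAS. Because each segment of the curve depends on only four control points, dragging one control point deforms the contour locally, which is what makes real-time interactive editing possible while the spline keeps the contour smooth.

```python
import numpy as np

# Hypothetical noisy contour standing in for a CNN-predicted boundary:
# a unit circle perturbed with Gaussian noise.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
pts += 0.05 * rng.standard_normal(pts.shape)

def eval_closed_bspline(ctrl, samples_per_seg=4):
    """Evaluate a uniform periodic cubic B-spline whose control points
    are `ctrl` (an N x 2 array). Returns a dense, smooth closed contour.
    The curve approximates (rather than interpolates) the control points,
    so noise in them is smoothed out."""
    n = len(ctrl)
    u = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)
    # Cubic B-spline basis functions evaluated on one segment.
    basis = np.stack([
        (1 - u) ** 3,
        3 * u ** 3 - 6 * u ** 2 + 4,
        -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
        u ** 3,
    ], axis=1) / 6.0
    out = []
    for i in range(n):
        # Each segment blends four consecutive control points (wrapped),
        # so editing one control point only affects nearby segments.
        idx = [(i - 1) % n, i % n, (i + 1) % n, (i + 2) % n]
        out.append(basis @ ctrl[idx])
    return np.concatenate(out)

contour = eval_closed_bspline(pts)
# The smoothed contour stays close to the underlying unit circle.
radii = np.hypot(contour[:, 0], contour[:, 1])
```

With 50 control points and 4 samples per segment, `contour` has 200 points, and the mean radius stays near 1 despite the injected noise.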
Related papers
- DEEPBEAS3D: Deep Learning and B-Spline Explicit Active Surfaces [3.560949684583438]
We propose a novel 3D extension of an interactive segmentation framework that represents a segmentation from a convolutional neural network (CNN) as a B-spline explicit active surface (BEAS).
BEAS ensures segmentations are smooth in 3D space, increasing anatomical plausibility, while allowing the user to precisely edit the 3D surface.
Experimental results show that: 1) the proposed framework gives the user explicit control of the surface contour; 2) the perceived workload calculated via the NASA-TLX index was reduced by 30% compared to VOCAL; and 3) it required 70% (170 seconds) less user time than VOCAL.
arXiv Detail & Related papers (2023-09-05T15:54:35Z)
- UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation [93.88170217725805]
We propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks as well as efficiency in terms of parameters, compute cost, and inference speed.
The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features.
Our evaluations on five benchmarks, Synapse, BTCV, ACDC, BraTS, and Decathlon-Lung, reveal the effectiveness of our contributions in terms of both efficiency and accuracy.
arXiv Detail & Related papers (2022-12-08T18:59:57Z)
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- MIDeepSeg: Minimally Interactive Segmentation of Unseen Objects from Medical Images Using Deep Learning [15.01235930304888]
We propose a novel deep learning-based interactive segmentation method that has high efficiency due to only requiring clicks as user inputs.
Our proposed framework achieves accurate results with fewer user interactions and less time compared with state-of-the-art interactive frameworks.
arXiv Detail & Related papers (2021-04-25T14:15:17Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel Frustum ultrasound based catheter segmentation method.
The proposed method achieved state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
- DONet: Dual Objective Networks for Skin Lesion Segmentation [77.9806410198298]
We propose a simple yet effective framework, named Dual Objective Networks (DONet), to improve the skin lesion segmentation.
Our DONet adopts two symmetric decoders to produce different predictions for approaching different objectives.
To address the challenge of the large variety of lesion scales and shapes in dermoscopic images, we additionally propose a recurrent context encoding module (RCEM).
arXiv Detail & Related papers (2020-08-19T06:02:46Z)
- Collaborative Boundary-aware Context Encoding Networks for Error Map Prediction [65.44752447868626]
We propose collaborative boundary-aware context encoding networks, called AEP-Net, for the error prediction task.
Specifically, we propose a collaborative feature transformation branch for better feature fusion between images and masks, and precise localization of error regions.
The AEP-Net achieves an average DSC of 0.8358 and 0.8164 for the error prediction task, and shows a high Pearson correlation coefficient of 0.9873.
arXiv Detail & Related papers (2020-06-25T12:42:01Z)
- Boundary-aware Context Neural Network for Medical Image Segmentation [15.585851505721433]
Medical image segmentation can provide reliable basis for further clinical analysis and disease diagnosis.
Most existing CNN-based methods produce unsatisfactory segmentation masks without accurate object boundaries.
In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation.
arXiv Detail & Related papers (2020-05-03T02:35:49Z)
- SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation [2.6837973648527926]
We present a new architecture called Shape Attentive U-Net (SAUNet) which focuses on model interpretability and robustness.
Our method achieves state-of-the-art results on the two large public cardiac MRI image segmentation datasets of SUN09 and AC17.
arXiv Detail & Related papers (2020-01-21T16:48:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.