A Deep Learning-Based and Fully Automated Pipeline for Regurgitant
Mitral Valve Anatomy Analysis from 3D Echocardiography
- URL: http://arxiv.org/abs/2302.10634v1
- Date: Tue, 21 Feb 2023 12:48:44 GMT
- Title: A Deep Learning-Based and Fully Automated Pipeline for Regurgitant
Mitral Valve Anatomy Analysis from 3D Echocardiography
- Authors: Riccardo Munafò, Simone Saitta, Giacomo Ingallina, Paolo Denti,
Francesco Maisano, Eustachio Agricola, Alberto Redaelli, Emiliano Votta
- Abstract summary: 3D transesophageal echocardiography (3DTEE) is the recommended method for diagnosing mitral regurgitation (MR).
Manual TEE segmentations are time-consuming and prone to intra-operator variability, affecting the reliability of the measurements.
We developed a fully automated pipeline using a 3D convolutional neural network (CNN) to segment MV substructures and quantify MV anatomy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D transesophageal echocardiography (3DTEE) is the recommended method for
diagnosing mitral regurgitation (MR). 3DTEE provides a high-quality 3D image of
the mitral valve (MV), allowing for precise segmentation and measurement of the
regurgitant valve anatomy. However, manual TEE segmentations are time-consuming
and prone to intra-operator variability, affecting the reliability of the
measurements. To address this, we developed a fully automated pipeline using a
3D convolutional neural network (CNN) to segment MV substructures (annulus,
anterior leaflet, and posterior leaflet) and quantify MV anatomy. The 3D CNN,
based on a multi-decoder residual U-Net architecture, was trained and tested on
a dataset comprising 100 3DTEE images with corresponding segmentations. Within
the pipeline, a custom algorithm refines the CNN-based segmentations and
extracts MV models, from which anatomical landmarks and features are
quantified. The accuracy of the proposed method was assessed using Dice score
and mean surface distance (MSD) against ground truth segmentations, and the
extracted anatomical parameters were compared against those obtained with the
semi-automated commercial software TomTec Image Arena. The trained 3D CNN
achieved an average
Dice score of 0.79 and MSD of 0.47 mm for the combined segmentation of the
annulus, anterior and posterior leaflet. The proposed CNN architecture
outperformed a baseline residual U-Net architecture in MV substructure
segmentation, and the refinement of the predicted annulus segmentation improved
MSD by 8.36%. The annular and leaflet linear measurements differed by less than
7.94 mm and 3.67 mm, respectively, compared to the 3D measurements obtained
with TomTec Image Arena. The proposed pipeline was faster than the commercial
software, with a modeling time of 12.54 s and a quantification time of 54.42 s.
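For context, the two accuracy metrics quoted above (Dice score and mean surface distance) can be computed from binary 3D masks with standard array operations. The snippet below is a minimal sketch, not the authors' implementation: the array names, the voxel-spacing argument, and the use of SciPy's Euclidean distance transform are assumptions made for illustration.

import numpy as np
from scipy import ndimage

def dice_score(pred, gt):
    # Dice = 2*|A & B| / (|A| + |B|) for two binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def surface_voxels(mask):
    # Surface voxels = mask voxels removed by a binary erosion.
    return np.logical_and(mask, ~ndimage.binary_erosion(mask))

def mean_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
    # Symmetric mean distance (in mm) between the two mask surfaces.
    pred_surf = surface_voxels(pred.astype(bool))
    gt_surf = surface_voxels(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of each mask,
    # scaled by the physical voxel spacing.
    dist_to_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    return float(np.concatenate([dist_to_gt[pred_surf],
                                 dist_to_pred[gt_surf]]).mean())

Applied to the combined annulus and leaflet masks with the scan's physical voxel spacing, functions of this kind would yield values directly comparable to the 0.79 Dice and 0.47 mm MSD figures reported in the abstract.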
Related papers
- A weakly-supervised deep learning model for fast localisation and delineation of the skeleton, internal organs, and spinal canal on Whole-Body Diffusion-Weighted MRI (WB-DWI) [0.0]
Apparent Diffusion Coefficient (ADC) values and Total Diffusion Volume (TDV) from Whole-body diffusion-weighted MRI (WB-DWI) are recognized cancer imaging biomarkers.
As a first step, we propose an algorithm to generate fast and reproducible probability maps of the skeleton, adjacent internal organs (liver, spleen, urinary bladder, and kidneys), and spinal canal.
arXiv Detail & Related papers (2025-03-26T17:03:46Z) - Towards Patient-Specific Surgical Planning for Bicuspid Aortic Valve Repair: Fully Automated Segmentation of the Aortic Valve in 4D CT [0.0732099897993399]
The bicuspid aortic valve (BAV) is the most prevalent congenital heart defect and may require surgery for complications such as stenosis, regurgitation, and aortopathy.
Contrast-enhanced 4D computed tomography (CT) produces volumetric temporal sequences with excellent contrast and spatial resolution.
Deep learning-based methods are capable of fully automated segmentation, but no BAV-specific model exists.
arXiv Detail & Related papers (2025-02-13T22:43:43Z) - Deep-Motion-Net: GNN-based volumetric organ shape reconstruction from single-view 2D projections [1.8189671456038365]
We propose an end-to-end graph neural network architecture that enables 3D organ shape reconstruction during radiotherapy.
The proposed model learns the mesh regression from a patient-specific template and deep features extracted from kV images at arbitrary projection angles.
The overall framework was tested quantitatively on synthetic respiratory motion scenarios and qualitatively on in-treatment images acquired over full scan series for liver cancer patients.
arXiv Detail & Related papers (2024-07-09T09:07:18Z) - Vision Transformers increase efficiency of 3D cardiac CT multi-label
segmentation [0.0]
Two cardiac computed tomography (CT) datasets were used to train networks to segment multiple regions representing the whole heart in 3D.
The segmented regions included the left and right atrium and ventricle, left ventricular myocardium, ascending aorta, pulmonary arteries, pulmonary veins, and left atrial appendage.
arXiv Detail & Related papers (2023-10-13T13:35:19Z) - On the Localization of Ultrasound Image Slices within Point Distribution
Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z) - Multi-class point cloud completion networks for 3D cardiac anatomy
reconstruction from cine magnetic resonance images [4.1448595037512925]
We propose a novel fully automatic surface reconstruction pipeline capable of reconstructing multi-class 3D cardiac anatomy meshes.
Its key component is a multi-class point cloud completion network (PCCN) capable of correcting both the sparsity and misalignment issues of the 3D reconstruction task.
arXiv Detail & Related papers (2023-07-17T14:52:52Z) - Dual Multi-scale Mean Teacher Network for Semi-supervised Infection
Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations, which do not capture multiple receptive field sizes over the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z) - CNN-based fully automatic wrist cartilage volume quantification in MR
Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z) - Automatic size and pose homogenization with spatial transformer network
to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z) - Three-Dimensional Embedded Attentive RNN (3D-EAR) Segmentor for Left
Ventricle Delineation from Myocardial Velocity Mapping [1.8653386811342048]
We propose a novel fully automated framework that incorporates a 3D-UNet backbone with an embedded multichannel attention mechanism and LSTM-based recurrent neural networks (RNNs) for the MVM-CMR datasets.
By comparing against the baseline 3D-UNet model and through ablation studies with and without the embedded attentive LSTM modules and various loss functions, we demonstrate that the proposed model outperforms the state-of-the-art baseline models by a significant margin.
arXiv Detail & Related papers (2021-04-26T11:04:43Z) - SCPM-Net: An Anchor-free 3D Lung Nodule Detection Network using Sphere
Representation and Center Points Matching [47.79483848496141]
We propose a 3D sphere representation-based center-points matching detection network (SCPM-Net).
It is anchor-free and automatically predicts the position, radius, and offset of nodules without the manual design of nodule/anchor parameters.
We show that our proposed SCPM-Net framework achieves superior performance compared with existing anchor-based and anchor-free methods for lung nodule detection.
arXiv Detail & Related papers (2021-04-12T05:51:29Z) - Deep Implicit Statistical Shape Models for 3D Medical Image Delineation [47.78425002879612]
3D delineation of anatomical structures is a cardinal goal in medical imaging analysis.
Prior to deep learning, statistical shape models that imposed anatomical constraints and produced high quality surfaces were a core technology.
We present deep implicit statistical shape models (DISSMs), a new approach to delineation that marries the representation power of CNNs with the robustness of SSMs.
arXiv Detail & Related papers (2021-04-07T01:15:06Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - Learning Hybrid Representations for Automatic 3D Vessel Centerline
Extraction [57.74609918453932]
Automatic blood vessel extraction from 3D medical images is crucial for vascular disease diagnoses.
Existing methods may suffer from discontinuities of extracted vessels when segmenting such thin tubular structures from 3D images.
We argue that preserving the continuity of extracted vessels requires taking the global geometry into account.
We propose a hybrid representation learning approach to address this challenge.
arXiv Detail & Related papers (2020-12-14T05:22:49Z) - Segmentation-free Estimation of Aortic Diameters from MRI Using Deep
Learning [2.231365407061881]
We propose a supervised deep learning method for the direct estimation of aortic diameters.
Our approach makes use of a 3D+2D convolutional neural network (CNN) that takes as input a 3D scan and outputs the aortic diameter at a given location.
Overall, the 3D+2D CNN achieved a mean absolute error between 2.2 and 2.4 mm, depending on the considered aortic location.
arXiv Detail & Related papers (2020-09-09T18:28:00Z) - Multi-modal segmentation of 3D brain scans using neural networks [0.0]
Deep convolutional neural networks are trained to segment 3D MRI (MPRAGE, DWI, FLAIR) and CT scans.
Segmentation quality is quantified using the Dice metric for a total of 27 anatomical structures.
arXiv Detail & Related papers (2020-08-11T09:13:54Z) - 4D Spatio-Temporal Convolutional Networks for Object Position Estimation
in OCT Volumes [69.62333053044712]
3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single OCT images.
We extend 3D CNNs to 4D-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking.
arXiv Detail & Related papers (2020-07-02T12:02:20Z) - Deep Negative Volume Segmentation [60.44793799306154]
We propose a new angle to the 3D segmentation task: segment empty spaces between all the tissues surrounding the object.
Our approach is an end-to-end pipeline that comprises a V-Net for bone segmentation.
We validate the idea on the CT scans in a 50-patient dataset, annotated by experts in maxillofacial medicine.
arXiv Detail & Related papers (2020-06-22T16:55:23Z) - Appearance Learning for Image-based Motion Estimation in Tomography [60.980769164955454]
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals.
Patient motion corrupts the geometry alignment in the reconstruction process resulting in motion artifacts.
We propose an appearance learning approach recognizing the structures of rigid motion independently from the scanned object.
arXiv Detail & Related papers (2020-06-18T09:49:11Z)