A Deep Learning-based Method to Extract Lumen and Media-Adventitia in
Intravascular Ultrasound Images
- URL: http://arxiv.org/abs/2102.10480v1
- Date: Sun, 21 Feb 2021 00:10:05 GMT
- Title: A Deep Learning-based Method to Extract Lumen and Media-Adventitia in
Intravascular Ultrasound Images
- Authors: Fubao Zhu, Zhengyuan Gao, Chen Zhao, Hanlei Zhu, Yong Dong, Jingfeng
Jiang, Neng Dai, Weihua Zhou
- Abstract summary: Intravascular ultrasound (IVUS) imaging allows direct visualization of the coronary vessel wall.
Current segmentation relies on manual operations, which are time-consuming and user-dependent.
In this paper, we aim to develop a deep learning-based method using an encoder-decoder deep architecture.
- Score: 3.2963079183841297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intravascular ultrasound (IVUS) imaging allows direct visualization of the
coronary vessel wall and is suitable for the assessment of atherosclerosis and
the degree of stenosis. Accurate segmentation and measurements of lumen and
media-adventitia (MA) from IVUS are essential for a successful clinical
evaluation. However, current segmentation relies on manual operations, which are
time-consuming and user-dependent. In this paper, we aim to develop a deep
learning-based method using an encoder-decoder deep architecture to
automatically extract both the lumen and MA borders. Our method, IVUS-U-Net++,
is an extension of the well-known U-Net++ model. More specifically, a feature
pyramid network was added to the U-Net++ model, enabling the utilization of
feature maps at different scales. As a result, the accuracy of the probability
map and subsequent segmentation have been improved. We collected 1746 IVUS
images from 18 patients in this study. The whole dataset was split into a
training dataset (1572 images) for the 10-fold cross-validation and a test
dataset (174 images) for evaluating the performance of models. Our IVUS-U-Net++
segmentation model achieved a Jaccard measure (JM) of 0.9412 and a Hausdorff
distance (HD) of 0.0639 mm for the lumen border, and a JM of 0.9509 and an HD of
0.0867 mm for the MA border. Moreover, the Pearson correlation
and Bland-Altman analyses were performed to evaluate the correlations of 12
clinical parameters measured from our segmentation results and the ground
truth, and automatic measurements agreed well with those from the ground truth
(all p < 0.01). In conclusion, our preliminary results demonstrate that the
proposed IVUS-U-Net++ model has great promise for clinical use.
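The two border-quality metrics reported above can be sketched as follows. This is a minimal illustration, not the authors' evaluation code: the toy masks, image size, and the pixel spacing `mm_per_px` (used to report HD in millimeters, as the abstract does) are made-up illustrative parameters.

```python
import math

def jaccard_measure(pred, truth):
    """Jaccard measure (JM): |A ∩ B| / |A ∪ B| over foreground pixels."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

def hausdorff_distance(a_pts, b_pts, mm_per_px=0.01):
    """Symmetric Hausdorff distance between two contour point sets, in mm."""
    def directed(src, dst):
        return max(min(math.dist(s, d) for d in dst) for s in src)
    return max(directed(a_pts, b_pts), directed(b_pts, a_pts)) * mm_per_px

# Toy flattened 4x4 binary masks (1 = inside the predicted/true border)
pred  = [0,1,1,0, 1,1,1,1, 1,1,1,1, 0,1,1,0]
truth = [0,1,1,0, 1,1,1,1, 1,1,1,0, 0,1,1,0]
print(jaccard_measure(pred, truth))  # 0.9166666666666666 (11/12)
```

In practice one would compute these on the segmentation masks and extracted border contours; libraries such as SciPy provide an optimized `directed_hausdorff` for real contour point sets.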
Related papers
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms [60.35639972035727]
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 ± 0.066 and 0.716 ± 0.125 on the respective datasets, with an average performance of up to 0.804 ± 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z) - A novel open-source ultrasound dataset with deep learning benchmarks for
spinal cord injury localization and anatomical segmentation [1.02101998415327]
We present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords.
We benchmark the performance metrics of several state-of-the-art object detection algorithms to localize the site of injury.
We evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images.
arXiv Detail & Related papers (2024-09-24T20:22:59Z) - TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Towards Unifying Anatomy Segmentation: Automated Generation of a
Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z) - A quality assurance framework for real-time monitoring of deep learning
segmentation models in radiotherapy [3.5752677591512487]
This work uses cardiac substructure segmentation as an example task to establish a quality assurance framework.
A benchmark dataset consisting of Computed Tomography (CT) images along with manual cardiac delineations of 241 patients was collected.
An image domain shift detector was developed by utilizing a trained Denoising autoencoder (DAE) and two hand-engineered features.
A regression model was trained to predict the per-patient segmentation accuracy, measured by Dice similarity coefficient (DSC)
arXiv Detail & Related papers (2023-05-19T14:51:05Z) - Deep learning-based detection of intravenous contrast in computed
tomography scans [0.7313653675718069]
Identifying intravenous (IV) contrast use within CT scans is a key component of data curation for model development and testing.
We developed and validated a CNN-based deep learning platform to identify IV contrast within CT scans.
arXiv Detail & Related papers (2021-10-16T00:46:45Z) - Systematic Clinical Evaluation of A Deep Learning Method for Medical
Image Segmentation: Radiosurgery Application [48.89674088331313]
We systematically evaluate a Deep Learning (DL) method in a 3D medical image segmentation task.
Our method is integrated into the radiosurgery treatment process and directly impacts the clinical workflow.
arXiv Detail & Related papers (2021-08-21T16:15:40Z) - Chest x-ray automated triage: a semiologic approach designed for
clinical implementation, exploiting different types of labels through a
combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
arXiv Detail & Related papers (2020-12-23T14:38:35Z) - Appearance Learning for Image-based Motion Estimation in Tomography [60.980769164955454]
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals.
Patient motion corrupts the geometry alignment in the reconstruction process resulting in motion artifacts.
We propose an appearance learning approach recognizing the structures of rigid motion independently from the scanned object.
arXiv Detail & Related papers (2020-06-18T09:49:11Z) - A Deep Learning-Based Method for Automatic Segmentation of Proximal
Femur from Quantitative Computed Tomography Images [5.731199807877257]
We developed a 3D image segmentation method based on V-Net, an end-to-end fully convolutional neural network (CNN).
We performed experiments to evaluate the effectiveness of the proposed segmentation method.
arXiv Detail & Related papers (2020-06-09T21:16:47Z) - Deep Learning Based Detection and Localization of Intracranial Aneurysms
in Computed Tomography Angiography [5.973882600944421]
A two-step model was implemented: a 3D region proposal network for initial aneurysm detection and 3D DenseNets for false-positive reduction.
Our model showed statistically higher accuracy, sensitivity, and specificity when compared to the available model at 0.25 FPPV and the best F-1 score.
arXiv Detail & Related papers (2020-05-22T10:49:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.