Automated segmentation of microvessels in intravascular OCT images using
deep learning
- URL: http://arxiv.org/abs/2210.00166v1
- Date: Sat, 1 Oct 2022 02:14:14 GMT
- Title: Automated segmentation of microvessels in intravascular OCT images using
deep learning
- Authors: Juhwan Lee, Justin N. Kim, Lia Gomez-Perez, Yazan Gharaibeh, Issam
Motairek, Gabriel T. R. Pereira, Vladislav N. Zimin, Luis A. P. Dallan,
Ammar Hoori, Sadeer Al-Kindi, Giulio Guagliumi, Hiram G. Bezerra, David L.
Wilson
- Abstract summary: We developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images.
A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed.
Our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To analyze microvessels, a characteristic of plaque vulnerability, we
developed an automated deep learning method for detecting them in intravascular
optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from
85 lesions and 37 normal segments were analyzed. Manual annotation was done
using dedicated software (OCTOPUS) previously developed by our group. Data
augmentation in the polar (r, θ) domain was applied to raw IVOCT images
to ensure that microvessels appear at all possible angles. Pre-processing
methods included guidewire/shadow detection, lumen segmentation, pixel
shifting, and noise reduction. DeepLab v3+ was used to segment microvessel
candidates. A bounding box on each candidate was classified as either
microvessel or non-microvessel using a shallow convolutional neural network.
For better classification, we used data augmentation (i.e., angle rotation) on
bounding boxes with a microvessel during network training. Data augmentation
and pre-processing steps improved microvessel segmentation performance
significantly, yielding a method with Dice of 0.71+/-0.10 and pixel-wise
sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying
microvessels from candidates performed exceptionally well, with sensitivity of
99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The
classification step eliminated the majority of residual false positives, and
the Dice coefficient increased from 0.71 to 0.73. In addition, our method
produced 698 image frames with microvessels present, compared to 730 from
manual analysis, representing a 4.4% difference. When compared to the manual
method, the automated method improved microvessel continuity, implying improved
segmentation performance. The method will be useful for research purposes as
well as potential future treatment planning.
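The pipeline described above can be sketched in code. The following is a minimal, illustrative sketch only, not the authors' implementation: torchvision's DeepLabV3 (ResNet-50 backbone) stands in for the paper's DeepLab v3+, the ShallowClassifier architecture, the 32x32 patch size, and all hyperparameters are assumptions, the networks are untrained placeholders, and the guidewire/shadow detection, lumen segmentation, pixel shifting, and noise-reduction preprocessing steps are omitted.

```python
# Illustrative sketch of the two-stage microvessel pipeline (not the authors' code).
# Assumptions: preprocessed polar IVOCT frames are (r, theta) float32 arrays;
# torchvision's DeepLabV3 stands in for DeepLab v3+; the shallow classifier,
# patch size, and class encoding are placeholders.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy import ndimage
from torchvision.models.segmentation import deeplabv3_resnet50


def augment_polar(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Roll a polar (r, theta) frame along the theta axis so that microvessels
    appear at all possible angles during segmentation-network training."""
    shift = int(rng.integers(0, frame.shape[1]))
    return np.roll(frame, shift, axis=1)


# Stage 1: semantic segmentation of microvessel candidates
# (2 classes: background vs. microvessel candidate). Untrained placeholder.
seg_net = deeplabv3_resnet50(weights=None, num_classes=2).eval()


class ShallowClassifier(nn.Module):
    """Stage 2: shallow CNN that labels each candidate bounding box as
    microvessel or non-microvessel (architecture is an assumption)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x):
        return self.head(self.features(x))


cls_net = ShallowClassifier().eval()


@torch.no_grad()
def detect_microvessels(polar_frame: np.ndarray) -> np.ndarray:
    """Return a binary microvessel mask for one preprocessed polar frame."""
    x = torch.from_numpy(polar_frame).float()[None, None]   # (1, 1, r, theta)
    x3 = x.repeat(1, 3, 1, 1)                                # backbone expects 3 channels
    logits = seg_net(x3)["out"]                              # (1, 2, r, theta)
    candidates = logits.argmax(1)[0].numpy().astype(np.uint8)

    # Split the candidate mask into connected components and classify the
    # bounding box of each component with the shallow CNN; keep only the
    # components classified as microvessel.
    labels, _ = ndimage.label(candidates)
    keep = np.zeros_like(candidates)
    for comp, box in enumerate(ndimage.find_objects(labels), start=1):
        patch = torch.from_numpy(polar_frame[box]).float()[None, None]
        patch = F.interpolate(patch, size=(32, 32), mode="bilinear",
                              align_corners=False)
        if cls_net(patch).argmax(1).item() == 1:
            keep[box][labels[box] == comp] = 1
    return keep
```

Note that the paper's second augmentation, angle rotation of microvessel bounding boxes during classifier training, is separate from the frame-level angular roll illustrated by augment_polar above.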
Related papers
- Deepbet: Fast brain extraction of T1-weighted MRI using Convolutional
Neural Networks [0.40125518029941076]
deepbet is a fast, high-precision brain extraction tool for T1-weighted MRI.
It uses LinkNet, a modern UNet architecture, in a two-stage prediction process.
The model accelerates brain extraction by a factor of 10 compared to current methods.
arXiv Detail & Related papers (2023-08-14T08:39:09Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax
Classification [52.77024349608834]
The study investigates chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable to those of state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- Corneal endothelium assessment in specular microscopy images with Fuchs'
dystrophy via deep regression of signed distance maps [48.498376125522114]
This paper proposes a UNet-based segmentation approach that requires minimal post-processing.
It achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy.
arXiv Detail & Related papers (2022-10-13T15:34:20Z)
- DenseUNets with feedback non-local attention for the segmentation of
specular microscopy images of the corneal endothelium with Fuchs dystrophy [2.4242495790574217]
We propose a new deep learning methodology that includes a novel attention mechanism named feedback non-local attention (fNLA).
Our approach first infers the cell edges, then selects the cells that are well detected, and finally applies a postprocessing method to correct mistakes.
Our approach handled the cells affected by guttae remarkably well, detecting cell edges occluded by small guttae while discarding areas covered by large guttae.
arXiv Detail & Related papers (2022-03-03T17:49:40Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Osteoporosis Prescreening using Panoramic Radiographs through a Deep
Convolutional Neural Network with Attention Mechanism [65.70943212672023]
A deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 subjects aged 49 to 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained for sub-fracture classification on the largest and richest dataset of its kind.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Automated Chest CT Image Segmentation of COVID-19 Lung Infection based
on 3D U-Net [0.0]
The coronavirus disease 2019 (COVID-19) affects billions of lives around the world and has a significant impact on public healthcare.
We propose an innovative automated segmentation pipeline for COVID-19 infected regions.
Our method focuses on on-the-fly generation of unique and random image patches for training by performing several preprocessing methods.
arXiv Detail & Related papers (2020-06-24T17:29:26Z)
- COVIDLite: A depth-wise separable deep neural network with white balance
and CLAHE for detection of COVID-19 [1.1139113832077312]
COVIDLite is a combination of white balance followed by Contrast Limited Adaptive Histogram Equalization (CLAHE) and a depth-wise separable convolutional neural network (DSCNN).
The proposed COVIDLite method resulted in improved performance in comparison to vanilla DSCNN with no pre-processing.
The proposed method achieved higher accuracy of 99.58% for binary classification and 96.43% for multiclass classification, outperforming various state-of-the-art methods.
arXiv Detail & Related papers (2020-06-19T02:30:34Z)
- Coronavirus (COVID-19) Classification using Deep Features Fusion and
Ranking Technique [0.0]
A novel method is proposed that fuses and ranks deep features to detect COVID-19 at an early phase.
The proposed method shows high performance on Subset-2 with 98.27% accuracy, 98.93% sensitivity, 97.60% specificity, 97.63% precision, 98.28% F1-score and 96.54% Matthews Correlation Coefficient (MCC) metrics.
arXiv Detail & Related papers (2020-04-07T20:43:44Z)