Segmentation of Anatomical Layers and Artifacts in Intravascular
Polarization Sensitive Optical Coherence Tomography Using Attending Physician
and Boundary Cardinality Loss Terms
- URL: http://arxiv.org/abs/2105.05137v1
- Date: Tue, 11 May 2021 15:52:31 GMT
- Title: Segmentation of Anatomical Layers and Artifacts in Intravascular
Polarization Sensitive Optical Coherence Tomography Using Attending Physician
and Boundary Cardinality Loss Terms
- Authors: Mohammad Haft-Javaherian, Martin Villiger, Kenichiro Otsuka, Joost
Daemen, Peter Libby, Polina Golland, and Brett E. Bouma
- Abstract summary: Intravascular ultrasound and optical coherence tomography are widely available for characterizing coronary stenoses.
We propose a convolutional neural network model and optimize its performance using a new multi-term loss function.
Our model segments two classes of major artifacts and detects the anatomical layers within the thickened vessel wall regions.
- Score: 4.93836246080317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cardiovascular diseases are the leading cause of death and require a spectrum
of diagnostic procedures as well as invasive interventions. Medical imaging is
a vital part of the healthcare system, facilitating both diagnosis and guidance
for intervention. Intravascular ultrasound and optical coherence tomography are
widely available for characterizing coronary stenoses and provide critical
vessel parameters to optimize percutaneous intervention. Intravascular
polarization-sensitive optical coherence tomography (PS-OCT) can simultaneously
provide high-resolution cross-sectional images of vascular structures while
also revealing preponderant tissue components such as collagen and smooth
muscle and thereby enhance plaque characterization. Automated interpretation of
these features would facilitate the objective clinical investigation of the
natural history and significance of coronary atheromas. Here, we propose a
convolutional neural network model and optimize its performance using a new
multi-term loss function to classify the lumen, intima, and media layers in
addition to the guidewire and plaque artifacts. Our multi-class classification
model outperforms the state-of-the-art methods in detecting the anatomical
layers based on accuracy, Dice coefficient, and average boundary error.
Furthermore, the proposed model segments two classes of major artifacts and
detects the anatomical layers within the thickened vessel wall regions, which
were excluded from analysis by other studies. The source code and the trained
model are publicly available at https://github.com/mhaft/OCTseg .
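To make the loss design described above concrete, the following is a minimal PyTorch sketch of a multi-term segmentation loss in the spirit of the abstract: a weighted sum of cross-entropy, soft Dice, and a simplified boundary term. The class list, the loss weights, and the exact form of the boundary term are illustrative assumptions and are not taken from the authors' released OCTseg code.

```python
# Minimal sketch of a multi-term segmentation loss (cross-entropy + soft Dice +
# a simplified boundary term). Assumed, illustrative details: the label set and
# the boundary-cardinality stand-in below; this is not the authors' implementation.
import torch
import torch.nn.functional as F

CLASSES = ["background", "lumen", "intima", "media", "guidewire", "plaque"]  # assumed label set


def soft_dice_loss(probs, target_onehot, eps=1e-6):
    """1 minus the mean per-class soft Dice coefficient over the batch."""
    dims = (0, 2, 3)  # sum over batch and spatial dimensions
    intersection = (probs * target_onehot).sum(dims)
    cardinality = probs.sum(dims) + target_onehot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()


def boundary_count_loss(probs, target_onehot):
    """Penalize mismatch between the (soft) number of class transitions along each
    image column in the prediction and in the reference -- a simplified stand-in
    for a boundary-cardinality term, not the authors' formulation."""
    pred_edges = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().sum(dim=2)
    true_edges = (target_onehot[:, :, 1:, :] - target_onehot[:, :, :-1, :]).abs().sum(dim=2)
    return F.l1_loss(pred_edges, true_edges)


def multi_term_loss(logits, target, w_ce=1.0, w_dice=1.0, w_bnd=0.1):
    """logits: (B, C, H, W) raw network outputs; target: (B, H, W) integer labels."""
    probs = torch.softmax(logits, dim=1)
    target_onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    return (w_ce * F.cross_entropy(logits, target)
            + w_dice * soft_dice_loss(probs, target_onehot)
            + w_bnd * boundary_count_loss(probs, target_onehot))


if __name__ == "__main__":
    logits = torch.randn(2, len(CLASSES), 64, 64, requires_grad=True)
    labels = torch.randint(0, len(CLASSES), (2, 64, 64))
    loss = multi_term_loss(logits, labels)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

The relative weights of the three terms are hyperparameters that would normally be tuned on a validation set.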
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Spatiotemporal Disentanglement of Arteriovenous Malformations in Digital
Subtraction Angiography [37.44819725897024]
The presented method aims to enhance Digital Subtraction Angiography (DSA) image series by highlighting critical information via automatic classification of vessels.
The method was tested on clinical DSA images series and demonstrated efficient differentiation between arteries and veins.
arXiv Detail & Related papers (2024-02-15T00:29:53Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - Morphology-based non-rigid registration of coronary computed tomography and intravascular images through virtual catheter path optimization [0.2631367460046713]
We present a morphology-based framework for the rigid and non-rigid matching of intravascular images to coronary computed tomography angiography (CCTA) images.
Our framework reduces the manual effort required to conduct large-scale multi-modal clinical studies.
arXiv Detail & Related papers (2022-12-30T21:48:32Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID)
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns; and in the inference, it can identify anomalies (unseen/modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z) - Automatic Segmentation of the Optic Nerve Head Region in Optical
Coherence Tomography: A Methodological Review [4.777796444711511]
The optic nerve head (ONH) represents the intraocular section of the optic nerve.
The advent of optical coherence tomography has enabled the evaluation of novel optic nerve head parameters.
Deep learning-based algorithms provide the highest accuracy, sensitivity and specificity for segmenting the different structures of the ONH.
arXiv Detail & Related papers (2021-09-06T09:45:57Z) - Automated Detection of Coronary Artery Stenosis in X-ray Angiography
using Deep Neural Networks [0.0]
We propose a two-step deep-learning framework to partially automate the detection of stenosis from X-ray coronary angiography images.
We achieved a 0.97 accuracy on the task of classifying the Left/Right Coronary Artery angle view and 0.68/0.73 recall on the determination of the regions of interest, for LCA and RCA, respectively.
arXiv Detail & Related papers (2021-03-04T11:45:54Z) - Assignment Flow for Order-Constrained OCT Segmentation [0.0]
The identification of retinal layer thicknesses is an essential task that must be done for each patient separately.
The elaboration of automated segmentation models has become an important task in the field of medical image processing.
We propose a novel, purely data-driven geometric approach to order-constrained 3D OCT retinal cell layer segmentation.
arXiv Detail & Related papers (2020-09-10T01:57:53Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo
Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.