Deep-Learning-based Vasculature Extraction for Single-Scan Optical
Coherence Tomography Angiography
- URL: http://arxiv.org/abs/2304.08282v3
- Date: Wed, 3 May 2023 13:37:54 GMT
- Title: Deep-Learning-based Vasculature Extraction for Single-Scan Optical
Coherence Tomography Angiography
- Authors: Jinpeng Liao, Tianyu Zhang, Yilong Zhang, Chunhui Li, Zhihong Huang
- Abstract summary: We propose a vasculature extraction pipeline that uses only one-repeated OCT scan to generate OCTA images.
The pipeline is based on the proposed Vasculature Extraction Transformer (VET), which leverages convolutional projection to better learn the spatial relationships between image patches.
- Score: 9.77526300425824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical coherence tomography angiography (OCTA) is a non-invasive imaging
modality that extends the functionality of OCT by extracting moving red blood
cell signals from surrounding static biological tissues. OCTA has emerged as a
valuable tool for analyzing skin microvasculature, enabling more accurate
diagnosis and treatment monitoring. Most existing OCTA extraction algorithms,
such as speckle variance (SV)- and eigen-decomposition (ED)-OCTA, acquire a
larger number of repeated (NR) OCT scans at the same position to produce
high-quality angiography images. However, a higher NR requires a longer data
acquisition time, leading to more unpredictable motion artifacts. In this
study, we propose a vasculature extraction pipeline that uses only one-repeated
OCT scan to generate OCTA images. The pipeline is based on the proposed
Vasculature Extraction Transformer (VET), which leverages convolutional
projection to better learn the spatial relationships between image patches. In
comparison to OCTA images obtained via the SV-OCTA (PSNR: 17.809) and ED-OCTA
(PSNR: 18.049) using four-repeated OCT scans, OCTA images extracted by VET
exhibit moderate quality (PSNR: 17.515) and higher image contrast while
reducing the required data acquisition time from ~8 s to ~2 s. Based on visual
observations, the proposed VET outperforms SV and ED algorithms when using neck
and face OCTA data in areas that are challenging to scan. This study
demonstrates that the VET can extract vasculature images from a fast
one-repeated OCT scan, facilitating accurate diagnosis for patients.
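For orientation, the sketch below is illustrative only (it is not the authors' code; all names and tensor shapes are assumptions). It shows the two ideas the abstract contrasts: the speckle-variance baseline, which needs NR repeated B-scans at each position so that per-pixel temporal variance highlights moving blood cells, and a generic convolutional projection, in which query/key/value tokens are produced by a depthwise convolution over the 2-D patch grid (as in CvT-style transformers) so that neighbouring patches inform each projection.

```python
# Illustrative sketch under assumed shapes and names, not the paper's implementation.
import numpy as np
import torch
import torch.nn as nn

def sv_octa(bscans: np.ndarray) -> np.ndarray:
    """Speckle-variance OCTA: per-pixel variance over NR repeated B-scans.

    bscans: (NR, depth, width) OCT intensity B-scans from the same position.
    A higher NR gives a cleaner angiogram but multiplies acquisition time,
    which is the trade-off the proposed pipeline removes by using NR = 1.
    """
    return np.var(bscans, axis=0)

class ConvProjection(nn.Module):
    """Generic convolutional projection (CvT-style): tokens are reshaped to
    their 2-D patch grid and projected with a depthwise convolution, so each
    query/key/value also sees its spatial neighbours."""

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size,
                      padding=kernel_size // 2, groups=dim),
            nn.BatchNorm2d(dim),
        )

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        b, n, c = tokens.shape                   # n == h * w patch tokens
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        x = self.proj(x)                         # depthwise conv over the patch grid
        return x.flatten(2).transpose(1, 2)      # back to (b, n, c)

# Example: a four-repeated scan (NR = 4) as used for the SV/ED baselines,
# and a projection over a 32 x 32 grid of 64-dimensional patch tokens.
angiogram = sv_octa(np.random.rand(4, 512, 512))                  # placeholder data
projected = ConvProjection(dim=64)(torch.randn(1, 32 * 32, 64), h=32, w=32)
```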
Related papers
- Quantitative Characterization of Retinal Features in Translated OCTA [0.6664270117164767]
This study explores the feasibility of using generative machine learning (ML) to translate Optical Coherence Tomography (OCT) images into Optical Coherence Tomography Angiography (OCTA) images.
arXiv Detail & Related papers (2024-04-24T18:40:45Z)
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce a novel self-supervised CT image denoising method called WIA-LD2ND, using only NDCT data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I with sparse views degrades, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Feature-oriented Deep Learning Framework for Pulmonary Cone-beam CT (CBCT) Enhancement with Multi-task Customized Perceptual Loss [9.59233136691378]
Cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy.
Recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts.
We propose a novel feature-oriented deep learning framework that translates low-quality CBCT images into high-quality CT-like imaging.
arXiv Detail & Related papers (2023-11-01T10:09:01Z)
- Robust Implementation of Foreground Extraction and Vessel Segmentation for X-ray Coronary Angiography Image Sequence [4.653742319057035]
The extraction of contrast-filled vessels from X-ray coronary angiography (XCA) image sequences has important clinical significance.
We propose a novel method for vessel layer extraction based on tensor robust principal component analysis (TRPCA).
For vessel images with uneven contrast distribution, a two-stage region growth (TSRG) method is utilized for vessel enhancement and segmentation.
arXiv Detail & Related papers (2022-09-15T12:07:09Z)
- Multi-scale reconstruction of undersampled spectral-spatial OCT data for coronary imaging using deep learning [1.8359410255568984]
Intravascular optical coherence tomography (IVOCT) has been considered an optimal imaging system for the diagnosis and treatment of coronary artery disease (CAD).
There is a trade-off between high spatial resolution and fast scanning rate for coronary imaging.
We propose a viable spectral-spatial acquisition method that down-scales the sampling process in both the spectral and spatial domains.
arXiv Detail & Related papers (2022-04-25T16:37:25Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Blood vessel segmentation in en-face OCTA images: a frequency based method [3.6055028453181013]
We present a novel method for vessel identification based on frequency representations of the image.
The algorithm is evaluated on an OCTA image data set from 10 eyes acquired by a Cirrus HD-OCT device.
arXiv Detail & Related papers (2021-09-13T16:42:58Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)