Vessel-Promoted OCT to OCTA Image Translation by Heuristic Contextual Constraints
- URL: http://arxiv.org/abs/2303.06807v2
- Date: Wed, 21 Aug 2024 15:25:51 GMT
- Title: Vessel-Promoted OCT to OCTA Image Translation by Heuristic Contextual Constraints
- Authors: Shuhan Li, Dong Zhang, Xiaomeng Li, Chubin Ou, Lin An, Yanwu Xu, Kwang-Ting Cheng
- Abstract summary: We introduce a novel method called TransPro to translate readily available 3D Optical Coherence Tomography (OCT) images into 3D OCTA images.
Our TransPro method is primarily driven by two novel ideas that have been overlooked by prior work.
Experimental results on two datasets demonstrate that our TransPro outperforms state-of-the-art approaches.
- Score: 28.715207556565638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical Coherence Tomography Angiography (OCTA) is a crucial tool in the clinical screening of retinal diseases, allowing for accurate 3D imaging of blood vessels through non-invasive scanning. However, the hardware-based approach for acquiring OCTA images presents challenges due to the need for specialized sensors and expensive devices. In this paper, we introduce a novel method called TransPro, which can translate readily available 3D Optical Coherence Tomography (OCT) images into 3D OCTA images without requiring any additional hardware modifications. Our TransPro method is primarily driven by two novel ideas that have been overlooked by prior work. The first idea is derived from a critical observation that the OCTA projection map is generated by averaging pixel values from its corresponding B-scans along the Z-axis. Hence, we introduce a hybrid architecture incorporating a 3D generative adversarial network and a novel Heuristic Contextual Guidance (HCG) module, which effectively maintains the consistency of the generated OCTA images between 3D volumes and projection maps. The second idea is to improve the vessel quality in the translated OCTA projection maps. To this end, we propose a novel Vessel Promoted Guidance (VPG) module to enhance the network's attention to retinal vessels. Experimental results on two datasets demonstrate that our TransPro outperforms state-of-the-art approaches, with relative improvements of around 11.4% in MAE, 2.7% in PSNR, 2% in SSIM, 40% in VDE, and 9.1% in VDC compared to the baseline method. The code is available at: https://github.com/ustlsh/TransPro.
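The projection-map observation underlying the HCG module is simple to state in code. The sketch below is an illustration only, not the authors' implementation: array names, shapes, and the L1 comparison are assumptions. It averages a volume of B-scans along the depth (Z) axis to obtain the en face projection map, the quantity whose consistency TransPro enforces between the generated 3D volume and its 2D projection.
```python
import numpy as np

def projection_map(volume: np.ndarray) -> np.ndarray:
    """Average a 3D volume along the depth (Z) axis.

    Assumes the depth axis comes first, i.e. `volume` is shaped (Z, Y, X);
    the result is the 2D en face projection map described in the abstract.
    """
    return volume.mean(axis=0)

# Hypothetical usage: compare the projection of a generated OCTA volume
# against the projection of a reference volume (an L1-style consistency term).
generated = np.random.rand(128, 256, 256)   # placeholder generated OCTA volume
reference = np.random.rand(128, 256, 256)   # placeholder reference OCTA volume
consistency_error = np.abs(projection_map(generated) - projection_map(reference)).mean()
print(f"projection-map MAE: {consistency_error:.4f}")
```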
Related papers
- A Novel Coronary Artery Registration Method Based on Super-pixel Particle Swarm Optimization [2.991631700415871]
We propose a novel multimodal coronary artery image registration method based on a swarm optimization algorithm. Our algorithm was evaluated on a pilot dataset of 28 pairs of XRA and CTA images from 10 patients who underwent PCI. (A minimal particle swarm optimization sketch follows this entry.)
arXiv Detail & Related papers (2025-05-30T08:44:46Z)
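Since only the optimizer is named above, the sketch below shows a generic particle swarm optimization loop fitting a rigid 2D transform by minimizing mean squared error between a fixed and a moving image. The cost function, transform model, bounds, and hyperparameters are assumptions for illustration and do not reproduce the paper's super-pixel formulation.
```python
import numpy as np
from scipy.ndimage import rotate, shift

def warp(img, tx, ty, angle):
    """Apply a rigid 2D transform (rotation in degrees, then translation)."""
    return shift(rotate(img, angle, reshape=False, order=1), (ty, tx), order=1)

def mse_cost(params, fixed, moving):
    """Registration cost: mean squared error after warping the moving image."""
    tx, ty, angle = params
    return float(np.mean((fixed - warp(moving, tx, ty, angle)) ** 2))

def pso(cost, fixed, moving, n_particles=20, n_iters=50, bounds=(-10.0, 10.0)):
    """Minimal particle swarm optimizer over (tx, ty, angle)."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(bounds[0], bounds[1], size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p, fixed, moving) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([cost(p, fixed, moving) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Hypothetical usage: recover a known misalignment on a synthetic image pair.
fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
moving = warp(fixed, 3.0, -2.0, 5.0)
print("estimated (tx, ty, angle):", pso(mse_cost, fixed, moving))
```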
- MuTri: Multi-view Tri-alignment for OCT to OCTA 3D Image Translation [8.48045976269756]
We propose a multi-view Tri-alignment framework for OCT to OCTA 3D image translation in discrete and finite space, named MuTri.
We also collect the first large-scale dataset, namely, OCTA2024, which contains a pair of OCT and OCTA volumes from 846 subjects.
arXiv Detail & Related papers (2025-04-02T07:28:09Z)
- OCTCube: A 3D foundation model for optical coherence tomography that improves cross-dataset, cross-disease, cross-device and cross-modality analysis [11.346324975034051]
OCTCube is a 3D foundation model pre-trained on 26,605 3D OCT volumes encompassing 1.62 million 2D OCT images.
It outperforms 2D models when predicting 8 retinal diseases in both inductive and cross-dataset settings.
It also shows superior performance on cross-device prediction and when predicting systemic diseases, such as diabetes and hypertension.
arXiv Detail & Related papers (2024-08-20T22:55:19Z)
- GaSpCT: Gaussian Splatting for Novel CT Projection View Synthesis [0.6990493129893112]
GaSpCT is a novel view synthesis and 3D scene representation method used to generate novel projection views for Computer Tomography (CT) scans.
We adapt the Gaussian Splatting framework to enable novel view synthesis in CT based on limited sets of 2D image projections.
We evaluate the performance of our model using brain CT scans from the Parkinson's Progression Markers Initiative (PPMI) dataset.
arXiv Detail & Related papers (2024-04-04T00:28:50Z)
- Accurate Patient Alignment without Unnecessary Imaging Dose via Synthesizing Patient-specific 3D CT Images from 2D kV Images [10.538839084727975]
Tumor visibility is constrained because the patient's anatomy is projected onto a 2D plane.
In the treatment room, with 3D-OBI such as cone beam CT (CBCT), the field of view (FOV) is limited and the imaging dose is unnecessarily high.
We propose a dual-model framework built with hierarchical ViT blocks to reconstruct 3D CT from kV images acquired at the treatment position.
arXiv Detail & Related papers (2024-04-01T19:55:03Z)
- SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven Generative Adversarial Networks [6.624839896733912]
This paper presents a new self-driven generative adversarial network model (SdCT-GAN) for reconstruction of 3D CT images.
The model is encouraged to pay more attention to image details by introducing a novel autoencoder structure in the discriminator.
The LPIPS evaluation metric is adopted, which quantifies the fine contours and textures of reconstructed images better than existing metrics (a minimal LPIPS usage sketch follows this entry).
arXiv Detail & Related papers (2023-09-10T08:16:02Z)
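LPIPS is available as the `lpips` PyTorch package; the sketch below shows how such a perceptual distance between a reconstructed slice and its reference might be computed. The tensors, shapes, and backbone choice are illustrative assumptions, not details from the paper.
```python
import torch
import lpips  # pip install lpips

# LPIPS compares deep features of two images; a lower score means more similar.
loss_fn = lpips.LPIPS(net='alex')  # 'alex', 'vgg', or 'squeeze' backbones

# Hypothetical reconstructed and reference CT slices, replicated to 3 channels
# and scaled to [-1, 1], as the lpips package expects (N, 3, H, W) inputs.
recon = torch.rand(1, 1, 256, 256).repeat(1, 3, 1, 1) * 2 - 1
target = torch.rand(1, 1, 256, 256).repeat(1, 3, 1, 1) * 2 - 1

with torch.no_grad():
    distance = loss_fn(recon, target)
print("LPIPS distance:", distance.item())
```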
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep-learning-based neural networks that correct axial and coronal motion artifacts in OCT from a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes (a generic graph-convolution sketch follows this entry).
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
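The two-stream design itself is not reproduced here; the sketch below only illustrates the basic operation a single graph-convolution layer performs over mesh cells, with symmetrically normalized adjacency. All shapes, names, and the toy graph are assumptions.
```python
import numpy as np

def gcn_layer(features: np.ndarray, adjacency: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    features  : (N, F) per-node features (e.g. coordinates or normals of mesh cells)
    adjacency : (N, N) binary adjacency matrix of the mesh graph
    weights   : (F, F_out) learnable projection
    """
    a_hat = adjacency + np.eye(adjacency.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # degree normalization
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)                      # ReLU

# Hypothetical usage on a tiny 4-node graph.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
h = rng.random((4, 6))              # 6-dimensional input features per node
w = rng.random((6, 8))              # project to 8-dimensional output features
print(gcn_layer(h, adj, w).shape)   # (4, 8)
```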
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran (a sketch of the cycle-consistency objective follows this entry).
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
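CyTran's transformer generators are not reproduced here; the sketch below only illustrates the cycle-consistency objective that unpaired translation of this kind relies on, with small placeholder convolutional generators standing in for the real models. All module names and tensor shapes are assumptions.
```python
import torch
import torch.nn as nn

# Placeholder generators standing in for CyTran's convolutional transformers:
# g_ab maps non-contrast -> contrast, g_ba maps contrast -> non-contrast.
g_ab = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
g_ba = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
l1 = nn.L1Loss()

def cycle_consistency_loss(non_contrast: torch.Tensor, contrast: torch.Tensor) -> torch.Tensor:
    """L1 penalty for translating a scan to the other domain and back again."""
    forward_cycle = g_ba(g_ab(non_contrast))   # non-contrast -> contrast -> non-contrast
    backward_cycle = g_ab(g_ba(contrast))      # contrast -> non-contrast -> contrast
    return l1(forward_cycle, non_contrast) + l1(backward_cycle, contrast)

# Hypothetical unpaired batch of CT slices.
nc_batch = torch.rand(2, 1, 128, 128)
c_batch = torch.rand(2, 1, 128, 128)
print("cycle-consistency loss:", cycle_consistency_loss(nc_batch, c_batch).item())
```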
- 3D Vessel Reconstruction in OCT-Angiography via Depth Map Estimation [26.489218604637678]
Manual or automatic analysis of blood vessels in 2D OCTA images (en face angiograms) is commonly used in clinical practice.
We introduce a novel 3D vessel reconstruction framework based on the estimation of vessel depth maps from OCTA images (a sketch of the depth-to-3D lifting step follows this entry).
arXiv Detail & Related papers (2021-02-26T16:53:39Z)
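The depth-estimation network itself is not shown; the sketch below only illustrates the final lifting step of such a pipeline: combining a 2D en face vessel mask with a predicted depth map to obtain 3D vessel coordinates. Names, shapes, and the voxel scaling are assumptions.
```python
import numpy as np

def lift_vessels_to_3d(vessel_mask: np.ndarray, depth_map: np.ndarray, voxel_depth: float = 1.0) -> np.ndarray:
    """Turn a 2D en face vessel mask plus a per-pixel depth map into 3D points.

    vessel_mask : (H, W) binary mask of vessel pixels in the en face angiogram
    depth_map   : (H, W) predicted depth (in voxel units) for each pixel
    Returns an (N, 3) array of (x, y, z) vessel coordinates.
    """
    ys, xs = np.nonzero(vessel_mask)
    zs = depth_map[ys, xs] * voxel_depth
    return np.stack([xs, ys, zs], axis=1)

# Hypothetical usage with a synthetic mask and depth prediction.
mask = np.zeros((64, 64), dtype=bool)
mask[32, 10:50] = True                  # a horizontal "vessel"
depth = np.full((64, 64), 20.0)         # constant predicted depth
points = lift_vessels_to_3d(mask, depth)
print(points.shape)                     # (40, 3)
```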
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme uses deep Q-learning as a pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.