Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging
- URL: http://arxiv.org/abs/2305.18361v1
- Date: Sat, 27 May 2023 03:55:19 GMT
- Title: Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging
- Authors: Yiqian Wang, Alexandra Warter, Melina Cavichini, Varsha Alex, Dirk-Uwe
G. Bartsch, William R. Freeman, Truong Q. Nguyen, Cheolhong An
- Abstract summary: We propose deep-learning-based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
Experimental results show that the proposed method effectively corrects motion artifacts and achieves smaller errors than other methods.
- Score: 65.47834983591957
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Optical Coherence Tomography (OCT) is one of the most important retinal
imaging techniques. However, involuntary motion artifacts still pose a major
challenge in OCT imaging that compromises the quality of downstream analysis,
such as retinal layer segmentation and OCT Angiography. We propose
deep-learning-based neural networks to correct axial and coronal motion artifacts in
OCT based on a single volumetric scan. The proposed method consists of two
fully-convolutional neural networks that predict Z and X dimensional
displacement maps sequentially in two stages. Experimental results show
that the proposed method effectively corrects motion artifacts and achieves
smaller errors than other methods. Specifically, the method can recover the
overall curvature of the retina and generalizes well to various diseases and
resolutions.
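
As a concrete illustration of the two-stage design described in the abstract, the sketch below chains two small fully-convolutional networks that predict a Z (axial) and then an X (coronal) displacement map per A-scan. The network architecture, the `TwoStageMotionCorrector`/`shift_axial` names, and the nearest-integer resampling are assumptions made for readability, not the authors' implementation.

```python
# Minimal sketch of a two-stage motion-correction pipeline: stage 1 predicts a
# Z (axial) displacement map from the volume, stage 2 predicts an X (coronal)
# displacement map from the axially corrected volume. Network names and the
# resampling step are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn


def small_fcn(in_ch: int, out_ch: int) -> nn.Sequential:
    """A tiny fully-convolutional network operating on the en-face (X, Y) grid."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )


class TwoStageMotionCorrector(nn.Module):
    def __init__(self, depth: int):
        super().__init__()
        # Each A-scan is treated as a feature vector of length `depth`, so the
        # networks see a (B, depth, X, Y) tensor and output one shift per A-scan.
        self.axial_net = small_fcn(depth, 1)    # stage 1: Z displacement map
        self.coronal_net = small_fcn(depth, 1)  # stage 2: X displacement map

    def forward(self, volume: torch.Tensor):
        # volume: (B, Z, X, Y) OCT intensity volume
        dz = self.axial_net(volume)             # (B, 1, X, Y) axial shifts
        corrected = shift_axial(volume, dz)     # apply axial correction first
        dx = self.coronal_net(corrected)        # (B, 1, X, Y) coronal shifts
        return dz, dx


def shift_axial(volume: torch.Tensor, dz: torch.Tensor) -> torch.Tensor:
    """Shift every A-scan along Z by its predicted displacement (nearest-integer
    gather, a simplification of the interpolation a real implementation would use)."""
    b, z, x, y = volume.shape
    idx = torch.arange(z, device=volume.device).view(1, z, 1, 1)
    src = (idx + dz.round().long()).clamp(0, z - 1).expand(b, z, x, y)
    return torch.gather(volume, 1, src)


if __name__ == "__main__":
    vol = torch.rand(1, 64, 32, 32)             # toy (B, Z, X, Y) volume
    dz, dx = TwoStageMotionCorrector(depth=64)(vol)
    print(dz.shape, dx.shape)                   # torch.Size([1, 1, 32, 32]) twice
```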
Related papers
- TomoGRAF: A Robust and Generalizable Reconstruction Network for Single-View Computed Tomography [3.1209855614927275]
Traditional analytical/iterative CT reconstruction algorithms require hundreds of angular data samplings.
We develop a novel TomoGRAF framework incorporating the unique X-ray transportation physics to reconstruct high-quality 3D volumes.
arXiv Detail & Related papers (2024-11-12T20:07:59Z)
- Explicit Differentiable Slicing and Global Deformation for Cardiac Mesh Reconstruction [8.730291904586656]
Mesh reconstruction of the cardiac anatomy from medical images is useful for shape and motion measurements and biophysics simulations.
Traditional voxel-based approaches rely on pre- and post-processing that compromises image fidelity.
We propose a novel explicit differentiable voxelization and slicing (DVS) algorithm that allows gradient backpropagation to a mesh from its slices.
arXiv Detail & Related papers (2024-09-03T17:19:31Z)
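
To make the gradient-through-slices idea in the entry above tangible, the toy snippet below soft-voxelizes a set of mesh vertices with Gaussian blobs, extracts a single slice, and backpropagates a slice-level loss to the vertex coordinates. It is only a minimal stand-in under those assumptions; the paper's DVS algorithm rasterizes mesh faces rather than splatting vertices.

```python
# Toy differentiable "voxelize then slice" example: a Gaussian soft-voxelization of
# mesh vertices lets a loss computed on one 2D slice send gradients back to the 3D
# vertex positions. Illustrative only; not the DVS paper's rasterization.
import torch

grid_size, sigma = 24, 1.5
verts = (torch.rand(200, 3) * grid_size).requires_grad_()   # hypothetical mesh vertices

# Voxel-centre coordinates, shape (G, G, G, 3).
ax = torch.arange(grid_size, dtype=torch.float32)
zz, yy, xx = torch.meshgrid(ax, ax, ax, indexing="ij")
centres = torch.stack([zz, yy, xx], dim=-1)

# Soft occupancy: every vertex deposits a Gaussian blob, so the volume is
# differentiable with respect to the vertex coordinates.
diff = centres.view(-1, 1, 3) - verts.view(1, -1, 3)         # (G^3, N, 3)
volume = torch.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2)).sum(-1)
volume = volume.view(grid_size, grid_size, grid_size)

# A loss defined on a single extracted slice still reaches the 3D vertices.
target = torch.rand(grid_size, grid_size)                    # e.g. a segmentation slice
loss = ((volume[grid_size // 2] - target) ** 2).mean()
loss.backward()
print(verts.grad.shape)                                      # torch.Size([200, 3])
```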
- OCTCube: A 3D foundation model for optical coherence tomography that improves cross-dataset, cross-disease, cross-device and cross-modality analysis [11.346324975034051]
OCTCube is a 3D foundation model pre-trained on 26,605 3D OCT volumes encompassing 1.62 million 2D OCT images.
It outperforms 2D models when predicting 8 retinal diseases in both inductive and cross-dataset settings.
It also shows superior performance on cross-device prediction and when predicting systemic diseases, such as diabetes and hypertension.
arXiv Detail & Related papers (2024-08-20T22:55:19Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for low-dose CT (LDCT), in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
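
To ground the Noise2Inverse idea that RAN2I builds on, the sketch below constructs the self-supervised training pair: projection angles are split into two disjoint subsets, each sub-sinogram is reconstructed with filtered back-projection, and a CNN learns to map one noisy reconstruction to the other, so no clean ground truth is needed. The `rot90` step is only a stand-in for RAN2I's rotational augmentation, and the tiny CNN is illustrative; the FBP calls use scikit-image.

```python
# Self-supervised Noise2Inverse-style pair construction for low-dose CT: split the
# projection angles into two halves, reconstruct each half with FBP, and train a CNN
# to predict one half's reconstruction from the other. The rot90 augmentation is a
# simplified stand-in for RAN2I's rotational augmentation.
import numpy as np
import torch
import torch.nn as nn
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

phantom = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 90, endpoint=False)           # sparse-view acquisition
sino = radon(phantom, theta=angles)
sino += np.random.normal(scale=0.5, size=sino.shape)            # simulated low-dose noise

# Split the views into two disjoint subsets and reconstruct each one independently.
even, odd = angles[0::2], angles[1::2]
recon_even = iradon(sino[:, 0::2], theta=even, filter_name="ramp")
recon_odd = iradon(sino[:, 1::2], theta=odd, filter_name="ramp")

def to_tensor(img):
    return torch.from_numpy(img.astype(np.float32))[None, None]

inp, tgt = to_tensor(recon_even), to_tensor(recon_odd)
k = np.random.randint(4)                                        # crude rotational augmentation
inp, tgt = torch.rot90(inp, k, (2, 3)), torch.rot90(tgt, k, (2, 3))

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(cnn(inp), tgt)                    # no clean image involved
loss.backward()
opt.step()
print(float(loss))
```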
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis task to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
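
The phrase "sharpness loss regularized GAN" suggests an extra generator penalty that discourages blurry synthetic tissue. The sketch below adds a Laplacian-based high-frequency matching term to a standard adversarial loss; the Laplacian formulation and the `lambda_sharp` weight are assumptions for illustration and may differ from the loss actually used in Sharp-GAN.

```python
# Illustrative generator objective with an added "sharpness" term: the Laplacian
# response of the generated image is pushed toward that of a real reference, which
# penalizes overly smooth (blurry) outputs. The Laplacian form and weighting are
# assumptions; Sharp-GAN's exact sharpness loss may be defined differently.
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def laplacian(img: torch.Tensor) -> torch.Tensor:
    """High-frequency response of a single-channel image batch (B, 1, H, W)."""
    return F.conv2d(img, LAPLACIAN.to(img), padding=1)

def generator_loss(d_fake: torch.Tensor, fake: torch.Tensor, real: torch.Tensor,
                   lambda_sharp: float = 10.0) -> torch.Tensor:
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    sharp = F.l1_loss(laplacian(fake), laplacian(real))         # match edge energy
    return adv + lambda_sharp * sharp

if __name__ == "__main__":
    fake = torch.rand(2, 1, 64, 64, requires_grad=True)
    real = torch.rand(2, 1, 64, 64)
    d_fake = torch.randn(2, 1)                                  # stand-in discriminator logits
    print(float(generator_loss(d_fake, fake, real)))
```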
- DAN-Net: Dual-Domain Adaptive-Scaling Non-local Network for CT Metal Artifact Reduction [15.225899631788973]
Metal implants can heavily attenuate X-rays in computed tomography (CT) scans, leading to severe artifacts in reconstructed images.
Several network models have been proposed for metal artifact reduction (MAR) in CT.
We present a novel Dual-domain Adaptive-scaling Non-local network (DAN-Net) for MAR.
arXiv Detail & Related papers (2021-02-16T08:09:16Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
- Deep OCT Angiography Image Generation for Motion Artifact Suppression [8.442020709975015]
Affected scans emerge as high intensity (white) or missing (black) regions, resulting in lost information.
A deep generative model for OCT-to-OCTA image translation relies only on a single intact OCT scan.
A U-Net is trained to extract the angiographic information from OCT patches.
At inference, a detection algorithm finds outlier OCTA scans based on their surroundings, which are then replaced by the output of the trained network.
arXiv Detail & Related papers (2020-01-08T13:31:51Z)
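
The last entry describes a detect-and-replace pipeline: a U-Net learns to produce angiographic content from plain OCT patches, and at inference motion-corrupted OCTA B-scans are identified from their appearance relative to neighbouring scans and swapped for the network's prediction. The sketch below follows that description with a hypothetical z-score intensity test and a placeholder network; both are assumptions rather than the paper's detector or architecture.

```python
# Sketch of the inference-time repair loop described above: flag OCTA B-scans whose
# mean intensity deviates strongly from the rest of the stack (motion artifacts show
# up as very bright or missing scans) and replace them with a network prediction
# computed from the corresponding OCT scan. The z-score test and the tiny stand-in
# "U-Net" are illustrative assumptions.
import torch
import torch.nn as nn

unet = nn.Sequential(                      # placeholder for a trained OCT -> OCTA U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def repair_octa(oct_vol: torch.Tensor, octa_vol: torch.Tensor, z_thresh: float = 3.0):
    """oct_vol, octa_vol: (N, H, W) stacks of B-scans from the same volume."""
    means = octa_vol.mean(dim=(1, 2))
    z = (means - means.mean()) / (means.std() + 1e-8)
    outliers = torch.nonzero(z.abs() > z_thresh).flatten()
    repaired = octa_vol.clone()
    with torch.no_grad():
        for i in outliers:                 # regenerate only the corrupted scans
            repaired[i] = unet(oct_vol[i][None, None])[0, 0]
    return repaired, outliers

if __name__ == "__main__":
    oct_vol, octa_vol = torch.rand(50, 64, 64), torch.rand(50, 64, 64)
    octa_vol[10] = 1.0                     # simulate a saturated (white) motion artifact
    fixed, idx = repair_octa(oct_vol, octa_vol)
    print(idx)
```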
This list is automatically generated from the titles and abstracts of the papers on this site.