A Spatiotemporal Illumination Model for 3D Image Fusion in Optical
Coherence Tomography
- URL: http://arxiv.org/abs/2402.12114v1
- Date: Mon, 19 Feb 2024 13:08:31 GMT
- Title: A Spatiotemporal Illumination Model for 3D Image Fusion in Optical
Coherence Tomography
- Authors: Stefan Ploner, Jungeun Won, Julia Schottenhamml, Jessica Girgis,
Kenneth Lam, Nadia Waheed, James Fujimoto, Andreas Maier
- Abstract summary: Optical coherence tomography (OCT) is a non-invasive, micrometer-scale imaging modality in ophthalmology.
We present a novel parametrization that exploits continuity in sequentially-scanned volume data.
Evaluation in 68 volumes from eyes with pathology showed reduction of illumination artifacts in 88% of the data, with only 6% showing moderate residual illumination artifacts.
- Score: 2.8532140618225337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical coherence tomography (OCT) is a non-invasive, micrometer-scale
imaging modality that has become a clinical standard in ophthalmology. By
raster-scanning the retina, sequential cross-sectional image slices are
acquired to generate volumetric data. In-vivo imaging suffers from
discontinuities between slices that show up as motion and illumination
artifacts. We present a new illumination model that exploits continuity in
orthogonally raster-scanned volume data. Our novel spatiotemporal
parametrization adheres to illumination continuity both temporally, along the
imaged slices, as well as spatially, in the transverse directions. Yet, our
formulation does not make inter-slice assumptions, which could have
discontinuities. This is the first optimization of a 3D inverse model in an
image reconstruction context in OCT. Evaluation in 68 volumes from eyes with
pathology showed reduction of illumination artifacts in 88% of the data, and
only 6% showed moderate residual illumination artifacts. The method enables
the use of forward-warped motion corrected data, which is more accurate, and
enables supersampling and advanced 3D image reconstruction in OCT.
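The abstract describes the model only at a high level; the following is a minimal sketch of the general idea, assuming two already motion-corrected, co-registered orthogonal volumes, a multiplicative per-A-scan gain, and a quadratic smoothness penalty in both transverse directions. The log-domain least-squares objective, the plain gradient-descent solver, and all array shapes are illustrative assumptions, not the paper's parametrization.

```python
# Illustrative sketch only (not the paper's method): estimate smooth
# multiplicative illumination gains for two co-registered, orthogonally
# raster-scanned OCT volumes and fuse the corrected volumes.
# Assumes strictly positive intensities and shape (slow, fast, depth).
import numpy as np

def estimate_gains(vol_a, vol_b, iters=200, lr=0.1, smooth=1.0):
    """Log-domain least squares: vol_a * g_a ~= vol_b * g_b per A-scan,
    with a smoothness penalty along both transverse axes."""
    la = np.log(vol_a).mean(axis=2)   # mean log intensity per A-scan, (slow, fast)
    lb = np.log(vol_b).mean(axis=2)
    ga = np.zeros_like(la)            # log-gain fields to be estimated
    gb = np.zeros_like(lb)
    for _ in range(iters):
        r = (la + ga) - (lb + gb)     # per-A-scan brightness mismatch
        d_ga, d_gb = r.copy(), -r     # data-term gradients
        for g, d in ((ga, d_ga), (gb, d_gb)):
            lap = (-4.0 * g           # discrete Laplacian (periodic via np.roll)
                   + np.roll(g, 1, 0) + np.roll(g, -1, 0)
                   + np.roll(g, 1, 1) + np.roll(g, -1, 1))
            d -= smooth * lap         # add gradient of the smoothness penalty
        ga -= lr * d_ga
        gb -= lr * d_gb
    return np.exp(ga), np.exp(gb)

def fuse(vol_a, vol_b):
    """Correct each volume with its gain field, then average as a stand-in
    for the more advanced fusion/supersampling mentioned in the abstract."""
    ga, gb = estimate_gains(vol_a, vol_b)
    return 0.5 * (vol_a * ga[..., None] + vol_b * gb[..., None])
```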
Related papers
- FCDM: Sparse-view Sinogram Inpainting with Frequency Domain Convolution Enhanced Diffusion Models [14.043383277622874]
We introduce a novel diffusion-based inpainting framework tailored for sinogram data.
FCDM significantly outperforms existing methods, achieving SSIM over 0.95 and PSNR above 30 dB, with improvements of up to 33% in SSIM and 29% in PSNR compared to baselines.
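For reference, the two reported metrics can be computed with scikit-image; this generic snippet is not FCDM's evaluation code, and the image scaling (`data_range=1.0`) is an assumption.

```python
# Generic PSNR/SSIM computation, unrelated to FCDM's own pipeline.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256)                      # placeholder sinogram
reconstruction = reference + 0.01 * np.random.randn(256, 256)

psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
ssim = structural_similarity(reference, reconstruction, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```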
arXiv Detail & Related papers (2024-08-26T12:31:38Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
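The stated SDS/L2 equivalence can be made concrete with a short sketch: the SDS gradient w(t)(eps_hat - eps) matches, up to a timestep-dependent scale, the gradient of an L2 loss pulling the rendering toward its one-step denoised estimate. The `denoiser` callable and the DDPM schedule below are hypothetical placeholders, not StableDreamer's implementation.

```python
# Hedged sketch of the SDS <-> L2 relationship; `denoiser(x_t, t, prompt)`
# is a hypothetical noise predictor, `alphas_cumprod` a DDPM schedule tensor.
import torch

def sds_and_l2_grads(x, denoiser, prompt, alphas_cumprod, t):
    a = alphas_cumprod[t]
    eps = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * eps                  # diffuse the rendering
    with torch.no_grad():
        eps_hat = denoiser(x_t, t, prompt)                     # predicted noise
        x0_hat = (x_t - (1 - a).sqrt() * eps_hat) / a.sqrt()   # one-step denoised image
    sds_grad = eps_hat - eps                                   # SDS gradient (weighting omitted)
    l2_grad = x - x0_hat                                       # grad of 0.5*||x - sg(x0_hat)||^2
    # l2_grad == ((1 - a) / a).sqrt() * sds_grad: same direction, rescaled per timestep.
    return sds_grad, l2_grad
```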
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - Gradient Descent Provably Solves Nonlinear Tomographic Reconstruction [60.95625458395291]
In computed tomography (CT) the forward model consists of a linear transform followed by an exponential nonlinearity based on the attenuation of light according to the Beer-Lambert Law.
We show that this approach reduces metal artifacts compared to a commercial reconstruction of a human skull with metal crowns.
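A minimal sketch of that forward model and of the gradient a descent scheme would use (the toy system matrix, step size, and initialization are illustrative assumptions, not the paper's setup):

```python
# Nonlinear CT forward model: linear projection followed by the Beer-Lambert
# exponential, with the gradient of a squared loss for gradient descent.
import numpy as np

def forward(A, x, I0=1.0):
    """Expected detector intensities for attenuation image x (flattened)."""
    return I0 * np.exp(-A @ x)

def grad(A, x, y, I0=1.0):
    """Gradient of 0.5 * ||forward(A, x) - y||^2 with respect to x."""
    f = forward(A, x, I0)
    return -A.T @ (f * (f - y))       # chain rule through the exponential

# Toy usage: recover a 2-pixel attenuation "image" from 3 measurements.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = forward(A, np.array([0.3, 0.7]))
x = np.zeros(2)
for _ in range(2000):
    x -= 0.5 * grad(A, x, y)          # plain gradient descent
print(x)                              # converges to approximately [0.3, 0.7]
```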
arXiv Detail & Related papers (2023-10-06T00:47:57Z) - SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven
Generative Adversarial Networks [6.624839896733912]
This paper presents a new self-driven generative adversarial network model (SdCT-GAN) for reconstruction of 3D CT images.
The model is encouraged to pay more attention to image details by introducing a novel autoencoder structure in the discriminator.
The LPIPS evaluation metric is adopted, which quantitatively evaluates the fine contours and textures of reconstructed images better than existing metrics.
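LPIPS itself is available as an off-the-shelf package; a generic usage example (not SdCT-GAN's evaluation code), assuming RGB tensors scaled to [-1, 1]:

```python
# Generic LPIPS usage via the `lpips` package (not the paper's pipeline).
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')              # AlexNet backbone; 'vgg' also available
img0 = torch.rand(1, 3, 256, 256) * 2 - 1      # placeholder images in [-1, 1], NCHW
img1 = torch.rand(1, 3, 256, 256) * 2 - 1
distance = loss_fn(img0, img1)                 # lower = perceptually more similar
print(distance.item())
```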
arXiv Detail & Related papers (2023-09-10T08:16:02Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging [65.47834983591957]
We propose deep-learning-based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
Experimental results show that the proposed method effectively corrects motion artifacts and achieves smaller errors than other methods.
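For orientation, a simple classical baseline (not the proposed network) illustrates what axial correction does: consecutive B-scans are re-aligned along depth using cross-correlation of their mean depth profiles.

```python
# Classical axial alignment baseline, for illustration only.
import numpy as np

def axial_align(volume):
    """volume: (n_bscans, depth, width). Returns an axially re-aligned copy.
    Uses circular shifts (np.roll), so border wrap-around is ignored here."""
    profiles = volume.mean(axis=2)              # mean depth profile per B-scan
    aligned = volume.copy()
    shift = 0
    for i in range(1, volume.shape[0]):
        a, b = profiles[i - 1], profiles[i]
        xcorr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
        lag = xcorr.argmax() - (len(b) - 1)     # axial offset of slice i vs. i-1
        shift += lag                            # accumulate relative shifts
        aligned[i] = np.roll(volume[i], shift, axis=0)
    return aligned
```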
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - A Spatiotemporal Model for Precise and Efficient Fully-automatic 3D
Motion Correction in OCT [10.550562752812894]
OCT instruments image by raster-scanning a focused light spot across the retina, acquiring sequential cross-sectional images to generate volumetric data.
Patient eye motion during acquisition poses unique challenges: non-rigid, discontinuous distortions occur, leading to gaps in the data.
We present a new distortion model and a corresponding fully-automatic, reference-free optimization strategy for computational robustness.
arXiv Detail & Related papers (2022-09-15T11:48:53Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for
Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Zero-Shot Learning of Continuous 3D Refractive Index Maps from Discrete
Intensity-Only Measurements [5.425568744312016]
We present DeCAF as the first neural-field (NF) based intensity diffraction tomography (IDT) method that can learn a high-quality continuous representation of a refractive index (RI) volume directly from its intensity-only and limited-angle measurements.
We show on three different IDT modalities and multiple biological samples that DeCAF can generate high-contrast and artifact-free RI maps.
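A neural field in this sense is a coordinate network queried at continuous 3D positions; the sketch below is purely illustrative (layer sizes, positional encoding, and the omitted measurement model are assumptions, not DeCAF's architecture).

```python
# Illustrative coordinate-MLP "neural field" mapping (x, y, z) -> refractive index.
import torch
import torch.nn as nn

class RIField(nn.Module):
    def __init__(self, hidden=128, freqs=6):
        super().__init__()
        self.freqs = freqs
        in_dim = 3 * 2 * freqs                          # sin/cos positional encoding
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                       # RI value at the queried point
        )

    def forward(self, xyz):                             # xyz: (N, 3) in [-1, 1]
        bands = 2.0 ** torch.arange(self.freqs, dtype=torch.float32, device=xyz.device)
        ang = xyz[..., None] * bands                    # (N, 3, freqs)
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        return self.net(enc)

field = RIField()
ri = field(torch.rand(1024, 3) * 2 - 1)                 # query 1024 continuous positions
```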
arXiv Detail & Related papers (2021-11-27T06:05:47Z) - 3D Reconstruction of Curvilinear Structures with Stereo Matching
DeepConvolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z) - Automatic 2D-3D Registration without Contrast Agent during Neurovascular
Interventions [0.34376560669160383]
Using live fluoroscopy images together with a 3D rotational reconstruction of the vasculature makes it possible to navigate endovascular devices in minimally invasive neuro-vascular treatment.
The image-based registration algorithm relies on gradients in the image (bone structures, sinuses) as landmark features.
The paper establishes a new method for validation of 2D-3D registration without requiring changes to the clinical workflow.
arXiv Detail & Related papers (2021-06-08T20:16:04Z) - Tattoo tomography: Freehand 3D photoacoustic image reconstruction with
an optical pattern [49.240017254888336]
Photoacoustic tomography (PAT) is a novel imaging technique that can resolve both morphological and functional tissue properties.
A current drawback is the limited field-of-view provided by the conventionally applied 2D probes.
We present a novel approach to 3D reconstruction of PAT data that does not require an external tracking system.
arXiv Detail & Related papers (2020-11-10T09:27:56Z)