Unpaired Optical Coherence Tomography Angiography Image Super-Resolution
via Frequency-Aware Inverse-Consistency GAN
- URL: http://arxiv.org/abs/2309.17269v1
- Date: Fri, 29 Sep 2023 14:19:51 GMT
- Title: Unpaired Optical Coherence Tomography Angiography Image Super-Resolution
via Frequency-Aware Inverse-Consistency GAN
- Authors: Weiwen Zhang, Dawei Yang, Haoxuan Che, An Ran Ran, Carol Y. Cheung,
and Hao Chen
- Abstract summary: We propose a Generative Adversarial Network (GAN)-based unpaired super-resolution method for OCTA images.
To facilitate a precise spectrum of the reconstructed image, we also propose a frequency-aware adversarial loss for the discriminator.
Experiments show that our method outperforms other state-of-the-art unpaired methods both quantitatively and visually.
- Score: 6.717440708401628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For optical coherence tomography angiography (OCTA) images, a limited
scanning rate leads to a trade-off between field-of-view (FOV) and imaging
resolution. Although larger FOV images may reveal more parafoveal vascular
lesions, their application is greatly hampered due to lower resolution. To
increase the resolution, previous works only achieved satisfactory performance
by using paired data for training, but real-world applications are limited by
the challenge of collecting large-scale paired images. Thus, an unpaired
approach is highly desirable. Generative Adversarial Networks (GANs) have been
commonly used in the unpaired setting, but they may struggle to accurately
preserve fine-grained capillary details, which are critical biomarkers for
OCTA. In this paper, we aim to preserve these details by
leveraging the frequency information, which represents details as
high-frequencies ($\textbf{hf}$) and coarse-grained backgrounds as
low-frequencies ($\textbf{lf}$). Specifically, we propose a GAN-based unpaired
super-resolution method for OCTA images that emphasizes $\textbf{hf}$ fine
capillaries through a dual-path generator. To facilitate a
precise spectrum of the reconstructed image, we also propose a frequency-aware
adversarial loss for the discriminator and introduce a frequency-aware focal
consistency loss for end-to-end optimization. Experiments show that our method
outperforms other state-of-the-art unpaired methods both quantitatively and
visually.
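The lf/hf decomposition the abstract relies on can be illustrated with a simple FFT low-pass split. This is only a sketch: the circular mask and the cutoff radius are illustrative assumptions, not the paper's actual dual-path generator design.

```python
import numpy as np

def split_frequencies(image: np.ndarray, cutoff: float = 0.1):
    """Split an image into a low-frequency (lf) background and a
    high-frequency (hf) detail residual via a circular FFT mask.
    `cutoff` is the low-pass radius as a fraction of the half-spectrum
    (an illustrative choice, not from the paper)."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.mgrid[:h, :w]
    mask = np.hypot(yy - h / 2, xx - w / 2) <= cutoff * min(h, w) / 2
    lf = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    hf = image - lf   # everything the low-pass mask removed
    return lf, hf

img = np.random.rand(64, 64)
lf, hf = split_frequencies(img)
```

By construction the two components sum back to the input, so a model can process capillary detail (hf) and background (lf) in separate paths without losing information.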
Related papers
- Accelerating Diffusion for SAR-to-Optical Image Translation via Adversarial Consistency Distillation [5.234109158596138]
We propose a new training framework for SAR-to-optical image translation.
Our method employs consistency distillation to reduce iterative inference steps and integrates adversarial learning to ensure image clarity and minimize color shifts.
The results demonstrate that our approach significantly improves inference speed by 131 times while maintaining the visual quality of the generated images.
arXiv Detail & Related papers (2024-07-08T16:36:12Z)
- UNICORN: Ultrasound Nakagami Imaging via Score Matching and Adaptation [59.91293113930909]
Nakagami imaging holds promise for visualizing and quantifying tissue scattering in ultrasound waves.
Existing methods struggle with optimal window size selection and suffer from estimator instability.
We propose a novel method called UNICORN that offers an accurate, closed-form estimator for Nakagami parameter estimation.
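For context, the classical moment-based ("inverse normalized variance") Nakagami estimator that such methods are compared against can be written in a few lines; this is the textbook baseline, not UNICORN's estimator.

```python
import numpy as np

def nakagami_moments(x: np.ndarray):
    """Classical moment-based Nakagami parameter estimates."""
    p = x ** 2                 # power (intensity) samples
    omega = p.mean()           # spread parameter: Omega = E[X^2]
    m = omega ** 2 / p.var()   # shape parameter: m = (E[X^2])^2 / Var(X^2)
    return m, omega

rng = np.random.default_rng(0)
m_true, omega_true = 2.0, 1.5
# X ~ Nakagami(m, Omega)  <=>  X^2 ~ Gamma(shape=m, scale=Omega/m)
x = np.sqrt(rng.gamma(shape=m_true, scale=omega_true / m_true, size=200_000))
m_hat, omega_hat = nakagami_moments(x)
```

With enough samples the moment estimates converge to the true parameters; the instability the abstract mentions arises when this estimator is applied over small sliding windows.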
arXiv Detail & Related papers (2024-03-10T18:05:41Z)
- Sub2Full: split spectrum to boost OCT despeckling without clean data [0.0]
We propose an innovative self-supervised strategy called Sub2Full (S2F) for OCT despeckling without clean data.
This approach works by acquiring two repeated B-scans, splitting the spectrum of the first repeat as a low-resolution input, and utilizing the full spectrum of the second repeat as the high-resolution target.
The proposed method was validated on vis-OCT retinal images visualizing sublaminar structures in the outer retina and demonstrated superior performance over conventional Noise2Noise and Noise2Void schemes.
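The split-spectrum idea can be sketched in one dimension: in Fourier-domain OCT an A-scan is the FFT of the spectral interferogram, so reconstructing from only half of the spectral samples yields a lower-axial-resolution A-scan. This toy example is a hypothetical simplification of the two-repeat acquisition described above.

```python
import numpy as np

def split_spectrum_ascan(interferogram: np.ndarray):
    """Toy Sub2Full-style split of a 1-D spectral interferogram."""
    n = interferogram.shape[0]
    full = np.abs(np.fft.fft(interferogram))            # full spectrum: high-resolution A-scan
    half = np.abs(np.fft.fft(interferogram[: n // 2]))  # half spectrum: low-resolution input
    return half, full

fringe = np.cos(2 * np.pi * 0.2 * np.arange(1024))      # toy single-reflector fringe
half, full = split_spectrum_ascan(fringe)
```

The half-spectrum reconstruction has half as many axial bins, so the reflector peak is relatively broader, which is the low-resolution/high-resolution pairing the self-supervised training exploits.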
arXiv Detail & Related papers (2024-01-18T16:59:04Z)
- Frequency-aware optical coherence tomography image super-resolution via conditional generative adversarial neural network [0.3040864511503504]
We propose a frequency-aware super-resolution framework that integrates frequency-based modules and a frequency-based loss function into a conditional generative adversarial network (cGAN).
We conducted a large-scale quantitative study from an existing coronary OCT dataset to demonstrate the superiority of our proposed framework over existing deep learning frameworks.
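One common form such a frequency-based loss takes is an L1 distance between FFT magnitude spectra; the exact definition used in the paper is not given in the abstract, so this is an illustrative stand-in.

```python
import numpy as np

def frequency_l1_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean L1 distance between the FFT magnitude spectra of two images
    (a generic frequency-domain loss, assumed form)."""
    pred_mag = np.abs(np.fft.fft2(pred))
    target_mag = np.abs(np.fft.fft2(target))
    return float(np.mean(np.abs(pred_mag - target_mag)))

a = np.random.rand(32, 32)
b = a + 0.1 * np.random.rand(32, 32)
```

A loss of this shape penalizes missing high-frequency energy directly, which pixel-space L1/L2 losses tend to under-weight.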
arXiv Detail & Related papers (2023-07-20T16:07:02Z)
- Optical Coherence Tomography Image Enhancement via Block Hankelization and Low Rank Tensor Network Approximation [29.767032203718866]
We propose a novel OCT super-resolution technique using Ring decomposition in the embedded space.
A new tensorization method based on block Hankelization with overlapped patches, called overlapped patch Hankelization, allows us to employ Ring decomposition.
Hankelization better exploits the interconnections between pixels and consequently achieves better super-resolution.
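The basic embedding behind such methods stacks overlapping patches into a matrix whose columns share entries, giving the Hankel-like structure that low-rank decompositions exploit. This is a simplified sketch; the paper's block/overlapped construction and Ring decomposition are more involved.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def patch_hankelization(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Stack every overlapping patch x patch window of the image as a
    column vector (a basic Hankel-style patch embedding)."""
    windows = sliding_window_view(image, (patch, patch))   # (H-p+1, W-p+1, p, p)
    return windows.reshape(-1, patch * patch).T            # (p*p, n_windows)

img = np.arange(64, dtype=float).reshape(8, 8)
H = patch_hankelization(img, patch=4)
```

Because neighbouring columns overlap heavily, the embedded matrix is highly redundant, which is exactly what makes low-rank tensor approximation effective on it.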
arXiv Detail & Related papers (2023-06-19T06:23:26Z)
- Reference-based OCT Angiogram Super-resolution with Learnable Texture Generation [11.58649188893076]
We propose a reference-based super-resolution (RefSR) framework to preserve the resolution of the OCT angiograms while increasing the scanning area.
Textures from the normal RefSR pipeline are used to train a learnable texture generator (LTG), which is designed to generate textures according to the input.
LTGNet has superior performance and robustness over state-of-the-art methods, indicating good reliability and promise in real-life deployment.
arXiv Detail & Related papers (2023-05-10T01:48:01Z)
- On Measuring and Controlling the Spectral Bias of the Deep Image Prior [63.88575598930554]
The deep image prior has demonstrated that untrained networks can address inverse imaging problems remarkably well.
However, it requires an oracle to determine when to stop the optimization, as performance degrades after reaching a peak.
We study the deep image prior from a spectral bias perspective to address these problems.
arXiv Detail & Related papers (2021-07-02T15:10:42Z)
- Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domain, that is based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
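The iterations being unfolded here are the classical ISTA updates for sparse recovery; a minimal reference implementation (not the paper's learned, unfolded network) looks like this.

```python
import numpy as np

def soft_threshold(v: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.1, iters: int = 200) -> np.ndarray:
    """Classical ISTA for  min_x  0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic term, then soft-thresholding
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80); x_true[0] = 1.0       # 1-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.01, iters=500)
```

Unfolding replaces the fixed step size and threshold with per-layer learned parameters, which is how the paper obtains fast high-quality recovery from few iterations.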
arXiv Detail & Related papers (2021-03-01T19:19:38Z)
- Perception Consistency Ultrasound Image Super-resolution via Self-supervised CycleGAN [63.49373689654419]
We propose a new perception consistency ultrasound image super-resolution (SR) method based on self-supervision and a cycle generative adversarial network (CycleGAN).
We first generate the HR fathers and the LR sons of the test ultrasound LR image through image enhancement.
We then make full use of the cycle loss of LR-SR-LR and HR-LR-SR and the adversarial characteristics of the discriminator to promote the generator to produce better perceptually consistent SR results.
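The LR-SR-LR cycle loss mentioned above reduces to: super-resolve, degrade back, and penalize the difference to the input. The toy generator and degradation below are stand-ins for the learned networks, chosen so the cycle is exact.

```python
import numpy as np

def cycle_loss(lr: np.ndarray, sr_fn, down_fn) -> float:
    """L1 cycle-consistency for the LR -> SR -> LR path."""
    return float(np.mean(np.abs(down_fn(sr_fn(lr)) - lr)))

# toy stand-ins: 2x nearest-neighbour "generator" and 2x average-pool "degradation"
sr = lambda x: x.repeat(2, axis=0).repeat(2, axis=1)
down = lambda x: x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

lr = np.random.rand(8, 8)
```

During training the same scalar is computed with the learned generator and degradation model, and minimized alongside the adversarial terms.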
arXiv Detail & Related papers (2020-12-28T08:24:04Z)
- Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easy-implemented Convolutional Neural Network (CNN) SR models.
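The LR-generation step described above amounts to blurring the HR image with the estimated degradation kernel and subsampling. In this sketch the kernel is a hand-made box filter; FCA estimates its kernels from unsupervised real images.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def degrade(hr: np.ndarray, kernel: np.ndarray, scale: int = 2) -> np.ndarray:
    """Blur with a degradation kernel ('valid' 2-D correlation, which equals
    convolution for symmetric kernels), then subsample by `scale`."""
    windows = sliding_window_view(hr, kernel.shape)        # (H-kh+1, W-kw+1, kh, kw)
    blurred = np.einsum('ijkl,kl->ij', windows, kernel)
    return blurred[::scale, ::scale]

hr = np.random.rand(16, 16)
box = np.full((3, 3), 1.0 / 9.0)   # placeholder kernel, not an FCA-estimated one
lr = degrade(hr, box)
```

Pairing each HR image with an LR image generated this way keeps the LR-HR training pairs in the same (frequency-consistent) degradation domain as the target real scene.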
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
- Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.