A Frequency-Aware Self-Supervised Learning for Ultra-Wide-Field Image Enhancement
- URL: http://arxiv.org/abs/2508.19664v1
- Date: Wed, 27 Aug 2025 08:24:20 GMT
- Authors: Weicheng Liao, Zan Chen, Jianyang Xie, Yalin Zheng, Yuhui Ma, Yitian Zhao,
- Abstract summary: We propose a novel frequency-aware self-supervised learning method for UWF image enhancement. It incorporates frequency-decoupled image deblurring and Retinex-guided illumination compensation modules. Experimental results demonstrate that the proposed work not only enhances visualization quality but also improves disease diagnosis performance.
- Score: 18.34541880051962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultra-Wide-Field (UWF) retinal imaging has revolutionized retinal diagnostics by providing a comprehensive view of the retina. However, it often suffers from quality-degrading factors such as blurring and uneven illumination, which obscure fine details and mask pathological information. While numerous retinal image enhancement methods have been proposed for other fundus imaging modalities, they often fail to address the unique requirements of UWF, particularly the need to preserve pathological details. In this paper, we propose a novel frequency-aware self-supervised learning method for UWF image enhancement. It incorporates frequency-decoupled image deblurring and Retinex-guided illumination compensation modules. An asymmetric channel integration operation is introduced in the former module, so as to combine global and local views by leveraging high- and low-frequency information, ensuring the preservation of fine and broader structural details. In addition, a color preservation unit is proposed in the latter Retinex-based module, to provide multi-scale spatial and frequency information, enabling accurate illumination estimation and correction. Experimental results demonstrate that the proposed work not only enhances visualization quality but also improves disease diagnosis performance by restoring and correcting fine local details and uneven intensity. To the best of our knowledge, this work is the first attempt at UWF image enhancement, offering a robust and clinically valuable tool for improving retinal disease management.
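The two modules described in the abstract rest on classical building blocks: splitting an image into low- and high-frequency components, and Retinex-style separation of illumination from reflectance. As a rough illustration of those ideas only — not the authors' implementation; the function names, the circular Fourier mask, and the Gaussian illumination estimate are all assumptions — a minimal NumPy sketch might look like:

```python
import numpy as np

def frequency_decouple(img, radius=8):
    """Split a grayscale image into low- and high-frequency parts
    using a circular low-pass mask in the Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = img - low  # residual carries edges and fine detail
    return low, high

def retinex_illumination(img, sigma=15):
    """Single-scale Retinex: estimate illumination as a Gaussian-smoothed
    image and take the log-domain residual as reflectance."""
    k = np.arange(-3 * sigma, 3 * sigma + 1)
    g = np.exp(-(k ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    # separable Gaussian blur: rows, then columns
    blur = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, blur)
    reflectance = np.log1p(img) - np.log1p(blur)
    return blur, reflectance
```

In the paper these roles are presumably played by learned networks rather than fixed filters; the sketch only shows how frequency decoupling and illumination/reflectance separation factor an image into the components the two modules operate on.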
Related papers
- Multi-Scale Target-Aware Representation Learning for Fundus Image Enhancement [11.652205644265893]
High-quality fundus images provide essential anatomical information for clinical screening and ophthalmic disease diagnosis. Recent years have witnessed promising progress in fundus image enhancement. We propose a multi-scale target-aware representation learning framework (MTRL-FIE) for efficient fundus image enhancement.
arXiv Detail & Related papers (2025-05-03T14:25:48Z) - Enhancing Fundus Image-based Glaucoma Screening via Dynamic Global-Local Feature Integration [26.715346685730484]
We propose a self-adaptive attention window that autonomously determines optimal boundaries for enhanced feature extraction. We also introduce a multi-head attention mechanism to effectively fuse global and local features via feature linear readout. Experimental results demonstrate that our method achieves superior accuracy and robustness in glaucoma classification.
arXiv Detail & Related papers (2025-04-01T05:28:14Z) - UWAFA-GAN: Ultra-Wide-Angle Fluorescein Angiography Transformation via Multi-scale Generation and Registration Enhancement [17.28459176559761]
UWF fluorescein angiography (UWF-FA) requires the administration of a fluorescent dye via injection into the patient's hand or elbow.
To mitigate potential adverse effects associated with injections, researchers have proposed the development of cross-modality medical image generation algorithms.
We introduce a novel conditional generative adversarial network (UWAFA-GAN) to synthesize UWF-FA from UWF-SLO.
arXiv Detail & Related papers (2024-05-01T14:27:43Z) - Unpaired Optical Coherence Tomography Angiography Image Super-Resolution via Frequency-Aware Inverse-Consistency GAN [6.717440708401628]
We propose a Generative Adversarial Network (GAN)-based unpaired super-resolution method for OCTA images. To facilitate a precise spectrum of the reconstructed image, we also propose a frequency-aware adversarial loss for the discriminator. Experiments show that our method outperforms other state-of-the-art unpaired methods both quantitatively and visually.
arXiv Detail & Related papers (2023-09-29T14:19:51Z) - AMLP: Adaptive Masking Lesion Patches for Self-supervised Medical Image Segmentation [67.97926983664676]
Self-supervised masked image modeling has shown promising results on natural images.
However, directly applying such methods to medical images remains challenging.
We propose a novel self-supervised medical image segmentation framework, Adaptive Masking Lesion Patches (AMLP).
arXiv Detail & Related papers (2023-09-08T13:18:10Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - UWAT-GAN: Fundus Fluorescein Angiography Synthesis via Ultra-wide-angle Transformation Multi-scale GAN [1.165405976310311]
Fundus photography is an essential examination for clinical and differential diagnosis of fundus diseases.
Current methods in fundus imaging could not produce high-resolution images and are unable to capture tiny vascular lesion areas.
This paper proposes a novel conditional generative adversarial network (UWAT-GAN) to synthesize UWF-FA from UWF-SLO.
arXiv Detail & Related papers (2023-07-21T12:23:39Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbones of our teacher and student network.
arXiv Detail & Related papers (2023-02-23T06:16:15Z) - NuI-Go: Recursive Non-Local Encoder-Decoder Network for Retinal Image Non-Uniform Illumination Removal [96.12120000492962]
The quality of retinal images is often clinically unsatisfactory due to eye lesions and imperfect imaging process.
One of the most challenging quality degradation issues in retinal images is non-uniform illumination.
We propose a non-uniform illumination removal network for retinal images, called NuI-Go.
arXiv Detail & Related papers (2020-08-07T04:31:33Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the listed information and is not responsible for any consequences of its use.