Learning Enhancement From Degradation: A Diffusion Model For Fundus
Image Enhancement
- URL: http://arxiv.org/abs/2303.04603v1
- Date: Wed, 8 Mar 2023 14:14:49 GMT
- Title: Learning Enhancement From Degradation: A Diffusion Model For Fundus
Image Enhancement
- Authors: Pujin Cheng and Li Lin and Yijin Huang and Huaqing He and Wenhan Luo
and Xiaoying Tang
- Abstract summary: We introduce a novel diffusion model based framework, named Learning Enhancement from Degradation (LED).
LED learns degradation mappings from unpaired high-quality to low-quality images.
LED is able to output enhancement results that maintain clinically important features with better clarity.
- Score: 21.91300560770087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quality of a fundus image can be compromised by numerous factors, many of
which are challenging to model appropriately in mathematical terms. In this
paper, we introduce a novel diffusion model based framework, named Learning
Enhancement from Degradation (LED), for enhancing fundus images. Specifically,
we first adopt a data-driven degradation framework to learn degradation
mappings from unpaired high-quality to low-quality images. We then apply a
conditional diffusion model to learn the inverse enhancement process in a
paired manner. The proposed LED is able to output enhancement results that
maintain clinically important features with better clarity. Moreover, in the
inference phase, LED can be easily and effectively integrated with any existing
fundus image enhancement framework. We evaluate the proposed LED on several
downstream tasks with respect to various clinically-relevant metrics,
successfully demonstrating its superiority over existing state-of-the-art
methods both quantitatively and qualitatively. The source code is available at
https://github.com/QtacierP/LED.
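To make the two-stage idea concrete, the following is a minimal sketch, not the authors' implementation: stage one stands in for the learned degradation mapping that turns a high-quality image into a paired low-quality counterpart, and stage two shows the standard conditional denoising-diffusion training objective, where a noise predictor is conditioned on the degraded image. All function names (`degrade`, `q_sample`, `training_loss`) and the noise schedule are illustrative assumptions, and the network is a placeholder.

```python
import numpy as np

# Hedged sketch of LED's two-stage idea (illustrative, not the paper's code):
# stage 1 produces a synthetic low-quality counterpart y of a clean image x0;
# stage 2 trains a conditional noise predictor eps_theta(x_t, t, y) so the
# reverse diffusion process can recover x0 given the degraded observation y.

T = 50                                   # number of diffusion steps (toy value)
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def degrade(x0, rng):
    """Stand-in for the learned degradation mapping (stage 1):
    a crude vertical blur plus sensor-like noise."""
    blur = (x0 + np.roll(x0, 1, axis=0) + np.roll(x0, -1, axis=0)) / 3.0
    return blur + 0.05 * rng.standard_normal(x0.shape)

def q_sample(x0, t, eps):
    """Forward diffusion: noise the clean image x0 to timestep t."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def training_loss(x0, y, t, rng, eps_theta):
    """Simplified denoising objective: predict the injected noise from
    (x_t, t) while conditioning on the degraded image y."""
    eps = rng.standard_normal(x0.shape)
    x_t = q_sample(x0, t, eps)
    eps_hat = eps_theta(x_t, t, y)
    return float(np.mean((eps - eps_hat) ** 2))

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))         # toy "high-quality fundus image"
y = degrade(x0, rng)                      # paired low-quality counterpart
dummy_model = lambda x_t, t, cond: np.zeros_like(x_t)  # placeholder network
loss = training_loss(x0, y, T // 2, rng, dummy_model)
print(loss)
```

In the actual framework, `dummy_model` would be a conditional U-Net, and the degradation mapping would itself be learned from unpaired high-/low-quality images rather than hand-crafted as here.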
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
We propose a novel IQA method called diffusion priors-based IQA (DP-IQA)
We use pre-trained stable diffusion as the backbone, extract multi-level features from the denoising U-Net, and decode them to estimate the image quality score.
We distill the knowledge in the above model into a CNN-based student model, significantly reducing the parameter count to enhance applicability.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Efficient Degradation-aware Any Image Restoration [83.92870105933679]
We propose DaAIR, an efficient All-in-One image restorer employing a Degradation-aware Learner (DaLe) in the low-rank regime.
By dynamically allocating model capacity to input degradations, we realize an efficient restorer integrating holistic and specific learning.
arXiv Detail & Related papers (2024-05-24T11:53:27Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Zero-LED: Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED.
It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains.
It successfully alleviates the dependence on pairwise training data via zero-reference learning.
arXiv Detail & Related papers (2024-03-05T11:39:17Z)
- Dual Degradation-Inspired Deep Unfolding Network for Low-Light Image Enhancement [3.4929041108486185]
We propose a Dual degrAdation-inSpired deep Unfolding network, termed DASUNet, for low-light image enhancement.
It learns two distinct image priors via considering degradation specificity between luminance and chrominance spaces.
Our source code and pretrained model will be publicly available.
arXiv Detail & Related papers (2023-08-05T03:07:11Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.