Learning Enhancement From Degradation: A Diffusion Model For Fundus
Image Enhancement
- URL: http://arxiv.org/abs/2303.04603v1
- Date: Wed, 8 Mar 2023 14:14:49 GMT
- Title: Learning Enhancement From Degradation: A Diffusion Model For Fundus
Image Enhancement
- Authors: Pujin Cheng and Li Lin and Yijin Huang and Huaqing He and Wenhan Luo
and Xiaoying Tang
- Abstract summary: We introduce a novel diffusion-model-based framework, named Learning Enhancement from Degradation (LED).
LED learns degradation mappings from unpaired high-quality to low-quality images.
LED is able to output enhancement results that maintain clinically important features with better clarity.
- Score: 21.91300560770087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quality of a fundus image can be compromised by numerous
factors, many of which are challenging to model appropriately and
mathematically. In this paper, we introduce a novel diffusion-model-based
framework, named Learning
Enhancement from Degradation (LED), for enhancing fundus images. Specifically,
we first adopt a data-driven degradation framework to learn degradation
mappings from unpaired high-quality to low-quality images. We then apply a
conditional diffusion model to learn the inverse enhancement process in a
paired manner. The proposed LED is able to output enhancement results that
maintain clinically important features with better clarity. Moreover, in the
inference phase, LED can be easily and effectively integrated with any existing
fundus image enhancement framework. We evaluate the proposed LED on several
downstream tasks with respect to various clinically relevant metrics,
successfully demonstrating its superiority over existing state-of-the-art
methods both quantitatively and qualitatively. The source code is available at
https://github.com/QtacierP/LED.
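To make the two-stage pipeline above concrete, here is a minimal PyTorch sketch of the data flow, assuming a standard DDPM formulation: a degradation network maps high-quality (HQ) images to synthetic low-quality (LQ) counterparts, and a conditional diffusion model then learns enhancement from the resulting pairs. Every name here (DegradationNet, EnhanceUNet, q_sample, train_step) is an illustrative placeholder, not the authors' API; the actual implementation is in the linked repository.

```python
# Minimal sketch of the LED data flow (hypothetical names, not the
# authors' code): stage 1 degrades HQ images into synthetic LQ ones,
# stage 2 trains a conditional DDPM on the resulting pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                        # diffusion steps
betas = torch.linspace(1e-4, 2e-2, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative alpha product

def q_sample(x0, t, noise):
    """Forward diffusion: noise the clean image x0 to timestep t."""
    a = alphas_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

class DegradationNet(nn.Module):
    """Stage 1: maps HQ fundus images to plausible LQ versions. In the
    paper this mapping is learned from *unpaired* HQ/LQ data; a plain
    conv stack stands in here."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, hq):
        return self.net(hq)

class EnhanceUNet(nn.Module):
    """Stage 2: predicts the noise added to the HQ target, conditioned
    on the synthetic LQ image via channel concatenation."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x_t, lq, t):
        # timestep embedding omitted for brevity
        return self.net(torch.cat([x_t, lq], dim=1))

def train_step(degrader, denoiser, hq, optimizer):
    """One paired training step: degrade HQ, then learn to denoise."""
    with torch.no_grad():
        lq = degrader(hq)                       # synthetic paired LQ
    t = torch.randint(0, T, (hq.size(0),))
    noise = torch.randn_like(hq)
    x_t = q_sample(hq, t, noise)
    loss = F.mse_loss(denoiser(x_t, lq, t), noise)  # standard DDPM loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A faithful implementation would replace the conv stacks with a proper U-Net with timestep embeddings and train the degradation network adversarially on the unpaired data; the sketch only pins down how the two stages connect.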
Related papers
- Learning Efficient and Effective Trajectories for Differential Equation-based Image Restoration [59.744840744491945]
We reformulate the trajectory optimization of this kind of method, focusing on enhancing both reconstruction quality and efficiency.
We propose cost-aware trajectory distillation to streamline complex paths into several manageable steps with adaptable sizes.
Experiments demonstrate the significant superiority of the proposed method, achieving a maximum PSNR improvement of 2.1 dB over state-of-the-art methods.
arXiv Detail & Related papers (2024-10-07T07:46:08Z) - Enhanced Control for Diffusion Bridge in Image Restoration [4.480905492503335]
A special type of diffusion bridge model has achieved more advanced results in image restoration.
This paper introduces the ECDB model, which enhances the control of the diffusion bridge by conditioning on low-quality images.
Experimental results prove that the ECDB model has achieved state-of-the-art results in many image restoration tasks.
arXiv Detail & Related papers (2024-08-29T07:09:33Z) - UIE-UnFold: Deep Unfolding Network with Color Priors and Vision Transformer for Underwater Image Enhancement [27.535028176427623]
Underwater image enhancement (UIE) plays a crucial role in various marine applications.
Current learning-based approaches frequently lack explicit prior knowledge about the physical processes involved in underwater image formation.
This paper proposes a novel deep unfolding network (DUN) for UIE that integrates color priors and inter-stage feature incorporation.
arXiv Detail & Related papers (2024-08-20T08:48:33Z) - Efficient Degradation-aware Any Image Restoration [83.92870105933679]
We propose DaAIR, an efficient All-in-One image restorer employing a Degradation-aware Learner (DaLe) in the low-rank regime.
By dynamically allocating model capacity to input degradations, we realize an efficient restorer integrating holistic and specific learning.
arXiv Detail & Related papers (2024-05-24T11:53:27Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Zero-LED: Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED.
It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains.
It successfully alleviates the dependence on paired training data via zero-reference learning.
arXiv Detail & Related papers (2024-03-05T11:39:17Z) - LLDiffusion: Learning Degradation Representations in Diffusion Models
for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from long runtimes, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL (a sketch of the wavelet-domain idea appears after this list).
arXiv Detail & Related papers (2023-06-01T03:08:28Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
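As a side note on the wavelet-based entry above (DiffLL), the appeal of moving diffusion into the wavelet domain is easy to see in code: a single 2D DWT halves each spatial dimension, so the iterative sampling loop touches roughly four times fewer pixels. The sketch below is a schematic illustration under that assumption, not the DiffLL implementation; toy_denoiser stands in for a trained noise-prediction network.

```python
# Schematic sketch of wavelet-domain diffusion (hypothetical names; not
# the DiffLL code). The DWT halves each spatial dimension, so the reverse
# diffusion loop below runs on ~4x fewer pixels than the full image.
import numpy as np
import pywt

def toy_denoiser(x_t, t):
    """Placeholder for a trained noise-prediction network."""
    return np.zeros_like(x_t)  # a real model would predict the noise here

def enhance(img, steps=10, noise_scale=0.1):
    cA, highs = pywt.dwt2(img, 'haar')          # cA: (H/2, W/2) low-freq band
    x = cA + noise_scale * np.random.randn(*cA.shape)  # noised starting point
    for t in reversed(range(steps)):            # simplified reverse diffusion
        x = x - toy_denoiser(x, t)              # schematic denoising update
    return pywt.idwt2((x, highs), 'haar')       # back to full resolution

img = np.random.rand(128, 128)
out = enhance(img)
assert out.shape == img.shape                   # resolution is preserved
```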
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences of its use.