Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative
Diffusion Model
- URL: http://arxiv.org/abs/2308.13164v1
- Date: Fri, 25 Aug 2023 04:03:41 GMT
- Title: Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative
Diffusion Model
- Authors: Xunpeng Yi, Han Xu, Hao Zhang, Linfeng Tang, Jiayi Ma
- Abstract summary: We propose a physically explainable, generative diffusion model for low-light image enhancement, termed Diff-Retinex.
In the Retinex decomposition, we exploit the strength of Transformer attention to decompose the image into illumination and reflectance maps.
Then, we design multi-path generative diffusion networks to reconstruct the normal-light Retinex probability distribution and address the various degradations in these components.
- Score: 28.762205397922294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we rethink the low-light image enhancement task and propose a
physically explainable, generative diffusion model for low-light image
enhancement, termed Diff-Retinex. We aim to integrate the advantages of the
physical model and the generative network. Furthermore, we hope to supplement,
and even deduce, the information missing from the low-light image through the
generative network. Therefore, Diff-Retinex formulates low-light image
enhancement as Retinex decomposition followed by conditional image
generation. In the Retinex decomposition, we exploit the strength of
Transformer attention and meticulously design a Retinex Transformer
decomposition network (TDN) to decompose the image into illumination and
reflectance maps. Then, we design multi-path generative diffusion networks to
reconstruct the normal-light Retinex probability distribution and address the
various degradations in these components, including dark illumination, noise,
color deviation, and loss of scene contents. Owing to the generative diffusion
model, Diff-Retinex makes the restoration of subtle low-light detail practical.
Extensive experiments conducted on real-world low-light datasets qualitatively
and quantitatively demonstrate the effectiveness, superiority, and
generalization of the proposed method.
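To make the two stages concrete, the toy sketch below illustrates (a) a classical single-scale Retinex split I = R * L, using a Gaussian blur as a stand-in illumination estimator, and (b) a standard DDPM forward-noising step of the kind a conditional diffusion network would learn to invert. This is a minimal illustration under assumed choices (the function names, the blur scale sigma, and the linear beta schedule are ours), not the paper's TDN or multi-path diffusion networks.

    import numpy as np
    from scipy.ndimage import gaussian_filter  # crude illumination estimator

    def retinex_decompose(image, sigma=15.0, eps=1e-6):
        """Toy Retinex split I = R * L (element-wise).

        A Gaussian-blurred copy of the image stands in for the smooth
        illumination map L; reflectance R is recovered by division.
        Diff-Retinex learns this decomposition with a Transformer (TDN)
        instead of using a fixed filter.
        """
        illumination = gaussian_filter(image, sigma=sigma) + eps
        reflectance = np.clip(image / illumination, 0.0, None)
        return reflectance, illumination

    def ddpm_forward(x0, t, alpha_bar, rng):
        """Standard DDPM forward step:
        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise.

        A diffusion network conditioned on the low-light components would be
        trained to predict `noise` and thereby invert this process.
        """
        noise = rng.standard_normal(x0.shape)
        xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
        return xt, noise

    rng = np.random.default_rng(0)
    low_light = rng.uniform(0.0, 0.2, size=(64, 64))  # dummy dark image
    R, L = retinex_decompose(low_light)

    T = 1000
    betas = np.linspace(1e-4, 0.02, T)                # common linear schedule
    alpha_bar = np.cumprod(1.0 - betas)
    xt, eps = ddpm_forward(R, t=500, alpha_bar=alpha_bar, rng=rng)

In Diff-Retinex, the decomposition is learned rather than fixed, and separate diffusion paths handle the reflectance and illumination components respectively.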
Related papers
- LightenDiffusion: Unsupervised Low-Light Image Enhancement with Latent-Retinex Diffusion Models [39.28266945709169]
We propose a diffusion-based unsupervised framework that combines physically explainable Retinex theory with diffusion models for low-light image enhancement.
Experiments on publicly available real-world benchmarks show that the proposed LightenDiffusion outperforms state-of-the-art unsupervised competitors.
arXiv Detail & Related papers (2024-07-12T02:54:43Z)
- DI-Retinex: Digital-Imaging Retinex Theory for Low-Light Image Enhancement [73.57965762285075]
We propose a new expression called Digital-Imaging Retinex theory (DI-Retinex) through theoretical and experimental analysis of Retinex theory in digital imaging.
Our proposed method outperforms all existing unsupervised methods in terms of visual quality, model size, and speed.
arXiv Detail & Related papers (2024-04-04T09:53:00Z)
- Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage a DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RGformer).
arXiv Detail & Related papers (2023-11-20T09:55:06Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Unsupervised Low Light Image Enhancement Using SNR-Aware Swin Transformer [0.0]
Low-light image enhancement aims at improving brightness and contrast, and reducing noise that corrupts the visual quality.
We propose a dual-branch network based on Swin Transformer, guided by a signal-to-noise ratio prior map.
arXiv Detail & Related papers (2023-06-03T11:07:56Z)
- Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z)
- Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Restoration is then made tractable by applying the inverse transformation to the generated degraded image together with a randomly drawn latent variable; a minimal coupling-layer sketch of this forward/inverse mechanism follows the list.
arXiv Detail & Related papers (2022-10-09T06:58:58Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance the low-light image in the forward process and degrade the normal-light one in the inverse process, with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- DA-DRN: Degradation-Aware Deep Retinex Network for Low-Light Image Enhancement [14.75902042351609]
We propose a Degradation-Aware Deep Retinex Network (denoted DA-DRN) for low-light image enhancement and tackle the associated degradations.
Based on Retinex theory, the decomposition net in our model can decompose low-light images into reflectance and illumination maps.
We conduct extensive experiments to demonstrate that our approach achieves promising results with good robustness and generalization.
arXiv Detail & Related papers (2021-10-05T03:53:52Z)
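The invertible-network entries above share one mechanism: a bijective map whose forward pass produces a degraded image plus a latent, and whose exact inverse performs restoration. The sketch below shows a single RealNVP-style affine coupling layer, the standard building block for such maps; the toy scale/shift branches (W, b) are assumptions for illustration, not either paper's architecture.

    import numpy as np

    def coupling_forward(x1, x2, W, b):
        """Affine coupling: y1 = x1, y2 = x2 * exp(s) + t, where (s, t)
        are computed from x1 only, so the map is invertible by construction."""
        s = np.tanh(x1 @ W)          # toy scale branch (bounded for stability)
        t = x1 @ W + b               # toy shift branch
        return x1, x2 * np.exp(s) + t

    def coupling_inverse(y1, y2, W, b):
        """Exact inverse: recompute (s, t) from y1 (= x1) and undo the map."""
        s = np.tanh(y1 @ W)
        t = y1 @ W + b
        return y1, (y2 - t) * np.exp(-s)

    rng = np.random.default_rng(0)
    d = 8
    W = rng.standard_normal((d, d)) * 0.1
    b = rng.standard_normal(d) * 0.1
    x1, x2 = rng.standard_normal((2, d)), rng.standard_normal((2, d))

    y1, y2 = coupling_forward(x1, x2, W, b)          # forward "degradation"
    x1_rec, x2_rec = coupling_inverse(y1, y2, W, b)  # exact restoration
    assert np.allclose(x1, x1_rec) and np.allclose(x2, x2_rec)

The round trip through the coupling layer is lossless; in the rescaling setting, the latent half is instead re-sampled from its prior at restoration time, which is why modeling the distribution of lost contents matters.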
This list is automatically generated from the titles and abstracts of the papers on this site.