Retinex Image Enhancement Based on Sequential Decomposition With a
Plug-and-Play Framework
- URL: http://arxiv.org/abs/2210.05436v1
- Date: Tue, 11 Oct 2022 13:29:10 GMT
- Title: Retinex Image Enhancement Based on Sequential Decomposition With a
Plug-and-Play Framework
- Authors: Tingting Wu, Wenna Wu, Ying Yang, Feng-Lei Fan, Tieyong Zeng
- Abstract summary: We design a plug-and-play framework based on the Retinex theory for simultaneous image enhancement and noise removal.
Our framework outcompetes the state-of-the-art methods in both image enhancement and denoising.
- Score: 16.579397398441102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Retinex model is one of the most representative and effective methods for
low-light image enhancement. However, the Retinex model does not explicitly
tackle the noise problem and often yields unsatisfactory enhancement results.
In recent years, owing to their excellent performance, deep learning models
have been widely used in low-light image enhancement. However, these methods
have two limitations: i) deep learning achieves the desired performance only
when a large amount of labeled data is available, yet it is not easy to
curate massive low-/normal-light paired data; ii) deep learning models are
notoriously black boxes [1], and it is difficult to explain their inner
working mechanisms and understand their behaviors. In this paper, using a
sequential Retinex decomposition strategy, we design a plug-and-play
framework based on the Retinex theory for simultaneous image enhancement and
noise removal. Meanwhile, we integrate a convolutional neural network (CNN)
based denoiser into the proposed plug-and-play framework to generate the
reflectance component. The final enhanced image is produced by combining the
illumination and the reflectance with gamma correction. The proposed
plug-and-play framework can facilitate both post hoc and ad hoc
interpretability. Extensive experiments on different datasets demonstrate
that our framework outcompetes the state-of-the-art methods in both image
enhancement and denoising.
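For illustration, the sketch below shows what the composition step described in the abstract might look like: the low-light image is split into illumination and reflectance, the reflectance is passed through a plug-in denoiser, and the gamma-corrected illumination is recombined with it. This is a minimal Python sketch, not the authors' implementation; the `denoiser` callable, the channel-maximum illumination estimate, and the gamma value are assumptions made for exposition.

```python
import numpy as np

def enhance_retinex(low_img, denoiser, gamma=2.2, eps=1e-4):
    """Hypothetical sketch of a Retinex-style enhance-and-denoise step.

    low_img  -- H x W x 3 float array in [0, 1]
    denoiser -- callable taking and returning an H x W x 3 array,
                standing in for the CNN-based reflectance denoiser
    """
    # Rough illumination estimate: channel-wise maximum (a common heuristic;
    # the paper's sequential decomposition is more involved).
    illumination = np.max(low_img, axis=2, keepdims=True)
    # Reflectance by element-wise division; eps avoids division by zero.
    reflectance = low_img / (illumination + eps)
    # Denoise the reflectance with the plugged-in denoiser.
    reflectance = np.clip(denoiser(reflectance), 0.0, 1.0)
    # Gamma correction brightens the illumination map.
    illumination = np.power(illumination, 1.0 / gamma)
    # Recombine illumination and reflectance into the enhanced image.
    return np.clip(reflectance * illumination, 0.0, 1.0)

# Usage with an identity "denoiser" placeholder:
# enhanced = enhance_retinex(low_img, denoiser=lambda x: x)
```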
Related papers
- RSEND: Retinex-based Squeeze and Excitation Network with Dark Region Detection for Efficient Low Light Image Enhancement [1.7356500114422735]
We propose a more accurate, concise, one-stage Retinex-theory-based framework, RSEND.
RSEND first divides the low-light image into an illumination map and a reflectance map, then captures the important details in the illumination map and performs light enhancement.
Our Efficient Retinex model significantly outperforms other CNN-based models, achieving a PSNR improvement ranging from 0.44 dB to 4.2 dB across different datasets.
arXiv Detail & Related papers (2024-06-14T01:36:52Z) - DI-Retinex: Digital-Imaging Retinex Theory for Low-Light Image Enhancement [73.57965762285075]
We propose a new expression called Digital-Imaging Retinex theory (DI-Retinex) through theoretical and experimental analysis of Retinex theory in digital imaging.
Our proposed method outperforms all existing unsupervised methods in terms of visual quality, model size, and speed.
arXiv Detail & Related papers (2024-04-04T09:53:00Z) - Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition [4.175396687130961]
We propose a new zero-shot low-light enhancement method based on learning-based Retinex decomposition, called ZERRINNet.
Our method is a zero-reference enhancement method that does not rely on paired or unpaired training data.
arXiv Detail & Related papers (2023-11-06T09:57:48Z) - LLDiffusion: Learning Degradation Representations in Diffusion Models
for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z) - ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z) - Retinexformer: One-stage Retinex-based Transformer for Low-light Image
Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z) - Plug-and-Play Image Restoration with Deep Denoiser Prior [186.84724418955054]
We show that a denoiser can implicitly serve as the image prior for model-based methods to solve many inverse problems.
We set up a benchmark deep denoiser prior by training a highly flexible and effective CNN denoiser.
We then plug the deep denoiser prior as a modular part into a half quadratic splitting based iterative algorithm to solve various image restoration problems (a minimal sketch of this iteration appears after this list).
arXiv Detail & Related papers (2020-08-31T17:18:58Z) - Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
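To make the plug-and-play idea from the Deep Denoiser Prior entry above concrete, here is a minimal sketch of a half quadratic splitting (HQS) iteration with a denoiser plugged in as the prior step. It assumes the simplest possible data term (an identity observation operator) and a generic `denoiser` callable; it is an illustrative sketch under those assumptions, not the DPIR implementation.

```python
import numpy as np

def pnp_hqs(y, denoiser, num_iters=8, mu=0.5):
    """Minimal plug-and-play HQS sketch for the data term ||x - y||^2.

    y        -- observed image (float array)
    denoiser -- callable z = denoiser(x) acting as the prior step
    mu       -- weight of the quadratic splitting penalty

    Illustrative only: real restoration tasks (deblurring, SR) replace
    the x-step with the closed-form or iterative solver for their own
    degradation operator.
    """
    z = y.copy()
    for _ in range(num_iters):
        # x-step (data subproblem): argmin_x ||x - y||^2 + mu * ||x - z||^2,
        # which has the closed form below for an identity observation operator.
        x = (y + mu * z) / (1.0 + mu)
        # z-step (prior subproblem): handled entirely by the plug-in denoiser.
        z = denoiser(x)
    return z

# Usage with a simple smoothing filter standing in for a learned denoiser:
# from scipy.ndimage import gaussian_filter
# restored = pnp_hqs(noisy, denoiser=lambda v: gaussian_filter(v, sigma=1.0))
```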