Empowering Low-Light Image Enhancer through Customized Learnable Priors
- URL: http://arxiv.org/abs/2309.01958v1
- Date: Tue, 5 Sep 2023 05:20:11 GMT
- Title: Empowering Low-Light Image Enhancer through Customized Learnable Priors
- Authors: Naishan Zheng, Man Zhou, Yanmeng Dong, Xiangyu Rui, Jie Huang, Chongyi
Li, Feng Zhao
- Abstract summary: We propose a paradigm for low-light image enhancement that explores the potential of customized learnable priors.
Motivated by the powerful feature representation capability of Masked Autoencoder (MAE), we customize MAE-based illumination and noise priors.
- Score: 40.83461757842304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved remarkable progress in enhancing low-light
images by improving their brightness and eliminating noise. However, most
existing methods construct end-to-end mapping networks heuristically,
neglecting the intrinsic prior of image enhancement task and lacking
transparency and interpretability. Although some unfolding solutions have been
proposed to relieve these issues, they rely on proximal operator networks that
deliver ambiguous and implicit priors. In this work, we propose a paradigm for
low-light image enhancement that explores the potential of customized learnable
priors to improve the transparency of the deep unfolding paradigm. Motivated by
the powerful feature representation capability of Masked Autoencoder (MAE), we
customize MAE-based illumination and noise priors and redevelop them from two
perspectives: 1) \textbf{structure flow}: we train the MAE from a normal-light
image to its illumination properties and then embed it into the proximal
operator design of the unfolding architecture; and 2) \textbf{optimization
flow}: we train MAE from a normal-light image to its gradient representation
and then employ it as a regularization term to constrain noise in the model
output. These designs improve the interpretability and representation
capability of the model. Extensive experiments on multiple low-light image
enhancement datasets demonstrate the superiority of our proposed paradigm over
state-of-the-art methods. Code is available at
https://github.com/zheng980629/CUE.
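To make the two flows concrete, below is a minimal PyTorch sketch of the ideas described in the abstract. It is not the authors' released CUE code: the module names (TinyMAEPrior, UnfoldingStage), the tensor shapes, and the Retinex-style division are illustrative assumptions. It only shows how an MAE-based illumination prior could be embedded in the proximal step of an unfolding iteration (structure flow) and how a gradient-predicting MAE could serve as a regularization term that suppresses noise (optimization flow):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMAEPrior(nn.Module):
    # Stand-in for a pretrained Masked Autoencoder that maps an image-like
    # tensor to a learned prior map (illumination properties or gradients).
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(16, channels, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

class UnfoldingStage(nn.Module):
    # One unfolding iteration: a gradient step on the data term, then a
    # proximal step whose design embeds the illumination MAE (structure flow).
    def __init__(self, illum_prior):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size
        self.illum_prior = illum_prior               # customized MAE prior
        self.fuse = nn.Conv2d(6, 3, 3, padding=1)    # merges estimate + prior

    def forward(self, low, illum):
        illum = illum - self.step * (illum - low)           # data-fidelity step
        prior = self.illum_prior(illum)                     # MAE-guided prior
        return self.fuse(torch.cat([illum, prior], dim=1))  # proximal update

def image_gradients(x):
    # Finite-difference gradients, zero-padded back to the input size.
    dx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1, 0, 0))
    dy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))
    return dx, dy

def gradient_regularizer(output, grad_mae):
    # Optimization flow: an MAE trained to map normal-light images to their
    # gradient representation predicts what clean gradients should look like;
    # deviations of the output's gradients from that prediction are penalized.
    dx, dy = image_gradients(output)
    clean = grad_mae(output)
    return (dx - clean).abs().mean() + (dy - clean).abs().mean()

if __name__ == "__main__":
    low = torch.rand(1, 3, 64, 64)                # toy low-light input
    stage = UnfoldingStage(TinyMAEPrior())
    grad_mae = TinyMAEPrior()
    illum = low.clone()
    for _ in range(3):                            # a few unfolding iterations
        illum = stage(low, illum)
    reflect = low / illum.clamp(min=1e-3)         # Retinex-style enhancement
    loss = gradient_regularizer(reflect, grad_mae)
    print(reflect.shape, float(loss))

In practice the two MAE priors would be pretrained on normal-light images (one predicting illumination properties, one predicting gradient maps) and then frozen or fine-tuned inside the unfolding network, with the regularizer added to the training loss.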
Related papers
- Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors [38.96909959677438]
Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments.
Recent advanced LIE techniques rely on deep neural networks, which require many low-/normal-light image pairs, network parameters, and computational resources.
We devise a novel unsupervised LIE framework based on diffusion priors and lookup tables to achieve efficient low-light image recovery.
arXiv Detail & Related papers (2024-09-27T16:37:27Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal-light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z)
- PROMPT-IML: Image Manipulation Localization with Pre-trained Foundation Models Through Prompt Tuning [35.39822183728463]
We present a novel Prompt-IML framework for detecting tampered images.
Humans tend to discern the authenticity of an image based on semantic and high-frequency information.
Our model can achieve better performance on eight typical fake image datasets.
arXiv Detail & Related papers (2024-01-01T03:45:07Z)
- Dual Degradation-Inspired Deep Unfolding Network for Low-Light Image Enhancement [3.4929041108486185]
We propose a Dual degrAdation-inSpired deep Unfolding network, termed DASUNet, for low-light image enhancement.
It learns two distinct image priors by considering the degradation specificity of the luminance and chrominance spaces.
Our source code and pretrained model will be publicly available.
arXiv Detail & Related papers (2023-08-05T03:07:11Z)
- Learning Semantic-Aware Knowledge Guidance for Low-Light Image Enhancement [69.47143451986067]
Low-light image enhancement (LLIE) investigates how to improve illumination and produce normal-light images.
The majority of existing methods improve low-light images in a global and uniform manner, without taking into account the semantic information of different regions.
We propose a novel semantic-aware knowledge-guided framework that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model.
arXiv Detail & Related papers (2023-04-14T10:22:28Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Physically Inspired Dense Fusion Networks for Relighting [45.66699760138863]
We propose a model which enriches neural networks with physical insight.
Our method generates the relighted image with new illumination settings via two different strategies.
We show that our proposal can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
arXiv Detail & Related papers (2021-05-05T17:33:45Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)