DA-DRN: Degradation-Aware Deep Retinex Network for Low-Light Image
Enhancement
- URL: http://arxiv.org/abs/2110.01809v1
- Date: Tue, 5 Oct 2021 03:53:52 GMT
- Title: DA-DRN: Degradation-Aware Deep Retinex Network for Low-Light Image
Enhancement
- Authors: Xinxu Wei, Xianshi Zhang, Shisen Wang, Cheng Cheng, Yanlin Huang,
Kaifu Yang, and Yongjie Li
- Abstract summary: We propose a Degradation-Aware Deep Retinex Network (denoted as DA-DRN) for low-light image enhancement and tackle the above degradation.
Based on Retinex Theory, the decomposition net in our model can decompose low-light images into reflectance and illumination maps.
We conduct extensive experiments to demonstrate that our approach achieves a promising effect with good robustness and generalization.
- Score: 14.75902042351609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images obtained in real-world low-light conditions are not only low in
brightness, but they also suffer from many other types of degradation, such as
color distortion, unknown noise, detail loss and halo artifacts. In this paper,
we propose a Degradation-Aware Deep Retinex Network (denoted as DA-DRN) for
low-light image enhancement and tackle the above degradation. Based on Retinex
Theory, the decomposition net in our model can decompose low-light images into
reflectance and illumination maps and deal with the degradation in the
reflectance during the decomposition phase directly. We propose a
Degradation-Aware Module (DA Module) which can guide the training process of
the decomposer and enable the decomposer to be a restorer during the training
phase without additional computational cost in the test phase. The DA Module
removes noise while preserving detail information in the illumination map, and
also tackles color distortion and halo artifacts. We
introduce Perceptual Loss to train the enhancement network to generate the
brightness-improved illumination maps which are more consistent with human
visual perception. We train and evaluate the performance of our proposed model
over the LOL real-world and LOL synthetic datasets, and we also test our model
over several other frequently used datasets without Ground-Truth (LIME, DICM,
MEF and NPE datasets). We conduct extensive experiments to demonstrate that our
approach achieves a promising effect with good robustness and generalization
and outperforms many other state-of-the-art methods qualitatively and
quantitatively. Our method only takes 7 ms to process an image with 600x400
resolution on a TITAN Xp GPU.
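As a rough illustration of the Retinex decomposition described in the abstract, the sketch below splits an image into reflectance and illumination maps using the classical per-pixel channel-max heuristic and brightens only the illumination map. This is a hand-written approximation for intuition, not DA-DRN's learned decomposition network; the `gamma` value is an arbitrary choice.

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Naive Retinex split: S = R * I, approximating the illumination
    map I by the per-pixel maximum over color channels (a classical
    heuristic, not a learned decomposer)."""
    illumination = image.max(axis=-1, keepdims=True)   # (H, W, 1)
    reflectance = image / (illumination + eps)         # (H, W, 3)
    return reflectance, illumination

def enhance(image, gamma=0.45):
    """Brighten by gamma-adjusting the illumination map only, then
    recompose; reflectance (colors and details) is left untouched."""
    reflectance, illumination = retinex_decompose(image)
    return np.clip(reflectance * illumination ** gamma, 0.0, 1.0)

# Dim synthetic image with values in [0, 0.2], standing in for a
# real low-light photo.
low_light = np.random.rand(400, 600, 3) * 0.2
result = enhance(low_light)
```

Because illumination values lie below 1, raising them to a power less than 1 increases them, so the recomposed image is uniformly brighter while its chromatic content (the reflectance) is preserved.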
Related papers
- RSEND: Retinex-based Squeeze and Excitation Network with Dark Region Detection for Efficient Low Light Image Enhancement [1.7356500114422735]
We propose a more accurate, concise, and one-stage Retinex theory based framework, RSEND.
RSEND first divides the low-light image into the illumination map and reflectance map, then captures the important details in the illumination map and performs light enhancement.
Our Efficient Retinex model significantly outperforms other CNN-based models, achieving a PSNR improvement ranging from 0.44 dB to 4.2 dB in different datasets.
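The PSNR gains quoted above follow the standard peak signal-to-noise ratio definition, which can be computed as below. This is a generic metric implementation for reference, not RSEND's evaluation code.

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.full((4, 4), 0.5)       # toy ground-truth image
noisy = gt + 0.1                # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(gt, noisy), 2))  # → 20.0
```

A 0.44-4.2 dB improvement corresponds to roughly a 10-62% reduction in mean squared error against the ground truth.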
arXiv Detail & Related papers (2024-06-14T01:36:52Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - LLDiffusion: Learning Degradation Representations in Diffusion Models
for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z) - Retinexformer: One-stage Retinex-based Transformer for Low-light Image
Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z) - KinD-LCE Curve Estimation And Retinex Fusion On Low-Light Image [7.280719886684936]
This paper proposes an algorithm for low-illumination enhancement.
KinD-LCE uses a light curve estimation module to enhance the illumination map in the Retinex-decomposed image.
An illumination map and reflectance map fusion module is also proposed to restore image details and reduce detail loss.
arXiv Detail & Related papers (2022-07-19T11:49:21Z) - Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z) - BLNet: A Fast Deep Learning Framework for Low-Light Image Enhancement
with Noise Removal and Color Restoration [14.75902042351609]
We propose a very fast deep learning framework called Bringing the Lightness (denoted as BLNet).
Based on Retinex Theory, the decomposition net in our model can decompose low-light images into reflectance and illumination.
We conduct extensive experiments to demonstrate that our approach achieves a promising effect with good robustness and generalization.
arXiv Detail & Related papers (2021-06-30T10:06:16Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.