Simplifying Low-Light Image Enhancement Networks with Relative Loss
Functions
- URL: http://arxiv.org/abs/2304.02978v2
- Date: Fri, 4 Aug 2023 02:29:24 GMT
- Title: Simplifying Low-Light Image Enhancement Networks with Relative Loss
Functions
- Authors: Yu Zhang, Xiaoguang Di, Junde Wu, Rao Fu, Yong Li, Yue Wang, Yanwu Xu,
Guohui Yang, Chunhui Wang
- Abstract summary: We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first recognize two challenges: the need for a large receptive field to obtain global contrast, and the lack of an absolute reference.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
- Score: 14.63586364951471
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image enhancement is a common technique used to mitigate issues such as
severe noise, low brightness, low contrast, and color deviation in low-light
images. However, providing an optimal high-light image as a reference for
low-light image enhancement tasks is impossible, which makes the learning
process more difficult than in other image processing tasks. As a result, although
several low-light image enhancement methods have been proposed, most of them
are either too complex or fail to address all the issues in low-light images.
In this paper, to make learning easier in low-light image
enhancement, we introduce FLW-Net (Fast and LightWeight Network) and two
relative loss functions. Specifically, we first recognize two challenges that
limit the simplification of network structures in this task: the need for a
large receptive field to obtain global contrast and the lack of an absolute
reference. Then, we propose an efficient global feature information extraction
component and two loss functions based on relative information to overcome
these challenges. Finally, we conduct comparative experiments to demonstrate
the effectiveness of the proposed method, and the results confirm that it can
significantly reduce the complexity of supervised low-light image enhancement
networks while improving the enhancement results. The code is
available at https://github.com/hitzhangyu/FLW-Net.
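
The abstract only names the two ingredients, so as a rough illustration the sketch below shows one way a lightweight global-feature branch and losses based on relative (rather than absolute) information can be written in PyTorch. The module and loss definitions here are illustrative assumptions, not the formulations from the paper; the actual FLW-Net and relative losses are defined in the paper and the linked repository.

```python
# Illustrative PyTorch sketch only -- not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalFeatureBranch(nn.Module):
    """Hypothetical global-context block: pool the whole image into one vector,
    predict a per-channel scale and shift, and broadcast them back to every
    pixel, giving image-wide context without a large convolutional receptive
    field."""

    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 2 * channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        g = x.mean(dim=(2, 3))                      # global average pooling
        scale, shift = self.fc(g).view(b, 2 * c, 1, 1).chunk(2, dim=1)
        return x * (1 + scale) + shift              # inject global statistics


def relative_brightness_loss(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Compare deviations from each image's own mean brightness, so the loss
    depends on relative contrast rather than the reference's absolute exposure."""
    return F.l1_loss(pred - pred.mean(dim=(2, 3), keepdim=True),
                     ref - ref.mean(dim=(2, 3), keepdim=True))


def relative_structure_loss(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Compare local intensity differences (image gradients), which are
    unaffected by a global brightness offset in the reference."""
    loss = F.l1_loss(pred[..., :, 1:] - pred[..., :, :-1],
                     ref[..., :, 1:] - ref[..., :, :-1])
    loss = loss + F.l1_loss(pred[..., 1:, :] - pred[..., :-1, :],
                            ref[..., 1:, :] - ref[..., :-1, :])
    return loss
```

The common point of such losses is that they compare quantities an imperfect reference can still supply, such as deviations from each image's own mean brightness or local intensity differences, instead of absolute pixel values.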
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition [4.175396687130961]
We propose ZERRINNet, a new learning-based zero-shot low-light enhancement method based on Retinex decomposition (a generic sketch of the Retinex model appears after this list).
Our zero-reference method does not rely on paired or unpaired training data.
arXiv Detail & Related papers (2023-11-06T09:57:48Z)
- LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field Images [9.926231893220063]
Recent learning-based methods for low-light enhancement have their own disadvantages.
We propose an efficient Low-light Restoration Transformer (LRT) for LF images.
We show that our method can achieve superior performance on the restoration of extremely low-light and noisy LF images.
arXiv Detail & Related papers (2022-09-06T03:23:58Z)
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive with the state-of-the-art methods, and has a significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Burst Photography for Learning to Enhance Extremely Dark Images [19.85860245798819]
In this paper, we aim to leverage burst photography to boost the performance and obtain much sharper and more accurate RGB images from extremely dark raw images.
The backbone of our proposed framework is a novel coarse-to-fine network architecture that generates high-quality outputs progressively.
Our experiments demonstrate that our approach leads to perceptually more pleasing results than the state-of-the-art methods by producing more detailed and considerably higher quality images.
arXiv Detail & Related papers (2020-06-17T13:19:07Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
- Burst Denoising of Dark Images [19.85860245798819]
We propose a deep learning framework for obtaining clean and colorful RGB images from extremely dark raw images.
The backbone of our framework is a novel coarse-to-fine network architecture that generates high-quality outputs in a progressive manner.
Our experiments demonstrate that the proposed approach leads to perceptually more pleasing results than state-of-the-art methods.
arXiv Detail & Related papers (2020-03-17T17:17:36Z)
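
Several of the entries above (ZERRINNet, the algorithm-unrolling framework, Deep Bilateral Retinex) build on Retinex decomposition. As a generic illustration of that underlying model, and not of any of those specific methods, the sketch below decomposes an image into reflectance and illumination (I = R * L) and brightens only the illumination; the smoothing scale and gamma value are arbitrary choices for the example.

```python
# Generic Retinex-style enhancement sketch -- not any specific paper's method.
import numpy as np
from scipy.ndimage import gaussian_filter


def retinex_enhance(img: np.ndarray, gamma: float = 0.45, eps: float = 1e-4) -> np.ndarray:
    """img: float HxWx3 array in [0, 1]. Returns a brightened image."""
    # Rough illumination estimate: smoothed per-pixel max over color channels.
    illum = gaussian_filter(img.max(axis=2), sigma=15)
    illum = np.clip(illum, eps, 1.0)
    # Reflectance under the model I = R * L.
    reflect = img / illum[..., None]
    # Brighten the illumination with a gamma curve, then recompose.
    return np.clip(reflect * (illum ** gamma)[..., None], 0.0, 1.0)
```

Learning-based Retinex methods replace the hand-crafted smoothing and gamma curve with networks that estimate and adjust the reflectance and illumination, which is roughly the role of the decomposition and adjustment networks mentioned above.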
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.