Learning to Adapt to Light
- URL: http://arxiv.org/abs/2202.08098v1
- Date: Wed, 16 Feb 2022 14:36:25 GMT
- Title: Learning to Adapt to Light
- Authors: Kai-Fu Yang, Cheng Cheng, Shi-Xuan Zhao, Xian-Shi Zhang, Yong-Jie Li
- Abstract summary: We propose a biologically inspired method to handle light-related image-enhancement tasks with a unified network (called LA-Net).
A new module is built inspired by biological visual adaptation to achieve unified light adaptation in the low-frequency pathway.
Experiments on three tasks -- low-light enhancement, exposure correction, and tone mapping -- demonstrate that the proposed method achieves near state-of-the-art performance.
- Score: 14.919947487248653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light adaptation or brightness correction is a key step in improving the
contrast and visual appeal of an image. There are multiple light-related tasks
(for example, low-light enhancement and exposure correction) and previous
studies have mainly investigated these tasks individually. However, it is
interesting to consider whether these light-related tasks can be executed by a
unified model, especially considering that our visual system adapts to external
light in this way. In this study, we propose a biologically inspired method to
handle light-related image-enhancement tasks with a unified network (called
LA-Net). First, a frequency-based decomposition module is designed to decouple
the common and characteristic sub-problems of light-related tasks into two
pathways. Then, a new module is built inspired by biological visual adaptation
to achieve unified light adaptation in the low-frequency pathway. In addition,
noise suppression or detail enhancement is achieved effectively in the
high-frequency pathway regardless of the light levels. Extensive experiments on
three tasks -- low-light enhancement, exposure correction, and tone mapping --
demonstrate that the proposed method achieves near state-of-the-art
performance compared with recent methods designed for these individual tasks.
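The core idea of the abstract -- splitting an image into a low-frequency pathway for light adaptation and a high-frequency pathway for detail handling -- can be illustrated with a minimal sketch. This is not LA-Net's learned architecture: the box blur and the gamma curve below are simple stand-ins (assumptions for illustration) for the paper's frequency-based decomposition module and its visual-adaptation module.

```python
import numpy as np

def box_blur(image, k=3):
    """Simple box blur used as a low-pass filter (edge-replicated padding)."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def decompose(image, k=3):
    """Split an image into a low-frequency part (overall illumination)
    and a high-frequency residual (edges, texture, noise)."""
    low = box_blur(image, k)
    high = image - low
    return low, high

def gamma_adapt(low, gamma=0.6):
    """Toy brightness adaptation: a fixed gamma curve on the low-frequency
    component, standing in for a learned light-adaptation module."""
    return np.clip(low, 0.0, 1.0) ** gamma

# Demo on a synthetic under-exposed image (all pixels at 0.1)
img = np.full((8, 8), 0.1)
low, high = decompose(img, k=3)
enhanced = np.clip(gamma_adapt(low) + high, 0.0, 1.0)
```

Because the high-frequency residual is added back unchanged, only the illumination is adjusted; a real pipeline would also denoise or sharpen the high-frequency pathway depending on the light level.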
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
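The "quantized priors" in a codebook-driven approach amount to replacing degraded features with their nearest entries from a learned codebook. The sketch below shows only that nearest-neighbor lookup, with random arrays standing in for learned codes and features (illustrative assumptions, not CodeEnhance's actual model).

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to its nearest codebook entry --
    the lookup at the heart of a codebook-driven prior."""
    # Squared distances between every feature and every code: shape (N, K)
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))  # 16 "learned" codes of dimension 4
features = rng.normal(size=(5, 4))   # 5 feature vectors from a degraded image
quantized, idx = quantize(features, codebook)
```

Snapping features onto a codebook learned from high-quality images is what constrains the restoration and reduces the uncertainty mentioned above.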
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract the differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Joint Correcting and Refinement for Balanced Low-Light Image Enhancement [26.399356992450763]
A novel structure is proposed which can balance brightness, color, and illumination more effectively.
The Joint Correcting and Refinement Network (JCRNet) mainly consists of three stages that balance brightness, color, and illumination during enhancement.
arXiv Detail & Related papers (2023-09-28T03:16:45Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach in diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first recognize the challenges of the need for a large receptive field to obtain global contrast.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
- Learning with Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision [95.45256938467237]
Images captured from low-light scenes often suffer from severe degradations.
Deep learning methods have been proposed to enhance the visual quality of low-light images.
It is still challenging to extend these enhancement techniques to handle other Low-Light Vision applications.
arXiv Detail & Related papers (2021-12-09T06:08:31Z)
- Progressive Joint Low-light Enhancement and Noise Removal for Raw Images [10.778200442212334]
Low-light imaging on mobile devices is typically challenging due to insufficient incident light coming through the relatively small aperture.
We propose a low-light image processing framework that performs joint illumination adjustment, color enhancement, and denoising.
Our framework does not need to recollect massive data when being adapted to another camera model.
arXiv Detail & Related papers (2021-06-28T16:43:52Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods and has a significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.