Visual Perception Model for Rapid and Adaptive Low-light Image
Enhancement
- URL: http://arxiv.org/abs/2005.07343v1
- Date: Fri, 15 May 2020 03:47:10 GMT
- Title: Visual Perception Model for Rapid and Adaptive Low-light Image
Enhancement
- Authors: Xiaoxiao Li, Xiaopeng Guo, Liye Mei, Mingyu Shang, Jie Gao, Maojing
Shu, and Xiang Wang
- Abstract summary: Low-light image enhancement is a promising solution to the insufficient sensitivity of the human vision system (HVS) in low-light environments.
Previous Retinex-based works typically accomplish the enhancement task by estimating light intensity.
We propose a visual perception (VP) model to acquire a precise mathematical description of visual perception.
- Score: 19.955016389978926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement is a promising solution to the
problem of the human vision system's (HVS) insufficient sensitivity to
perceive information in low-light environments. Previous Retinex-based works
typically accomplish the enhancement task by estimating light intensity.
Unfortunately, modelling the light intensity alone can hardly simulate visual
perception information accurately, leading to imbalanced visual
photosensitivity and weak adaptivity. To solve these problems, we explore the
precise relationship
between light source and visual perception and then propose the visual
perception (VP) model to acquire a precise mathematical description of visual
perception. The core of VP model is to decompose the light source into light
intensity and light spatial distribution to describe the perception process of
HVS, offering a refined estimation of illumination and reflectance. To reduce
the complexity of the estimation process, we introduce the rapid and adaptive
$\mathbf{\beta}$ and $\mathbf{\gamma}$ functions to build an illumination and
reflectance estimation scheme. Finally, we present an optimal determination
strategy, consisting of a \emph{cycle operation} and a \emph{comparator}.
Specifically, the \emph{comparator} determines the optimal enhancement result
from the multiple enhanced results produced through the \emph{cycle
operation}. By coordinating the proposed VP model, illumination and
reflectance estimation scheme, and the optimal determination strategy, we
propose a rapid and adaptive framework for low-light image enhancement.
Extensive experimental results demonstrate that the proposed method achieves
better performance in terms of visual comparison, quantitative assessment, and
computational efficiency than current state-of-the-art methods.
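The cycle-operation/comparator idea can be sketched as follows: enumerate several enhanced candidates and let a comparator pick the best one. The gamma-curve family, the mid-tone brightness target, and all names below are illustrative assumptions, not the paper's actual $\beta$/$\gamma$ estimation scheme.

```python
def enhance_candidates(pixels, gammas=(0.4, 0.6, 0.8)):
    # Cycle operation (illustrative): produce one enhanced version of the
    # image per candidate gamma curve.
    return [[p ** g for p in pixels] for g in gammas]

def comparator(candidates, target_mean=0.5):
    # Comparator (illustrative): choose the candidate whose mean brightness
    # is closest to a mid-tone target.
    def mean(c):
        return sum(c) / len(c)
    scores = [abs(mean(c) - target_mean) for c in candidates]
    return candidates[scores.index(min(scores))]

low_light = [0.05, 0.1, 0.15, 0.2]  # toy image, normalized intensities
best = comparator(enhance_candidates(low_light))
```

Any no-reference quality score could replace the brightness criterion; the point is only that the comparator selects among the cycle's outputs.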
Related papers
- Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors [38.96909959677438]
Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments.
Recent advanced LIE techniques use deep neural networks, which require many low/normal-light image pairs, large numbers of network parameters, and substantial computational resources.
We devise a novel unsupervised LIE framework based on diffusion priors and lookup tables to achieve efficient low-light image recovery.
arXiv Detail & Related papers (2024-09-27T16:37:27Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction.
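A minimal sketch of that idea: since x**gamma = exp(gamma * ln x), the exponential can be replaced by its truncated Taylor series. The expansion point and order below are assumptions, not the paper's actual choices.

```python
import math

def gamma_taylor(x, gamma, terms=8):
    # x ** gamma == exp(gamma * ln(x)); approximate exp(u) by the truncated
    # Taylor series sum(u**k / k!), k = 0 .. terms - 1.
    u = gamma * math.log(x)
    return sum(u ** k / math.factorial(k) for k in range(terms))

approx = gamma_taylor(0.5, 0.6)  # compare against the exact 0.5 ** 0.6
```

The truncation error shrinks as |gamma * ln x| decreases, so fewer terms suffice for pixels that need only mild correction.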
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Visibility Enhancement for Low-light Hazy Scenarios [18.605784907840473]
Low-light hazy scenes commonly appear at dusk and early morning.
We propose a novel method to enhance visibility for low-light hazy scenarios.
The framework is designed for enhancing visibility of the input image via fully utilizing the clues from different sub-tasks.
The simulation is designed for generating the dataset with ground-truths by the proposed low-light hazy imaging model.
arXiv Detail & Related papers (2023-08-01T15:07:38Z)
- LUT-GCE: Lookup Table Global Curve Estimation for Fast Low-light Image Enhancement [62.17015413594777]
We present an effective and efficient approach for low-light image enhancement, named LUT-GCE.
We estimate a global curve for the entire image that allows corrections for both under- and over-exposure.
Our approach outperforms the state of the art in terms of inference speed, especially on high-definition images (e.g., 1080p and 4K).
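The lookup-table mechanics behind that speed can be sketched as below; the fixed gamma curve is an assumption, since LUT-GCE estimates its own global curve per image.

```python
GAMMA = 0.5  # assumed fixed curve; LUT-GCE would estimate this per image
# Precompute a 256-entry table so the per-pixel correction is one lookup,
# regardless of how expensive the curve itself is to evaluate.
lut = [round(255 * (i / 255) ** GAMMA) for i in range(256)]

def apply_lut(img):
    # img: nested lists of 8-bit intensities in [0, 255]
    return [[lut[p] for p in row] for row in img]

dark = [[0, 16, 64], [128, 192, 255]]
bright = apply_lut(dark)
```

Because the table has only 256 entries, the cost of applying a global curve is independent of image resolution, which is why the advantage grows at 1080p and 4K.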
arXiv Detail & Related papers (2023-06-12T12:53:06Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Deep Quantigraphic Image Enhancement via Comparametric Equations [15.782217616496055]
We propose a novel trainable module that diversifies the conversion from the low-light image and illumination map to the enhanced image.
Our method improves the flexibility of deep image enhancement, limits the computational burden to illumination estimation, and allows for fully unsupervised learning adaptable to the diverse demands of different tasks.
arXiv Detail & Related papers (2023-04-05T08:14:41Z)
- Self-Aligned Concave Curve: Illumination Enhancement for Unsupervised Adaptation [36.050270650417325]
We propose a learnable illumination enhancement model for high-level vision.
Inspired by real camera response functions, we assume that the illumination enhancement function should be a concave curve.
Our model architecture and training designs mutually benefit each other, forming a powerful unsupervised normal-to-low light adaptation framework.
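One concrete curve of the assumed concave family is the quadratic below; it is only an illustration, not the paper's learned self-aligned curve.

```python
def concave_enhance(x, alpha=0.8):
    # f(x) = x + alpha * x * (1 - x): concave, since f''(x) = -2 * alpha < 0;
    # f(0) = 0 and f(1) = 1 keep outputs in [0, 1], and every x in (0, 1)
    # is brightened, matching the behavior of a camera response function.
    return x + alpha * x * (1.0 - x)

vals = [concave_enhance(v) for v in (0.0, 0.25, 0.5, 1.0)]
```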
arXiv Detail & Related papers (2022-10-07T19:32:55Z)
- Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets and allows lighting estimation in both frequency domain and spatial domain jointly.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z)
- GMLight: Lighting Estimation via Geometric Distribution Approximation [86.95367898017358]
This paper presents a lighting estimation framework that employs a regression network and a generative projector for effective illumination estimation.
We parameterize illumination scenes in terms of the geometric light distribution, light intensity, ambient term, and auxiliary depth, and estimate them as a pure regression task.
With the estimated lighting parameters, the generative projector synthesizes panoramic illumination maps with realistic appearance and frequency.
arXiv Detail & Related papers (2021-02-20T03:31:52Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive to the state-of-the-art methods, and has significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.