Adaptive Low Light Enhancement via Joint Global-Local Illumination Adjustment
- URL: http://arxiv.org/abs/2504.00400v1
- Date: Tue, 01 Apr 2025 03:46:28 GMT
- Title: Adaptive Low Light Enhancement via Joint Global-Local Illumination Adjustment
- Authors: Haodian Wang, Yaqi Song
- Abstract summary: We propose a novel brightness-adaptive enhancement framework to tackle the challenge of local exposure inconsistencies in low-light images. Our method achieves superior quantitative and qualitative results compared to state-of-the-art algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images captured under real-world low-light conditions face significant challenges due to uneven ambient lighting, making it difficult for existing end-to-end methods to enhance images with a large dynamic range to normal exposure levels. To address this issue, we propose a novel brightness-adaptive enhancement framework designed to tackle the challenge of local exposure inconsistencies in real-world low-light images. Specifically, our proposed framework comprises two components: the Local Contrast Enhancement Network (LCEN) and the Global Illumination Guidance Network (GIGN). We introduce an early stopping mechanism in the LCEN and design a local discriminative module, which adaptively perceives the contrast of different areas in the image to control the early termination of the enhancement process for patches with varying exposure levels. Additionally, within the GIGN, we design a global attention guidance module that effectively models global illumination by capturing long-range dependencies and contextual information within the image, which guides the local contrast enhancement network to significantly improve brightness across different regions. Finally, we design a novel training strategy to coordinate the LCEN and GIGN during optimization. Experiments on multiple datasets demonstrate that our method achieves superior quantitative and qualitative results compared to state-of-the-art algorithms.
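The abstract describes a two-branch design: a patch-wise Local Contrast Enhancement Network (LCEN) whose iterative refinement stops early once a local discriminative check deems the contrast sufficient, guided by a Global Illumination Guidance Network (GIGN) that models long-range illumination context. The following minimal PyTorch sketch illustrates that control flow only; the module shapes, the attention choice, the patch size, and the early-stopping criterion are our own assumptions, not the authors' implementation.

```python
# Minimal sketch of the LCEN/GIGN control flow described in the abstract.
# Untrained toy modules; all names, sizes, and thresholds are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalIlluminationGuidance(nn.Module):
    """Stand-in for GIGN: produces a per-pixel illumination guidance map."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Single-head self-attention over a downsampled grid as a cheap proxy
        # for long-range dependency modelling.
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)
        self.head = nn.Conv2d(channels, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(x)
        small = F.adaptive_avg_pool2d(feat, (16, 16))           # B,C,16,16
        tokens = small.flatten(2).transpose(1, 2)               # B,256,C
        ctx, _ = self.attn(tokens, tokens, tokens)              # global context
        ctx = ctx.transpose(1, 2).reshape_as(small)
        ctx = F.interpolate(ctx, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        return torch.sigmoid(self.head(feat + ctx))             # B,1,H,W in (0,1)


class LocalContrastEnhancer(nn.Module):
    """Stand-in for LCEN: one refinement step, applied iteratively per patch."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.step = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, patch: torch.Tensor, guide: torch.Tensor,
                max_iters: int = 4, contrast_thresh: float = 0.18) -> torch.Tensor:
        out = patch
        for _ in range(max_iters):
            # Early stopping: a crude "local discriminative" check on luminance spread.
            if out.mean(dim=1).std() > contrast_thresh:
                break
            out = torch.clamp(out + self.step(torch.cat([out, guide], dim=1)), 0, 1)
        return out


def enhance(img: torch.Tensor, patch: int = 64) -> torch.Tensor:
    """Split the image into patches and enhance each one under global guidance."""
    gign, lcen = GlobalIlluminationGuidance(), LocalContrastEnhancer()
    guide = gign(img)
    out = img.clone()
    _, _, H, W = img.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            out[:, :, y:y + patch, x:x + patch] = lcen(
                img[:, :, y:y + patch, x:x + patch],
                guide[:, :, y:y + patch, x:x + patch])
    return out


if __name__ == "__main__":
    dark = torch.rand(1, 3, 128, 128) * 0.2   # synthetic low-light input
    print(enhance(dark).shape)                # torch.Size([1, 3, 128, 128])
```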
Related papers
- Brightness Perceiving for Recursive Low-Light Image Enhancement [8.926230015423624]
We propose a brightness-perceiving-based framework for high dynamic range low-light image enhancement.
Our framework consists of two parallel sub-networks: the Adaptive Contrast and Texture enhancement network (ACT-Net) and the Brightness Perception network (BP-Net).
Compared with eleven existing representative methods, the proposed method achieves new state-of-the-art performance on six reference and no-reference metrics.
arXiv Detail & Related papers (2025-04-03T07:53:33Z) - Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates the unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, contributing to enhancing images with natural colors.
We also propose a novel perceptual loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer texture details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z) - ALEN: A Dual-Approach for Uniform and Non-Uniform Low-Light Image Enhancement [10.957431540794836]
Inadequate illumination can lead to significant information loss and poor image quality, impacting various applications such as surveillance.
Current enhancement techniques often use specific datasets to enhance low-light images, but still present challenges when adapting to diverse real-world conditions.
The Adaptive Light Enhancement Network (ALEN) is introduced, whose main approach is the use of a classification mechanism to determine whether local or global illumination enhancement is required.
arXiv Detail & Related papers (2024-07-29T05:19:23Z) - A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z) - Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z) - Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction (a minimal numerical sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-08-16T08:46:51Z) - Learning a Single Convolutional Layer Model for Low Light Image Enhancement [43.411846299085575]
Low-light image enhancement (LLIE) aims to improve the illuminance of images captured under insufficient light exposure.
A single convolutional layer model (SCLM) is proposed that provides global low-light enhancement as the coarsely enhanced results.
Experimental results demonstrate that the proposed method performs favorably against the state-of-the-art LLIE methods in both objective metrics and subjective visual effects.
arXiv Detail & Related papers (2023-05-23T13:12:00Z) - Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feeds the features of low-light images forward from the generator of the enhancement GAN into the generator of the degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z) - Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
arXiv Detail & Related papers (2021-01-04T09:40:54Z) - Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
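As referenced in the gamma-correction entry above, the following is a minimal numerical sketch (our own illustration, not that paper's implementation) of approximating gamma correction with a truncated Taylor series: writing x^gamma = exp(gamma * ln x) and truncating the series exp(t) = sum_k t^k / k!. The truncation order and test values below are arbitrary choices.

```python
# Generic illustration of Taylor-approximated gamma correction; order and
# gamma value are arbitrary assumptions, not the paper's settings.
import math


def gamma_taylor(x: float, gamma: float, order: int = 4) -> float:
    """Approximate x**gamma for x in (0, 1] via a truncated exponential series."""
    t = gamma * math.log(x)
    return sum(t ** k / math.factorial(k) for k in range(order + 1))


if __name__ == "__main__":
    gamma = 0.45  # a typical brightening gamma for dark pixels
    for x in (0.05, 0.2, 0.5, 0.9):
        exact = x ** gamma
        approx = gamma_taylor(x, gamma, order=4)
        print(f"x={x:.2f}  exact={exact:.4f}  taylor={approx:.4f}")
```

The truncated series avoids the exponential at the cost of accuracy for very dark pixels, where gamma * ln x is large in magnitude; increasing the order narrows that gap.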