Illuminating Darkness: Learning to Enhance Low-light Images In-the-Wild
- URL: http://arxiv.org/abs/2503.06898v2
- Date: Tue, 22 Jul 2025 03:10:47 GMT
- Authors: S M A Sharif, Abdur Rehman, Zain Ul Abidin, Fayaz Ali Dharejo, Radu Timofte, Rizwan Ali Naqvi
- Abstract summary: We introduce the Low-Light Smartphone Dataset (LSD), a large-scale, high-resolution (4K+) dataset collected in the wild. LSD contains 6,425 precisely aligned low and normal-light image pairs, selected from over 8,000 dynamic indoor and outdoor scenes. We propose TFFormer, a hybrid model that encodes luminance and chrominance separately to reduce color-structure entanglement.
- Score: 47.39277249268179
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Single-shot low-light image enhancement (SLLIE) remains challenging due to the limited availability of diverse, real-world paired datasets. To bridge this gap, we introduce the Low-Light Smartphone Dataset (LSD), a large-scale, high-resolution (4K+) dataset collected in the wild across a wide range of challenging lighting conditions (0.1 to 200 lux). LSD contains 6,425 precisely aligned low and normal-light image pairs, selected from over 8,000 dynamic indoor and outdoor scenes through multi-frame acquisition and expert evaluation. To evaluate generalization and aesthetic quality, we collect 2,117 unpaired low-light images from previously unseen devices. To fully exploit LSD, we propose TFFormer, a hybrid model that encodes luminance and chrominance (LC) separately to reduce color-structure entanglement. We further propose a cross-attention-driven joint decoder for context-aware fusion of LC representations, along with LC refinement and LC-guided supervision to significantly enhance perceptual fidelity and structural consistency. TFFormer achieves state-of-the-art results on LSD (+2.45 dB PSNR) and substantially improves downstream vision tasks, such as low-light object detection (+6.80 mAP on ExDark).
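TFFormer's core idea, encoding luminance and chrominance (LC) separately before fusing them, can be sketched at the signal level. The paper's LC encoders are learned; the fixed ITU-R BT.601 transform below is only an illustrative stand-in for the separation step, not the model itself:

```python
import numpy as np

def rgb_to_luma_chroma(rgb):
    """Split an RGB image (H, W, 3, values in [0, 1]) into a luminance
    channel Y (structure) and two chrominance channels Cb, Cr (color),
    using the ITU-R BT.601 transform. TFFormer's LC branches are
    learned; this fixed transform only illustrates the separation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b       # luminance
    cb = 0.5 + 0.564 * (b - y)                  # blue-difference chroma
    cr = 0.5 + 0.713 * (r - y)                  # red-difference chroma
    return y, np.stack([cb, cr], axis=-1)
```

A gray pixel maps to chroma 0.5 (neutral) with all brightness carried by Y, which is why enhancing Y while lightly refining Cb/Cr reduces color-structure entanglement.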
Related papers
- Evaluating Low-Light Image Enhancement Across Multiple Intensity Levels [30.012366353273222]
We introduce the Multi-Illumination Low-Light dataset, containing images captured at diverse light intensities under controlled conditions. We benchmark several state-of-the-art methods and reveal significant performance variations across intensity levels. Our modifications achieve up to 10 dB PSNR improvement for DSLR and 2 dB for the smartphone on Full HD images.
arXiv Detail & Related papers (2025-11-19T14:52:51Z)
- SAIGFormer: A Spatially-Adaptive Illumination-Guided Network for Low-Light Image Enhancement [58.79901582809091]
Recent Transformer-based low-light enhancement methods have made promising progress in recovering global illumination. We present a Spatially-Adaptive Illumination-Guided Transformer framework that enables accurate illumination restoration.
arXiv Detail & Related papers (2025-07-21T11:38:56Z)
- Towards Lightest Low-Light Image Enhancement Architecture for Mobile Devices [3.7651572719063178]
Real-time low-light image enhancement on mobile and embedded devices requires models that balance visual quality and computational efficiency. We propose LiteIE, an ultra-lightweight unsupervised enhancement framework that eliminates dependence on large-scale supervision. LiteIE runs at 30 FPS for 4K images with just 58 parameters, enabling real-time deployment on edge devices.
arXiv Detail & Related papers (2025-07-06T07:36:47Z)
- Learning to See in the Extremely Dark [41.386150786725295]
We propose a paired-to-paired data synthesis pipeline capable of generating well-calibrated extremely low-light RAW images. A large-scale paired dataset named See-in-the-Extremely-Dark (SIED) is used to benchmark low-light RAW image enhancement approaches. A diffusion-based framework is proposed to restore visually pleasing results from extremely low-SNR RAW inputs.
arXiv Detail & Related papers (2025-06-26T10:24:07Z)
- SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events [53.79905461386883]
Event cameras, with a high dynamic range exceeding 120 dB, significantly outperform traditional embedded cameras. We propose a novel research question: how to employ events to enhance and adaptively adjust the brightness of images captured under broad lighting conditions. Our framework captures color through sensor patterns, uses cross-attention to model events as a brightness dictionary, and adjusts the image's dynamic range to form a broad light-range representation.
arXiv Detail & Related papers (2025-02-28T14:55:37Z)
- LUMINA-Net: Low-light Upgrade through Multi-stage Illumination and Noise Adaptation Network for Image Enhancement [26.585985828583304]
Low-light image enhancement (LLIE) is a crucial task in computer vision aimed at enhancing the visual fidelity of images captured under low-illumination conditions. We propose LUMINA-Net, an unsupervised deep learning framework that learns adaptive priors from low-light image pairs by integrating multi-stage illumination and reflectance modules.
arXiv Detail & Related papers (2025-02-21T03:37:58Z)
- Deep Joint Unrolling for Deblurring and Low-Light Image Enhancement (JUDE) [5.013248430919224]
JUDE is a Deep Joint Unrolling framework for Deblurring and Low-Light Image Enhancement. Based on Retinex theory and the blurring model, the low-light blurry input is iteratively deblurred and decomposed. We incorporate various modules to estimate the initial blur kernel, enhance brightness, and eliminate noise in the final image.
arXiv Detail & Related papers (2024-12-10T14:03:41Z)
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
- Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach [7.974102031202597]
We propose a real-world (indoor and outdoor) dataset comprising over 30K pairs of images and events under both low and normal illumination conditions.
Based on the dataset, we propose a novel event-guided LIE approach, called EvLight, towards robust performance in real-world low-light scenes.
arXiv Detail & Related papers (2024-04-01T00:18:17Z)
- BVI-Lowlight: Fully Registered Benchmark Dataset for Low-Light Video Enhancement [44.1973928137492]
This paper introduces a novel low-light video dataset, consisting of 40 scenes in various motion scenarios under two low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly.
We refine them via image-based post-processing to ensure the pixel-wise alignment of frames in different light levels.
arXiv Detail & Related papers (2024-02-03T00:40:22Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- CERL: A Unified Optimization Framework for Light Enhancement with Realistic Noise [81.47026986488638]
Low-light images captured in the real world are inevitably corrupted by sensor noise.
Existing light enhancement methods either overlook the important impact of real-world noise during enhancement, or treat noise removal as a separate pre- or post-processing step.
We present Coordinated Enhancement for Real-world Low-light Noisy Images (CERL), which seamlessly integrates light enhancement and noise suppression into a unified and physics-grounded framework.
arXiv Detail & Related papers (2021-08-01T15:31:15Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- L^2UWE: A Framework for the Efficient Enhancement of Low-Light Underwater Images Using Local Contrast and Multi-Scale Fusion [84.11514688735183]
We present a novel single-image low-light underwater image enhancer, L2UWE, that builds on our observation that an efficient model of atmospheric lighting can be derived from local contrast information.
A multi-scale fusion process is employed to combine these images while emphasizing regions of higher luminance, saliency and local contrast.
We demonstrate the performance of L2UWE by using seven metrics to test it against seven state-of-the-art enhancement methods specific to underwater and low-light scenes.
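The luminance/saliency/contrast-weighted fusion described above can be sketched per pixel. This is a single-scale simplification of our own devising; L2UWE itself fuses multi-scale (pyramid) representations of the candidate images:

```python
import numpy as np

def weighted_fusion(images, weights, eps=1e-8):
    """Blend candidate enhanced images (each H x W) using per-pixel
    weight maps (e.g. luminance, saliency, local contrast), with the
    weights normalized to sum to one at every pixel. A single-scale
    sketch of the multi-scale fusion idea."""
    w = np.stack(weights) + eps                  # (N, H, W), avoid /0
    w /= w.sum(axis=0, keepdims=True)            # normalize per pixel
    return (np.stack(images) * w).sum(axis=0)    # weighted average
```

With equal weights this reduces to a plain average; in practice the weight maps steer the result toward the better-exposed, higher-contrast candidate at each pixel.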
arXiv Detail & Related papers (2020-05-28T01:57:32Z)
- Extreme Low-Light Imaging with Multi-granulation Cooperative Networks [18.438827277749525]
Low-light imaging is challenging since images may appear dark and noisy due to the low signal-to-noise ratio, complex image content, and the variety of shooting scenes in extreme low-light conditions.
Many methods have been proposed to enhance the imaging quality under extreme low-light conditions, but it remains difficult to obtain satisfactory results.
arXiv Detail & Related papers (2020-05-16T14:26:06Z)
- VIDIT: Virtual Image Dataset for Illumination Transfer [18.001635516017902]
We present a novel dataset, the Virtual Image Dataset for Illumination Transfer (VIDIT).
VIDIT contains 300 virtual scenes used for training, where every scene is captured 40 times in total: from 8 equally-spaced azimuthal angles, each lit with 5 different illuminants.
arXiv Detail & Related papers (2020-05-11T21:58:03Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.