Low-Light Image and Video Enhancement: A Comprehensive Survey and Beyond
- URL: http://arxiv.org/abs/2212.10772v5
- Date: Mon, 1 Jan 2024 05:40:17 GMT
- Title: Low-Light Image and Video Enhancement: A Comprehensive Survey and Beyond
- Authors: Shen Zheng, Yiling Ma, Jinqian Pan, Changjie Lu, Gaurav Gupta
- Abstract summary: This paper presents a comprehensive survey of low-light image and video enhancement, addressing two primary challenges in the field.
The first challenge is the prevalence of mixed over-/under-exposed images, which are not adequately addressed by existing methods.
The second challenge is the scarcity of suitable low-light video datasets for training and testing.
- Score: 8.355226305081835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a comprehensive survey of low-light image and video
enhancement, addressing two primary challenges in the field. The first
challenge is the prevalence of mixed over-/under-exposed images, which are not
adequately addressed by existing methods. In response, this work introduces two
enhanced variants of the SICE dataset: SICE_Grad and SICE_Mix, designed to
better represent these complexities. The second challenge is the scarcity of
suitable low-light video datasets for training and testing. To address this,
the paper introduces the Night Wenzhou dataset, a large-scale, high-resolution
video collection that features challenging fast-moving aerial scenes and
streetscapes with varied illuminations and degradation. This study also
conducts an extensive analysis of key techniques and performs comparative
experiments using the proposed and current benchmark datasets. The survey
concludes by highlighting emerging applications, discussing unresolved
challenges, and suggesting future research directions within the low-light image enhancement (LLIE)
community. The datasets are available at
https://github.com/ShenZheng2000/LLIE_Survey.
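The comparative experiments mentioned in the abstract rely on standard full-reference quality metrics computed between an enhanced image and its normal-light reference. As an illustration only (not code from the paper or its repository), the sketch below computes PSNR for 8-bit images; the metric choice and array handling are assumptions.

```python
import numpy as np

def psnr(enhanced: np.ndarray, reference: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between an enhanced image and its reference.

    Both inputs are expected to be 8-bit arrays of identical shape; higher is better.
    """
    diff = enhanced.astype(np.float64) - reference.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Hypothetical usage (file names are placeholders):
# enhanced = np.array(Image.open("enhanced.png"))   # requires Pillow
# reference = np.array(Image.open("reference.png"))
# print(f"PSNR: {psnr(enhanced, reference):.2f} dB")
```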
Related papers
- HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision [16.432164340779266]
We introduce the HUE dataset, a collection of high-resolution event and frame sequences captured in low-light conditions.
Our dataset includes 106 sequences, encompassing indoor, cityscape, twilight, night, driving, and controlled scenarios.
We employ both qualitative and quantitative evaluations to assess state-of-the-art low-light enhancement and event-based image reconstruction methods.
arXiv Detail & Related papers (2024-10-24T21:15:15Z)
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE), and a comprehensive evaluation shows that models trained on our dataset outperform those trained on existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
- Learning Exposure Correction in Dynamic Scenes [24.302307771649232]
We construct the first real-world paired video dataset, covering both underexposed and overexposed dynamic scenes.
We propose an end-to-end video exposure correction network, in which a dual-stream module is designed to deal with both underexposure and overexposure factors.
arXiv Detail & Related papers (2024-02-27T08:19:51Z)
- BVI-Lowlight: Fully Registered Benchmark Dataset for Low-Light Video Enhancement [44.1973928137492]
This paper introduces a novel low-light video dataset, consisting of 40 scenes in various motion scenarios under two low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly.
We refine this data via image-based post-processing to ensure pixel-wise alignment of frames across different light levels.
arXiv Detail & Related papers (2024-02-03T00:40:22Z)
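Both BVI entries above describe image-based post-processing for pixel-wise alignment between frames captured at different light levels. Their actual registration pipeline is not reproduced here; the following minimal sketch estimates a purely translational offset with FFT phase correlation, under the assumption that only global shifts separate the frame pairs.

```python
import numpy as np

def estimate_shift(reference: np.ndarray, moving: np.ndarray):
    """Estimate the integer (row, col) roll that aligns `moving` to `reference`
    using phase correlation on single-channel float arrays of equal shape.
    The normalized cross-power spectrum makes the peak largely insensitive to the
    overall brightness gap between a low-light frame and its normal-light pair.
    """
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))
    cross_power /= np.abs(cross_power) + 1e-12       # normalize; epsilon avoids divide-by-zero
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # FFTs are circular, so wrap large peak indices back to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, correlation.shape))

def align(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Shift `moving` so that it lines up with `reference` (translation only)."""
    dy, dx = estimate_shift(reference, moving)
    return np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
```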
- Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
arXiv Detail & Related papers (2023-09-03T08:07:26Z)
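For context on the MEF task in the entry above, here is a sketch of a classical exposure-fusion baseline (Mertens et al.) using OpenCV. It is not the paper's HSDS-MEF method, it learns nothing, and the file names in the usage comment are placeholders; OpenCV (cv2) is assumed to be installed.

```python
import cv2
import numpy as np

def fuse_exposures(image_paths):
    """Fuse a bracketed exposure stack of one scene into a single well-exposed image.

    Uses OpenCV's Mertens exposure fusion, which blends the inputs with per-pixel
    weights based on contrast, saturation, and well-exposedness; no exposure times
    or learned loss functions are involved.
    """
    images = [cv2.imread(p) for p in image_paths]      # BGR uint8 frames
    fused = cv2.createMergeMertens().process(images)   # float32 result, roughly in [0, 1]
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# Hypothetical usage with an under-, normal-, and over-exposed capture:
# result = fuse_exposures(["under.jpg", "normal.jpg", "over.jpg"])
# cv2.imwrite("fused.jpg", result)
```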
- Few-shot Partial Multi-view Learning [103.33865779721458]
We propose a new task called few-shot partial multi-view learning.
It focuses on overcoming the negative impact of the view-missing issue in the low-data regime.
We conduct extensive experiments to evaluate our method.
arXiv Detail & Related papers (2021-05-05T13:34:43Z)
- NTIRE 2021 Challenge on Quality Enhancement of Compressed Video: Dataset and Study [95.36629866768999]
This paper introduces a novel dataset for video enhancement and studies the state-of-the-art methods of the NTIRE 2021 challenge.
The challenge is the first NTIRE challenge in this direction, with three competitions, hundreds of participants and tens of proposed solutions.
We find that the NTIRE 2021 challenge advances the state-of-the-art of quality enhancement on compressed video.
arXiv Detail & Related papers (2021-04-21T22:18:33Z)
- Lighting the Darkness in the Deep Learning Era [118.35081853500411]
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
Recent advances in this area are dominated by deep learning-based solutions.
We provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues.
arXiv Detail & Related papers (2021-04-21T19:12:19Z)
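As a point of reference for the LLIE task defined in the survey entry above, the sketch below applies plain gamma correction, one of the classical non-learning baselines that deep LLIE methods are commonly compared against; the gamma value is an arbitrary illustrative choice.

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float = 0.45) -> np.ndarray:
    """Brighten a low-light 8-bit image with power-law (gamma) correction.

    A gamma below 1 expands dark tones; 0.45 is an illustrative default. This
    baseline ignores noise and color distortion, which is exactly where
    learning-based LLIE methods aim to do better.
    """
    normalized = image.astype(np.float32) / 255.0
    corrected = np.power(normalized, gamma)
    return (corrected * 255.0).clip(0, 255).astype(np.uint8)
```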
- Illumination Estimation Challenge: experience of past two years [57.13714732760851]
The 2nd Illumination Estimation Challenge (IEC#2) was conducted.
The challenge had several tracks: general, indoor, and two-illuminant, each focusing on different parameters of the scenes.
Its other main features are a new large dataset of about 5,000 images taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes captured in numerous countries under a huge variety of illuminations extracted using the SpyderCube calibration object, and a contest-like markup for the images from the Cube+ dataset that was used in IEC#1.
arXiv Detail & Related papers (2020-12-31T17:59:19Z)
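The illumination estimation task in the last entry is often benchmarked against simple statistical baselines. Below is a sketch of the classical gray-world estimate, not a method from the challenge; it assumes the average scene reflectance is achromatic and reads the illuminant color off the per-channel means of a linear RGB image.

```python
import numpy as np

def gray_world_illuminant(image: np.ndarray) -> np.ndarray:
    """Estimate the scene illuminant of an H x W x 3 linear RGB image.

    Returns a unit-norm RGB vector under the gray-world assumption that the
    average reflectance in the scene is achromatic.
    """
    means = image.reshape(-1, 3).astype(np.float64).mean(axis=0)
    return means / np.linalg.norm(means)

def white_balance(image: np.ndarray) -> np.ndarray:
    """Apply a simple von Kries correction using the gray-world estimate."""
    illuminant = gray_world_illuminant(image)
    balanced = image.astype(np.float64) / (illuminant * np.sqrt(3.0))
    return np.clip(balanced, 0, 255).astype(np.uint8)
```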
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.