LoLI-Street: Benchmarking Low-Light Image Enhancement and Beyond
- URL: http://arxiv.org/abs/2410.09831v1
- Date: Sun, 13 Oct 2024 13:11:56 GMT
- Title: LoLI-Street: Benchmarking Low-Light Image Enhancement and Beyond
- Authors: Md Tanvir Islam, Inzamamul Alam, Simon S. Woo, Saeed Anwar, IK Hyun Lee, Khan Muhammad
- Abstract summary: We introduce a new dataset, LoLI-Street (Low-Light Images of Streets), with 33k paired low-light and well-exposed images from street scenes in developed cities.
The LoLI-Street dataset also features 1,000 real low-light test images for testing LLIE models under real-life conditions.
- Score: 37.47964043913622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement (LLIE) is essential for numerous computer vision tasks, including object detection, tracking, segmentation, and scene understanding. Despite substantial research on improving low-quality images captured under underexposed conditions, clear vision remains critical for autonomous vehicles, which often struggle in low-light scenarios, underscoring the need for continued research. However, paired datasets for LLIE are scarce, particularly for street scenes, limiting the development of robust LLIE methods. Even with advanced transformer- and/or diffusion-based models, current LLIE methods struggle in real-world low-light conditions and lack training on street-scene datasets, limiting their effectiveness for autonomous vehicles. To bridge these gaps, we introduce a new dataset, LoLI-Street (Low-Light Images of Streets), with 33k paired low-light and well-exposed images from street scenes in developed cities, covering 19k object classes for object detection. The LoLI-Street dataset also features 1,000 real low-light test images for evaluating LLIE models under real-life conditions. Furthermore, we propose a transformer- and diffusion-based LLIE model named "TriFuse". Leveraging the LoLI-Street dataset, we train and evaluate TriFuse and SOTA models to establish a benchmark on our dataset. Comparing various models, our dataset's generalization is evident when testing across different mainstream datasets: models trained on it significantly enhance images and improve object detection for practical applications in autonomous driving and surveillance systems. The complete code and dataset are available at https://github.com/tanvirnwu/TriFuse.
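For context on how a paired benchmark like this is typically consumed, here is a minimal evaluation sketch that matches low-light and well-exposed images by filename and reports average PSNR. The `low/` and `high/` directory layout, the PNG extension, and the `enhance` placeholder are illustrative assumptions, not the dataset's documented structure or the paper's official evaluation code.

```python
# Hypothetical layout: low/ and high/ hold same-named PNG pairs.
from pathlib import Path

import numpy as np
from PIL import Image


def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between an enhanced image and its reference."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)


def evaluate_pairs(low_dir: str, high_dir: str, enhance=lambda x: x) -> float:
    """Average PSNR of enhance(low) against its well-exposed reference."""
    scores = []
    for low_path in sorted(Path(low_dir).glob("*.png")):
        low = np.asarray(Image.open(low_path).convert("RGB"))
        high = np.asarray(Image.open(Path(high_dir) / low_path.name).convert("RGB"))
        scores.append(psnr(enhance(low), high))  # identity enhance() = no-op baseline
    return float(np.mean(scores))


if __name__ == "__main__":
    print(f"avg PSNR: {evaluate_pairs('low', 'high'):.2f} dB")
```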
Related papers
- HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision [16.432164340779266]
We introduce the HUE dataset, a collection of high-resolution event and frame sequences captured in low-light conditions.
Our dataset includes 106 sequences, encompassing indoor, cityscape, twilight, night, driving, and controlled scenarios.
We employ both qualitative and quantitative evaluations to assess state-of-the-art low-light enhancement and event-based image reconstruction methods.
arXiv Detail & Related papers (2024-10-24T21:15:15Z)
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground-truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels (a generic alignment sketch follows this entry).
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE), and a comprehensive evaluation shows that models trained with our dataset outperform those trained with existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
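The BVI-RLV entry above mentions image-based, pixel-wise frame alignment across light levels. Below is a minimal sketch of one common approach, ECC-based affine alignment in OpenCV; this is a generic technique assumed for illustration, not the paper's actual registration pipeline.

```python
# Generic ECC alignment sketch (not the paper's method): warp a low-light
# frame onto its normal-light reference using an affine motion model.
import cv2
import numpy as np


def align_to_reference(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Estimate an affine warp with ECC and apply it to `moving`."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    mov_gray = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)  # identity initialization
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    # ECC maximizes the enhanced correlation coefficient, which tolerates
    # the global brightness gap between exposure levels reasonably well.
    _, warp = cv2.findTransformECC(
        ref_gray, mov_gray, warp, cv2.MOTION_AFFINE, criteria, None, 5
    )
    h, w = reference.shape[:2]
    return cv2.warpAffine(
        moving, warp, (w, h), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP
    )
```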
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- LARD - Landing Approach Runway Detection -- Dataset for Vision Based Landing [2.7400353551392853]
We present a dataset of high-quality aerial images for the task of runway detection during approach and landing phases.
Most of the dataset is composed of synthetic images, but we also provide manually labelled images from real landing footage.
This dataset paves the way for further research such as the analysis of dataset quality or the development of models to cope with the detection tasks.
arXiv Detail & Related papers (2023-04-05T08:25:55Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- SIMBAR: Single Image-Based Scene Relighting For Effective Data Augmentation For Automated Driving Vision Tasks [2.974889834426778]
This paper presents a novel image-based relighting pipeline, SIMBAR, that can work with a single image as input.
To the best of our knowledge, there is no prior work on scene relighting leveraging explicit geometric representations from a single image.
To further validate and quantify the benefit of leveraging SIMBAR for data augmentation in automated driving vision tasks, we conduct object detection and tracking experiments.
arXiv Detail & Related papers (2022-04-01T18:11:43Z)
- Lighting the Darkness in the Deep Learning Era [118.35081853500411]
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
Recent advances in this area are dominated by deep learning-based solutions.
We provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues.
arXiv Detail & Related papers (2021-04-21T19:12:19Z)
- Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network (a loss sketch follows this entry).
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
arXiv Detail & Related papers (2021-04-09T02:58:59Z)
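As a rough illustration of the loss families named in the stereo-matching entry above, the PyTorch sketch below implements a generic photometric reconstruction loss and an edge-aware disparity smoothness loss. The formulations and the weight are common self-supervised-stereo defaults assumed for illustration, not the paper's exact losses.

```python
# Illustrative self-supervised stereo losses, not the paper's formulation.
import torch


def photometric_loss(warped: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 gap between the view reconstructed via predicted disparity and the target."""
    return (warped - target).abs().mean()


def smoothness_loss(disp: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    """Penalize disparity gradients, downweighted at image edges (NCHW tensors)."""
    disp_dx = (disp[:, :, :, 1:] - disp[:, :, :, :-1]).abs()
    disp_dy = (disp[:, :, 1:, :] - disp[:, :, :-1, :]).abs()
    img_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
    img_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
    # Strong image gradients (edges) suppress the smoothness penalty there.
    return (disp_dx * torch.exp(-img_dx)).mean() + (disp_dy * torch.exp(-img_dy)).mean()


def total_loss(warped, target, disp, image, w_smooth: float = 0.1) -> torch.Tensor:
    return photometric_loss(warped, target) + w_smooth * smoothness_loss(disp, image)
```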
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.