HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision
- URL: http://arxiv.org/abs/2410.19164v1
- Date: Thu, 24 Oct 2024 21:15:15 GMT
- Title: HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision
- Authors: Burak Ercan, Onur Eker, Aykut Erdem, Erkut Erdem
- Abstract summary: We introduce the HUE dataset, a collection of high-resolution event and frame sequences captured in low-light conditions.
Our dataset includes 106 sequences, encompassing indoor, cityscape, twilight, night, driving, and controlled scenarios.
We employ both qualitative and quantitative evaluations to assess state-of-the-art low-light enhancement and event-based image reconstruction methods.
- Score: 16.432164340779266
- Abstract: Low-light environments pose significant challenges for image enhancement methods. To address these challenges, in this work we introduce the HUE dataset, a comprehensive collection of high-resolution event and frame sequences captured in diverse and challenging low-light conditions. Our dataset includes 106 sequences, encompassing indoor, cityscape, twilight, night, driving, and controlled scenarios, each carefully recorded to cover a range of illumination levels and dynamic ranges. Utilizing a hybrid RGB and event camera setup, we collect a dataset that combines high-resolution event data with complementary frame data. We employ both qualitative and quantitative evaluations using no-reference metrics to assess state-of-the-art low-light enhancement and event-based image reconstruction methods. Additionally, we evaluate these methods on a downstream object detection task. Our findings reveal that while event-based methods perform well on specific metrics, they may produce false positives in practical applications. This dataset and our comprehensive analysis provide valuable insights for future research in low-light vision and hybrid camera systems.
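The abstract leans on no-reference metrics, which score an image without a clean ground-truth counterpart. As a rough, hypothetical illustration of the idea (not one of the paper's actual metrics), the sketch below scores a frame by the Shannon entropy of its intensity histogram:

```python
import numpy as np

def entropy_score(gray: np.ndarray, bins: int = 256) -> float:
    """Toy no-reference quality proxy: Shannon entropy of the intensity
    histogram. No reference image is required, which is the point of
    no-reference evaluation on real low-light footage."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# Score a synthetic underexposed frame (a stand-in for real dataset frames).
frame = (np.random.rand(480, 640) * 40).astype(np.uint8)
print(f"entropy: {entropy_score(frame):.2f} bits")
```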
Related papers
- Super-resolving Real-world Image Illumination Enhancement: A New Dataset and A Conditional Diffusion Model [43.93772529301279]
We propose the SRRIIE dataset together with an efficient method based on conditional diffusion probabilistic models.
We capture images using an ILDC camera and an optical zoom lens with exposure levels ranging from -6 EV to 0 EV and ISO levels ranging from 50 to 12800.
We show that most existing methods are less effective at preserving structure and sharpness when restoring images corrupted by complex noise.
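A quick aside on the exposure arithmetic in this entry: EV steps are powers of two, so the -6 EV to 0 EV range spans a 64x difference in captured light (and ISO 50 to 12800 spans 8 stops, since log2(12800/50) = 8). A minimal sketch:

```python
# Each -1 EV halves the captured light, so an EV offset maps to a
# power-of-two exposure ratio relative to the 0 EV baseline.
def relative_exposure(ev_offset: int) -> float:
    return 2.0 ** ev_offset

for ev in range(0, -7, -1):
    print(f"{ev:+d} EV -> {relative_exposure(ev):.6f}x baseline light")
```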
arXiv Detail & Related papers (2024-10-16T18:47:04Z)
- QueensCAMP: an RGB-D dataset for robust Visual SLAM [0.0]
We introduce a novel RGB-D dataset designed for evaluating the robustness of VSLAM systems.
The dataset comprises real-world indoor scenes with dynamic objects, motion blur, and varying illumination.
We offer open-source scripts for injecting camera failures into any images, enabling further customization.
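The QueensCAMP scripts themselves are the reference implementation; purely as an assumed illustration of what injecting one such failure can look like, here is a sketch that simulates a sudden illumination drop with sensor noise:

```python
import numpy as np

def inject_underexposure(img, gain=0.15, noise_sigma=3.0, rng=None):
    """Simulate a lighting failure on a uint8 image: scale intensities
    down, add Gaussian read noise, and clip back to the valid range."""
    rng = rng or np.random.default_rng()
    out = img.astype(np.float32) * gain
    out += rng.normal(0.0, noise_sigma, size=img.shape)
    return np.clip(out, 0, 255).astype(np.uint8)
```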
arXiv Detail & Related papers (2024-10-16T12:58:08Z)
- Event-assisted Low-Light Video Object Segmentation [47.28027938310957]
Event cameras offer promise in enhancing object visibility and aiding VOS methods under low-light conditions.
This paper introduces a pioneering framework tailored for low-light VOS, leveraging event camera data to elevate segmentation accuracy.
arXiv Detail & Related papers (2024-04-02T13:41:22Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke-generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
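This benchmark uses real smoke rather than synthetic haze. For contrast, the standard synthetic alternative (assumed here for illustration, not part of the benchmark) is the atmospheric scattering model I = J·t + A·(1 − t) with transmission t = exp(−β·d):

```python
import numpy as np

def add_synthetic_haze(img, depth, beta=1.0, airlight=0.9):
    """Atmospheric scattering model: I = J*t + A*(1 - t), t = exp(-beta*d).
    img:   clean image as float in [0, 1], shape (H, W, 3)
    depth: per-pixel depth map, shape (H, W)"""
    t = np.exp(-beta * depth)[..., None]  # transmission, broadcast over RGB
    return img * t + airlight * (1.0 - t)
```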
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- NOD: Taking a Closer Look at Detection under Extreme Low-Light Conditions with Night Object Detection Dataset [25.29013780731876]
Low light proves more difficult for machine cognition than previously thought.
We present a large-scale dataset showing dynamic scenes captured on the streets at night.
We propose incorporating an image enhancement module into the object detection framework, together with two novel data augmentation techniques.
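The paper's own augmentations are not reproduced here; as an assumed illustration of the general idea, this sketch darkens a normalized image with a gamma curve and adds photon shot noise:

```python
import numpy as np

def low_light_augment(img, gamma=3.0, peak=30.0, rng=None):
    """Hypothetical low-light augmentation: gamma-darken an image in
    [0, 1], then resample it through Poisson shot noise at a low photon
    count (`peak`) to mimic a noisy night capture."""
    rng = rng or np.random.default_rng()
    dark = np.power(img, gamma)              # crush shadow detail
    noisy = rng.poisson(dark * peak) / peak  # shot noise dominates when dark
    return np.clip(noisy, 0.0, 1.0)
```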
arXiv Detail & Related papers (2021-10-20T03:44:04Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Lighting the Darkness in the Deep Learning Era [118.35081853500411]
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
Recent advances in this area are dominated by deep learning-based solutions.
We provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues.
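For orientation alongside the deep methods this survey taxonomizes, the oldest non-learning LLIE baseline is a simple power-law curve (a sketch for context, not a method from the survey itself):

```python
import numpy as np

def gamma_enhance(img, gamma=0.4):
    """Power-law (gamma) enhancement: with gamma < 1, dark tones are
    lifted while highlights stay bounded. `img` is float in [0, 1]."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)
```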
arXiv Detail & Related papers (2021-04-21T19:12:19Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes as a stream of asynchronous events rather than intensity frames.
Learning-based approaches have recently been applied to event data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
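Below is a minimal sketch of the event representation this line of work consumes (an assumed layout, not any specific camera's SDK), with the naive accumulation step that learned reconstruction methods improve upon:

```python
import numpy as np

# Each event records where and when brightness changed, and in which
# direction: (x, y, timestamp, polarity).
events = np.array(
    [(10, 20, 0.001, +1),   # brightness rose at pixel (10, 20)
     (10, 21, 0.002, -1)],  # brightness fell at pixel (10, 21)
    dtype=[("x", "u2"), ("y", "u2"), ("t", "f8"), ("p", "i1")],
)

def accumulate(events, height, width):
    """Naive event-to-frame conversion: sum signed polarities per pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame

print(accumulate(events, 480, 640)[20, 10])  # -> 1
```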
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.