LIGHTS: LIGHT Specularity Dataset for specular detection in Multi-view
- URL: http://arxiv.org/abs/2101.10772v1
- Date: Tue, 26 Jan 2021 13:26:49 GMT
- Title: LIGHTS: LIGHT Specularity Dataset for specular detection in Multi-view
- Authors: Mohamed Dahy Elkhouly, Theodore Tsesmelis, Alessio Del Bue, Stuart James
- Abstract summary: We propose a novel physically-based rendered LIGHT Specularity (LIGHTS) dataset for the evaluation of the specular highlight detection task.
Our dataset consists of 18 high quality architectural scenes, where each scene is rendered with multiple views.
In total we have 2,603 views with an average of 145 views per scene.
- Score: 12.612981566441908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Specular highlights are commonplace in images, however, methods for detecting
them and in turn removing the phenomenon are particularly challenging. One
reason for this is the difficulty of creating a dataset for training or
evaluation, as in the real-world we lack the necessary control over the
environment. Therefore, we propose a novel physically-based rendered LIGHT
Specularity (LIGHTS) Dataset for the evaluation of the specular highlight
detection task. Our dataset consists of 18 high quality architectural scenes,
where each scene is rendered with multiple views. In total we have 2,603 views
with an average of 145 views per scene. Additionally, we propose a simple
aggregation-based method for specular highlight detection that outperforms
prior work by 3.6% while taking two orders of magnitude less time on our dataset.
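The abstract does not describe the aggregation-based method itself, so the following is only an illustrative sketch of the general idea of multi-view aggregation for specular detection: flag bright, low-saturation pixels per view, then vote across views. The thresholds and the assumption that views are pixel-aligned are hypothetical simplifications, not the paper's actual pipeline.

```python
import numpy as np

def detect_specular(view, intensity_thresh=0.92, saturation_thresh=0.2):
    """Per-view candidate mask for an HxWx3 float image in [0, 1].

    Specular highlights tend to be both very bright and nearly white
    (low saturation), so we threshold on both cues. The threshold
    values here are illustrative, not from the paper.
    """
    max_ch = view.max(axis=-1)
    min_ch = view.min(axis=-1)
    saturation = 1.0 - min_ch / (max_ch + 1e-8)
    return (max_ch > intensity_thresh) & (saturation < saturation_thresh)

def aggregate_masks(views, min_votes=2):
    """Aggregate per-view masks by voting.

    Assumes all views are already pixel-aligned -- a strong simplifying
    assumption; a real multi-view method would reproject detections
    using camera geometry before aggregating.
    """
    votes = np.sum([detect_specular(v) for v in views], axis=0)
    return votes >= min_votes
```

Voting across views is one plausible way aggregation can suppress per-view false positives: a truly specular point saturates the sensor from several viewpoints, while a texture that merely looks bright in one view does not.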
Related papers
- HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision [16.432164340779266]
We introduce the HUE dataset, a collection of high-resolution event and frame sequences captured in low-light conditions.
Our dataset includes 106 sequences, encompassing indoor, cityscape, twilight, night, driving, and controlled scenarios.
We employ both qualitative and quantitative evaluations to assess state-of-the-art low-light enhancement and event-based image reconstruction methods.
arXiv Detail & Related papers (2024-10-24T21:15:15Z) - BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z) - MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in
Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various degrading factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z) - Booster: a Benchmark for Depth from Images of Specular and Transparent
Surfaces [49.44971010149331]
We propose a novel dataset that includes accurate and dense ground-truth labels at high resolution.
Our acquisition pipeline leverages a novel deep space-time stereo framework.
The dataset is composed of 606 samples collected in 85 different scenes.
arXiv Detail & Related papers (2023-01-19T18:59:28Z) - DarkVision: A Benchmark for Low-light Image/Video Perception [44.94878263751042]
We contribute the first multi-illuminance, multi-camera, and low-light dataset, named DarkVision, for both image enhancement and object detection.
The dataset consists of bright-dark pairs of 900 static scenes with objects from 15 categories, and 32 dynamic scenes with 4-category objects.
For each scene, images/videos were captured at 5 illuminance levels using three cameras of different grades, and average photons can be reliably estimated.
arXiv Detail & Related papers (2023-01-16T05:55:59Z) - A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and
Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z) - StandardSim: A Synthetic Dataset For Retail Environments [0.07874708385247352]
We present a large-scale synthetic dataset featuring annotations for semantic segmentation, instance segmentation, depth estimation, and object detection.
Our dataset provides multiple views per scene, enabling multi-view representation learning.
We benchmark widely-used models for segmentation and depth estimation on our dataset and show that our test set constitutes a more difficult benchmark than current smaller-scale datasets.
arXiv Detail & Related papers (2022-02-04T22:28:35Z) - NOD: Taking a Closer Look at Detection under Extreme Low-Light
Conditions with Night Object Detection Dataset [25.29013780731876]
Low light proves more difficult for machine cognition than previously thought.
We present a large-scale dataset showing dynamic scenes captured on the streets at night.
We propose to incorporate an image enhancement module into the object detection framework and two novel data augmentation techniques.
arXiv Detail & Related papers (2021-10-20T03:44:04Z) - Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z) - GraspNet: A Large-Scale Clustered and Densely Annotated Dataset for
Object Grasping [49.777649953381676]
We contribute a large-scale grasp pose detection dataset with a unified evaluation system.
Our dataset contains 87,040 RGBD images with over 370 million grasp poses.
arXiv Detail & Related papers (2019-12-31T18:15:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.