GLARE: A Dataset for Traffic Sign Detection in Sun Glare
- URL: http://arxiv.org/abs/2209.08716v2
- Date: Wed, 13 Dec 2023 06:25:23 GMT
- Title: GLARE: A Dataset for Traffic Sign Detection in Sun Glare
- Authors: Nicholas Gray, Megan Moraes, Jiang Bian, Alex Wang, Allen Tian, Kurt
Wilson, Yan Huang, Haoyi Xiong, Zhishan Guo
- Abstract summary: GLARE is a collection of images with U.S.-based traffic signs under heavy visual interference by sunlight.
It provides an essential enrichment to the widely used LISA Traffic Sign dataset.
Current architectures achieve better detection performance when trained on images of traffic signs in sun glare.
- Score: 28.692414823901313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time machine learning object detection algorithms are often found within
autonomous vehicle technology and depend on quality datasets. It is essential
that these algorithms work correctly in everyday conditions as well as under
strong sun glare. Reports indicate glare is one of the two most prominent
environment-related reasons for crashes. However, existing datasets, such as
the Laboratory for Intelligent & Safe Automobiles Traffic Sign (LISA) Dataset
and the German Traffic Sign Recognition Benchmark, do not reflect the existence
of sun glare at all. This paper presents the GLARE (GLARE is available at:
https://github.com/NicholasCG/GLARE_Dataset ) traffic sign dataset: a
collection of images with U.S.-based traffic signs under heavy visual
interference by sunlight. GLARE contains 2,157 images of traffic signs with sun
glare, pulled from 33 videos of dashcam footage of roads in the United States.
It provides an essential enrichment to the widely used LISA Traffic Sign
dataset. Our experimental study shows that although several state-of-the-art
baseline architectures have demonstrated good performance on traffic sign
detection in conditions without sun glare in the past, they performed poorly
when tested against GLARE (e.g., average mAP0.5:0.95 of 19.4). We also notice
that current architectures achieve better detection performance when trained
on images of traffic signs in sun glare (e.g., average mAP0.5:0.95 of 39.6), and
perform best when trained on a mixture of conditions (e.g., average mAP0.5:0.95
of 42.3).
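The mAP0.5:0.95 figures above follow the COCO convention of averaging precision over IoU thresholds from 0.50 to 0.95 in steps of 0.05. As a minimal sketch of how such a score can be computed for a detector's output, here is an example using the torchmetrics library; the boxes, scores, and labels are placeholder values, not GLARE annotations:

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# COCO-style metric: precision averaged over IoU thresholds 0.50:0.05:0.95.
metric = MeanAveragePrecision(iou_thresholds=[0.5 + 0.05 * i for i in range(10)])

# Placeholder prediction/target for a single image (xyxy pixel coordinates).
preds = [{
    "boxes": torch.tensor([[100.0, 40.0, 160.0, 100.0]]),  # one detected sign
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([0]),  # e.g., class 0 = "stop"
}]
targets = [{
    "boxes": torch.tensor([[102.0, 45.0, 158.0, 98.0]]),   # ground-truth box
    "labels": torch.tensor([0]),
}]

metric.update(preds, targets)
print(metric.compute()["map"])  # scalar tensor: mAP0.5:0.95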
Related papers
- TLD: A Vehicle Tail Light signal Dataset and Benchmark [11.892883491115656]
This dataset consists of 152k labeled image frames sampled at a rate of 2 Hz, along with 1.5 million unlabeled frames interspersed throughout.
We have developed a two-stage vehicle light detection model consisting of two primary modules: a vehicle detector and a taillight classifier.
Our method shows exceptional performance on our dataset, establishing a benchmark for vehicle taillight detection.
arXiv Detail & Related papers (2024-09-04T08:08:21Z)
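The two-stage design summarized above (a vehicle detector feeding a taillight classifier) amounts to a crop-then-classify pipeline. The sketch below is an illustrative outline rather than the authors' implementation; vehicle_detector and taillight_classifier are hypothetical stand-ins for the trained models:

```python
from typing import List, Tuple

import numpy as np

def detect_taillight_states(
    frame: np.ndarray,
    vehicle_detector,      # hypothetical stage 1: returns vehicle boxes (xyxy)
    taillight_classifier,  # hypothetical stage 2: classifies a vehicle crop
) -> List[Tuple[Tuple[int, int, int, int], str]]:
    """Run a two-stage pipeline on one frame: detect vehicles, then
    classify the taillight signal of each detected vehicle crop."""
    results = []
    for (x1, y1, x2, y2) in vehicle_detector(frame):
        crop = frame[y1:y2, x1:x2]          # stage 1 output -> stage 2 input
        state = taillight_classifier(crop)  # e.g., "brake", "left_turn", "off"
        results.append(((x1, y1, x2, y2), state))
    return results
```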
- Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded in the presence of moving vehicles, shadow, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally-adjacent video frames for complementary cues.
We show that exploiting available camera intrinsic data and a ground-plane assumption for cross-frame correspondence can lead to a lightweight network with significantly improved speed and accuracy.
arXiv Detail & Related papers (2024-04-11T10:26:40Z)
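The cross-frame correspondence used above rests on a standard geometric relation: for points on the road plane, two camera views are related by the homography H = K (R - t n^T / d) K^-1. A small numerical sketch of that relation, with placeholder intrinsics, pose, and plane parameters:

```python
import numpy as np

def ground_plane_homography(K, R, t, n, d):
    """Homography mapping ground-plane pixels from a previous frame into the
    current one: H = K (R - t n^T / d) K^-1, with plane normal n and camera
    height d above the road (standard planar-scene relation)."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

K = np.array([[800.0, 0.0, 640.0],   # placeholder pinhole intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # placeholder relative rotation
t = np.array([0.0, 0.0, 0.5])        # placeholder translation (metres)
n = np.array([0.0, -1.0, 0.0])       # ground-plane normal in camera frame
d = 1.5                               # placeholder camera height (metres)

H = ground_plane_homography(K, R, t, n, d)
# A previous frame can now be warped into the current view, e.g. with
# cv2.warpPerspective(prev_frame, H, (width, height)), to fuse road cues.
```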
- Real-Time Traffic Sign Detection: A Case Study in a Santa Clara Suburban Neighborhood [2.4087090457198435]
The project's primary objectives are to train the YOLOv5 model on a diverse dataset of traffic sign images and deploy the model on a suitable hardware platform.
The performance of the deployed system will be evaluated based on its accuracy in detecting traffic signs, real-time processing speed, and overall reliability.
arXiv Detail & Related papers (2023-10-14T17:52:28Z)
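For context on the case study above: a pretrained YOLOv5 model can be loaded for inference via torch.hub in a few lines, while fine-tuning on a custom traffic-sign dataset goes through the ultralytics/yolov5 repository's train.py and a dataset YAML. A minimal inference sketch (the image path is a placeholder):

```python
import torch

# Load a pretrained small YOLOv5 model from the ultralytics/yolov5 hub repo.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Placeholder path; any dashcam frame or traffic-sign photo works here.
results = model("dashcam_frame.jpg")
results.print()          # per-class detection summary
boxes = results.xyxy[0]  # tensor rows: [x1, y1, x2, y2, confidence, class]
```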
- WEDGE: A multi-weather autonomous driving dataset built from generative vision-language models [51.61662672912017]
We introduce WEDGE: a synthetic dataset generated with a vision-language generative model via prompting.
WEDGE consists of 3,360 images in 16 extreme weather conditions manually annotated with 16,513 bounding boxes.
We establish baseline performance for classification and detection with 53.87% test accuracy and 45.41 mAP.
arXiv Detail & Related papers (2023-05-12T14:42:47Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the robustness of state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Towards Real-time Traffic Sign and Traffic Light Detection on Embedded Systems [0.6143225301480709]
We propose a simple deep learning based end-to-end detection framework to tackle challenges inherent to traffic sign and traffic light detection.
The overall system achieves a high inference speed of 63 frames per second, demonstrating the capability of our system to perform in real-time.
CeyRo is the first ever large-scale traffic sign and traffic light detection dataset for the Sri Lankan context.
arXiv Detail & Related papers (2022-05-05T03:46:19Z)
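Real-time figures like the 63 frames per second above are typically reported as end-to-end inference throughput. A generic measurement sketch, assuming a PyTorch model and preprocessed input tensors (not the authors' code):

```python
import time
import torch

def measure_fps(model, frames, device="cuda"):
    """Average end-to-end inference throughput over a list of input tensors.
    For honest GPU numbers, queued work must be synchronized before timing
    stops; per-frame sync would give finer-grained latency figures."""
    model = model.to(device).eval()
    with torch.no_grad():
        start = time.perf_counter()
        for frame in frames:
            model(frame.to(device))
        if device == "cuda":
            torch.cuda.synchronize()  # wait for queued GPU work to finish
        elapsed = time.perf_counter() - start
    return len(frames) / elapsed  # frames per second
```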
- METEOR: A Massive Dense & Heterogeneous Behavior Dataset for Autonomous Driving [42.69638782267657]
We present a new and complex traffic dataset, METEOR, which captures traffic patterns in unstructured scenarios in India.
METEOR consists of more than 1000 one-minute video clips, over 2 million annotated frames with ego-vehicle trajectories, and more than 13 million bounding boxes for surrounding vehicles or traffic agents.
We use our novel dataset to evaluate the performance of object detection and behavior prediction algorithms.
arXiv Detail & Related papers (2021-09-16T01:01:55Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
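LISA simulates the scattering physics itself; as a deliberately crude illustration of how adverse weather degrades a point cloud (random lost returns plus range-dependent intensity attenuation; this is not the paper's physics model):

```python
import numpy as np

def toy_weather_corruption(points, drop_prob=0.1, atten_per_m=0.005, rng=None):
    """Toy stand-in for adverse-weather effects on lidar, NOT LISA's physics
    model: random point dropout plus range-dependent intensity attenuation.

    points: (N, 4) array of x, y, z, intensity."""
    if rng is None:
        rng = np.random.default_rng()
    keep = rng.random(len(points)) > drop_prob   # randomly lost returns
    pts = points[keep].copy()
    ranges = np.linalg.norm(pts[:, :3], axis=1)
    pts[:, 3] *= np.exp(-atten_per_m * ranges)   # weaker echoes at range
    return pts
```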
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low number of samples of the yellow state.
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
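The blending step just described can be sketched as alpha compositing of a rendered traffic-light sprite onto an arbitrary background image, which also yields the bounding-box label for free; all inputs below are placeholders rather than the authors' renderer:

```python
import numpy as np

def paste_sprite(background, sprite_rgba, x, y):
    """Alpha-blend an RGBA sprite onto an RGB background at (x, y), assuming
    the sprite fits inside the background. Returns the composited image and
    the resulting xyxy bounding-box label for the pasted object."""
    h, w = sprite_rgba.shape[:2]
    alpha = sprite_rgba[:, :, 3:4] / 255.0
    region = background[y:y + h, x:x + w].astype(np.float64)
    blended = alpha * sprite_rgba[:, :, :3] + (1.0 - alpha) * region
    out = background.copy()
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out, (x, y, x + w, y + h)
```

Oversampling yellow-state sprites at this step is one direct way to counter the class imbalance the summary mentions.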
- 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving [48.588254700810474]
We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving.
Among others, it enables research on visual odometry, global place recognition, and map-based re-localization tracking.
arXiv Detail & Related papers (2020-09-14T12:31:20Z)
- DAWN: Vehicle Detection in Adverse Weather Nature Dataset [4.09920839425892]
We present a new dataset consisting of real-world images collected under various adverse weather conditions called DAWN.
The dataset comprises a collection of 1000 images from real-traffic environments, which are divided into four sets of weather conditions: fog, snow, rain and sandstorms.
This data helps in interpreting the effects of adverse weather conditions on the performance of vehicle detection systems.
arXiv Detail & Related papers (2020-08-12T15:48:49Z)