TLD: A Vehicle Tail Light Signal Dataset and Benchmark
- URL: http://arxiv.org/abs/2409.02508v1
- Date: Wed, 4 Sep 2024 08:08:21 GMT
- Title: TLD: A Vehicle Tail Light Signal Dataset and Benchmark
- Authors: Jinhao Chai, Shiyi Mu, Shugong Xu
- Abstract summary: This dataset consists of 152k labeled image frames sampled at a rate of 2 Hz, along with 1.5 million unlabeled frames interspersed throughout.
We have developed a two-stage vehicle light detection model consisting of two primary modules: a vehicle detector and a taillight classifier.
Our method shows exceptional performance on our dataset, establishing a benchmark for vehicle taillight detection.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding other drivers' intentions is crucial for safe driving. The role of taillights in conveying these intentions is underemphasized in current autonomous driving systems. Accurately identifying taillight signals is essential for predicting vehicle behavior and preventing collisions. Open-source taillight datasets are scarce, often small and inconsistently annotated. To address this gap, we introduce a new large-scale taillight dataset called TLD. Sourced globally, our dataset covers diverse traffic scenarios. To our knowledge, TLD is the first dataset to separately annotate brake lights and turn signals in real driving scenarios. We collected 17.78 hours of driving videos from the internet. This dataset consists of 152k labeled image frames sampled at a rate of 2 Hz, along with 1.5 million unlabeled frames interspersed throughout. Additionally, we have developed a two-stage vehicle light detection model consisting of two primary modules: a vehicle detector and a taillight classifier. First, YOLOv10 and DeepSORT capture consecutive images of each vehicle over time. Then, two classifiers work simultaneously to determine the states of the brake lights and turn signals. A post-processing procedure is then used to eliminate noise caused by misidentifications and to provide the taillight states of the vehicle within a given time frame. Our method shows exceptional performance on our dataset, establishing a benchmark for vehicle taillight detection. The dataset is available at https://huggingface.co/datasets/ChaiJohn/TLD/tree/main
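To make the two-stage design concrete, below is a minimal Python sketch of the temporal post-processing idea only: per-frame classifier outputs for one tracked vehicle are smoothed with a sliding-window majority vote. The function name, window size, and mock inputs are hypothetical, and the paper's actual post-processing may differ; the detector/tracker outputs are mocked rather than produced by YOLOv10/DeepSORT.

```python
from collections import Counter, deque

def smooth_states(per_frame_states, window=5):
    """Sliding-window majority vote over per-frame taillight states.

    One plausible form of the post-processing step described in the
    abstract: single-frame misclassifications from the brake/turn
    classifiers are suppressed before reporting a vehicle's state.
    Note: for blinking turn signals, a periodicity check over the
    window would be more appropriate than a plain majority vote.
    """
    buf = deque(maxlen=window)  # last `window` raw predictions
    smoothed = []
    for state in per_frame_states:
        buf.append(state)
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

if __name__ == "__main__":
    # Mock brake-light predictions for one track at 2 Hz,
    # containing two spurious single-frame flips.
    raw = ["off", "off", "on", "off", "off", "on", "on", "off", "on", "on"]
    print(smooth_states(raw, window=3))
```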
Related papers
- Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier
This paper aims to improve the prediction of nighttime driving behavior by identifying the taillights of both human-driven and autonomous vehicles.
The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road.
To address the scarcity of nighttime data, a pixel-wise image processing technique is used to convert daytime images into realistic night images (a toy illustration of such a transform follows this entry).
arXiv Detail & Related papers (2023-10-25T15:23:33Z)
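As a rough illustration of what a pixel-wise day-to-night conversion can look like, the sketch below darkens an image with a gamma curve, desaturates it, and adds a cool color cast. This is an assumed stand-in, not the paper's actual technique; the function name and parameters are hypothetical.

```python
import numpy as np

def day_to_night(img, gamma=2.2, desaturation=0.5, blue_gain=1.15):
    """Pixel-wise day-to-night transform (illustrative stand-in only).

    img: float32 RGB array in [0, 1] with shape (H, W, 3).
    """
    dark = np.power(img, gamma)                # crush shadows and midtones
    gray = dark.mean(axis=-1, keepdims=True)   # cheap luminance proxy
    out = desaturation * gray + (1.0 - desaturation) * dark
    out[..., 2] = np.clip(out[..., 2] * blue_gain, 0.0, 1.0)  # cool cast
    return out.astype(np.float32)

night = day_to_night(np.random.rand(4, 4, 3).astype(np.float32))
print(night.shape, float(night.max()))
```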
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features (one simple reading of this idea is sketched after this entry).
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
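One simple way to read "spatially quantized historical features" is a 2D occupancy histogram over accumulated past-traversal LiDAR points. The sketch below is an assumption for illustration, not the paper's actual feature design; the function name and parameters are hypothetical.

```python
import numpy as np

def quantize_history(points, voxel=1.0, extent=50.0):
    """Bin accumulated past-traversal LiDAR points into a 2D spatial grid.

    points: (N, 3) array of x, y, z in the ego frame, in meters.
    Returns a (bins, bins) log-compressed occupancy map.
    """
    bins = int(2 * extent / voxel)
    hist, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=bins,
        range=[[-extent, extent], [-extent, extent]],
    )
    return np.log1p(hist)  # log-compress so dense returns do not dominate

features = quantize_history(np.random.randn(1000, 3) * 20.0)
print(features.shape)
```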
- LiDAR View Synthesis for Robust Vehicle Navigation Without Expert Labels
We propose synthesizing additional LiDAR point clouds from novel viewpoints without physically driving in dangerous positions.
We train a deep learning model, which takes a LiDAR scan as input and predicts the future trajectory as output.
A waypoint controller is then applied to the predicted trajectory to determine throttle and steering labels for the ego-vehicle (a toy controller is sketched after this entry).
arXiv Detail & Related papers (2023-08-02T20:46:43Z)
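For intuition, here is a toy waypoint-following controller: pure-pursuit steering toward a lookahead point on the predicted trajectory, plus a proportional throttle toward a target speed. All names, gains, and the controller choice are hypothetical; the paper's actual controller may differ.

```python
import math

def waypoint_controller(waypoints, wheelbase=2.7, lookahead=5.0,
                        target_speed=8.0, current_speed=6.0, kp=0.3):
    """Derive (steering, throttle) labels from a predicted trajectory.

    waypoints: list of (x, y) in the ego frame (x forward), in meters.
    """
    # First waypoint at least `lookahead` meters away (else the last one).
    tx, ty = next(
        ((x, y) for x, y in waypoints if math.hypot(x, y) >= lookahead),
        waypoints[-1],
    )
    alpha = math.atan2(ty, tx)  # heading error to the target point
    # Pure-pursuit steering angle for a bicycle model.
    steer = math.atan2(2.0 * wheelbase * math.sin(alpha), math.hypot(tx, ty))
    # Proportional throttle toward the target speed, clipped to [0, 1].
    throttle = max(0.0, min(1.0, kp * (target_speed - current_speed)))
    return steer, throttle

print(waypoint_controller([(1.0, 0.1), (4.0, 0.5), (8.0, 1.5)]))
```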
- Robust Detection, Association, and Localization of Vehicle Lights: A Context-Based Cascaded CNN Approach and Evaluations
We present a method for detecting a vehicle light given an upstream vehicle detection and an approximation of the visible light's center.
We achieve an average distance error from the ground-truth corner of 4.77 pixels, about 16.33% of the size of the vehicle light on average (one way to compute such a normalized error is sketched after this entry).
We propose that this model can be integrated into a pipeline to make a fully-formed vehicle light detection network.
arXiv Detail & Related papers (2023-07-27T01:20:47Z)
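Below is a small sketch of how a pixel error and a size-normalized percentage like the figures above could be computed. Normalizing by the light's diagonal is an assumption here, since the summary does not specify the exact definition.

```python
import math

def corner_error(pred, gt, light_w, light_h):
    """Corner localization error in pixels and as a fraction of light size.

    pred, gt: (x, y) predicted and ground-truth corner, in pixels.
    light_w, light_h: width and height of the vehicle-light box, in pixels.
    """
    err_px = math.dist(pred, gt)
    # The light's diagonal is one reasonable measure of its size.
    return err_px, err_px / math.hypot(light_w, light_h)

px, frac = corner_error((103.0, 54.0), (100.0, 50.0), 24.0, 18.0)
print(f"{px:.2f} px ({frac:.2%} of the light's size)")
```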
- DualCam: A Novel Benchmark Dataset for Fine-grained Real-time Traffic Light Detection
We introduce a novel benchmark traffic light dataset captured using a synchronized pair of narrow-angle and wide-angle cameras.
The dataset includes images of resolution 1920×1080 covering 10 different classes.
Results show that our technique can strike a balance between speed and accuracy, compared to the conventional approach of using a single camera frame.
arXiv Detail & Related papers (2022-09-03T08:02:55Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the robustness of state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets drops significantly, to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving
We release a large-scale object detection benchmark for autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, images are collected at one frame every ten seconds across 32 different cities under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- Provident Vehicle Detection at Night: The PVDN Dataset
We present a novel dataset containing 59,746 annotated grayscale images from 346 different scenes in a rural environment at night.
In these images, all oncoming vehicles, their corresponding light objects (e.g., headlamps), and their respective light reflections (e.g., light reflections on guardrails) are labeled.
With that, we are providing the first open-source dataset with comprehensive ground truth data to enable research into new methods of detecting oncoming vehicles.
arXiv Detail & Related papers (2020-12-31T00:06:26Z)