DualCam: A Novel Benchmark Dataset for Fine-grained Real-time Traffic
Light Detection
- URL: http://arxiv.org/abs/2209.01357v1
- Date: Sat, 3 Sep 2022 08:02:55 GMT
- Title: DualCam: A Novel Benchmark Dataset for Fine-grained Real-time Traffic
Light Detection
- Authors: Harindu Jayarathne, Tharindu Samarakoon, Hasara Koralege, Asitha
Divisekara, Ranga Rodrigo and Peshala Jayasekara
- Abstract summary: We introduce a novel benchmark traffic light dataset captured using a synchronized pair of narrow-angle and wide-angle cameras.
The dataset includes images of resolution 1920$\times$1080 covering 10 different classes.
Results show that our technique can strike a balance between speed and accuracy, compared to the conventional approach of using a single camera frame.
- Score: 0.7130302992490973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic light detection is essential for self-driving cars to navigate safely
in urban areas. Publicly available traffic light datasets are inadequate for
the development of algorithms for detecting distant traffic lights that provide
important navigation information. We introduce a novel benchmark traffic light
dataset captured using a synchronized pair of narrow-angle and wide-angle
cameras covering urban and semi-urban roads. We provide 1032 images for
training and 813 synchronized image pairs for testing. Additionally, we provide
synchronized video pairs for qualitative analysis. The dataset includes images
of resolution 1920$\times$1080 covering 10 different classes. Furthermore, we
propose a post-processing algorithm for combining outputs from the two cameras.
Results show that our technique can strike a balance between speed and
accuracy, compared to the conventional approach of using a single camera frame.
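The abstract does not spell out the fusion step, so the following is a minimal sketch of one plausible post-processing scheme, assuming per-camera detections as (label, confidence, box) tuples and a pre-calibrated mapping from narrow-angle to wide-angle coordinates; the names `map_to_wide` and `fuse_detections` and the IoU-based merging rule are illustrative, not taken from the paper.

```python
# Minimal sketch of one plausible two-camera fusion scheme (not the paper's
# exact algorithm): project narrow-angle detections into wide-angle
# coordinates, then merge duplicates by IoU, keeping the higher-confidence box.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(wide_dets, narrow_dets, map_to_wide, iou_thresh=0.5):
    """Merge per-camera detections into a single wide-frame list.

    wide_dets / narrow_dets: lists of (label, confidence, box) tuples.
    map_to_wide: callable projecting a narrow-frame box into wide-frame
    coordinates (e.g. via a pre-calibrated homography) -- assumed available.
    """
    merged = list(wide_dets)
    for label, conf, box in narrow_dets:
        box_w = map_to_wide(box)
        dup = next((d for d in merged
                    if d[0] == label and iou(d[2], box_w) >= iou_thresh), None)
        if dup is None:
            merged.append((label, conf, box_w))  # a distant light only the
                                                 # narrow camera resolved
        elif conf > dup[1]:
            merged[merged.index(dup)] = (label, conf, box_w)
    return merged
```

Under these assumptions the narrow-angle camera contributes distant lights the wide frame misses, while the IoU check prevents double-counting lights both cameras see.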
Related papers
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- Driver Attention Tracking and Analysis [17.536550982093143]
We propose a novel method to estimate a driver's points-of-gaze using a pair of ordinary cameras mounted on the windshield and dashboard of a car.
This is a challenging problem due to the dynamics of traffic environments with 3D scenes of unknown depths.
We develop a novel convolutional network that simultaneously analyzes the image of the scene and the image of the driver's face.
arXiv Detail & Related papers (2024-04-10T16:01:37Z)
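The entry above gives only the high-level design, so here is a minimal PyTorch sketch under the assumption of a two-branch encoder (scene image and face crop) feeding a joint head that regresses a 2D point-of-gaze; all layer sizes are illustrative, not the paper's architecture.

```python
# Minimal two-branch network sketch: one branch encodes the scene image,
# one encodes the driver's face, and a joint head regresses a 2D gaze point.
# Layer sizes are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class TwoBranchGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.scene = branch()
        self.face = branch()
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                  nn.Linear(64, 2))  # (x, y) point-of-gaze

    def forward(self, scene_img, face_img):
        feats = torch.cat([self.scene(scene_img), self.face(face_img)], dim=1)
        return self.head(feats)

# Example: one scene frame and one face crop, both 3x128x128.
net = TwoBranchGazeNet()
gaze = net(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
```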
- The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking [4.799822253865053]
This work presents a novel video dataset recorded from overlapping highway traffic cameras along an urban interstate, enabling multi-camera 3D object tracking in a traffic monitoring context.
Data is released from 3 scenes containing video from at least 16 cameras each, totaling 57 minutes in length.
877,000 3D bounding boxes and corresponding object tracklets are fully and accurately annotated for each camera field of view and are combined into a spatially and temporally continuous set of vehicle trajectories for each scene.
arXiv Detail & Related papers (2023-08-28T18:43:33Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Towards view-invariant vehicle speed detection from driving simulator images [0.31498833540989407]
We address the question of whether complex 3D-CNN architectures are capable of implicitly learning view-invariant speeds using a single model.
The results are promising: a single model trained with data from multiple views achieves even better accuracy than camera-specific models.
arXiv Detail & Related papers (2022-06-01T09:14:45Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Towards Real-time Traffic Sign and Traffic Light Detection on Embedded Systems [0.6143225301480709]
We propose a simple deep learning based end-to-end detection framework to tackle challenges inherent to traffic sign and traffic light detection.
The overall system achieves a high inference speed of 63 frames per second, demonstrating the capability of our system to perform in real-time.
CeyRo is the first ever large-scale traffic sign and traffic light detection dataset for the Sri Lankan context.
arXiv Detail & Related papers (2022-05-05T03:46:19Z)
- Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Data-driven vehicle speed detection from synthetic driving simulator images [0.440401067183266]
We explore the use of synthetic images generated from a driving simulator to address vehicle speed detection.
We generate thousands of images with variability corresponding to multiple speeds, different vehicle types and colors, and lighting and weather conditions.
Two approaches to mapping the sequence of images to an output speed (regression) are studied: CNN-GRU and 3D-CNN.
arXiv Detail & Related papers (2021-04-20T11:26:13Z)
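As a concrete reading of the CNN-GRU variant mentioned above, here is a minimal PyTorch sketch: a per-frame CNN embedding followed by a GRU over time and a scalar regression head. Shapes and layer sizes are assumptions for illustration, not the paper's configuration.

```python
# Minimal CNN-GRU sketch for sequence-to-speed regression: a small CNN embeds
# each frame, a GRU consumes the embeddings over time, and a linear head
# regresses a single speed value. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CnnGruSpeed(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar speed (regression)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))     # (B*T, feat_dim)
        _, last = self.gru(feats.view(b, t, -1))  # final hidden state
        return self.head(last[-1]).squeeze(-1)    # (B,) predicted speeds

speed = CnnGruSpeed()(torch.randn(2, 8, 3, 96, 96))  # two 8-frame clips
```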
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the small number of samples of the yellow state.
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
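To make the blending idea above concrete, here is a minimal Pillow-based sketch that pastes a deliberately non-realistic traffic-light template onto an arbitrary background and records the resulting bounding box; the template dictionary, the scale range, and the oversampling of the yellow state are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of synthetic-context data generation: blend an RGBA
# traffic-light template onto a background image and emit a bounding-box
# label. Oversampling rare states (e.g. yellow) addresses the class
# imbalance the abstract mentions. Templates/backgrounds are placeholders.
import random
from PIL import Image

STATES = ["red", "red", "green", "green", "yellow", "yellow", "yellow"]

def synth_example(background: Image.Image, templates: dict):
    """Blend one traffic-light template onto a background; return image+label."""
    state = random.choice(STATES)          # yellow oversampled on purpose
    light = templates[state]               # RGBA template for this state
    scale = random.uniform(0.05, 0.2)      # vary apparent distance
    w = max(1, int(background.width * scale))
    h = int(w * light.height / light.width)
    light = light.resize((w, h))
    x = random.randint(0, background.width - w)
    y = random.randint(0, background.height - h)
    out = background.copy()
    out.paste(light, (x, y), light)        # alpha channel acts as the mask
    return out, (state, (x, y, x + w, y + h))
```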
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.