aUToLights: A Robust Multi-Camera Traffic Light Detection and Tracking System
- URL: http://arxiv.org/abs/2305.08673v2
- Date: Mon, 4 Sep 2023 18:32:25 GMT
- Title: aUToLights: A Robust Multi-Camera Traffic Light Detection and Tracking System
- Authors: Sean Wu, Nicole Amenta, Jiachen Zhou, Sandro Papais, and Jonathan Kelly
- Abstract summary: We describe our recently-redesigned traffic light perception system for autonomous vehicles like the University of Toronto's self-driving car, Artemis.
We deploy the YOLOv5 detector for bounding box regression and traffic light classification across multiple cameras and fuse the observations.
Our results show superior performance in challenging real-world scenarios compared to single-frame, single-camera object detection.
- Score: 6.191246748708665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Following four successful years in the SAE AutoDrive Challenge Series I, the
University of Toronto is participating in the Series II competition to develop
a Level 4 autonomous passenger vehicle capable of handling various urban
driving scenarios by 2025. Accurate detection of traffic lights and correct
identification of their states is essential for safe autonomous operation in
cities. Herein, we describe our recently-redesigned traffic light perception
system for autonomous vehicles like the University of Toronto's self-driving
car, Artemis. Similar to most traffic light perception systems, we rely
primarily on camera-based object detectors. We deploy the YOLOv5 detector for
bounding box regression and traffic light classification across multiple
cameras and fuse the observations. To improve robustness, we incorporate priors
from high-definition semantic maps and perform state filtering using hidden
Markov models (a minimal sketch of this filtering step follows the abstract). We demonstrate a multi-camera, real-time-capable traffic light
perception pipeline that handles complex situations including multiple visible
intersections, traffic light variations, temporary occlusion, and flashing
light states. To validate our system, we collected and annotated a varied
dataset incorporating flashing states and a range of occlusion types. Our
results show superior performance in challenging real-world scenarios compared
to single-frame, single-camera object detection.
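The abstract pairs per-camera YOLOv5 detections with hidden Markov model state filtering. Below is a minimal sketch of one HMM forward-filtering step over traffic light states; the state set, transition values, detector likelihoods, and the `forward_filter` helper are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Hypothetical traffic light states; the paper's actual state set,
# transition matrix, and emission model are not specified here.
STATES = ["red", "yellow", "green", "flashing", "occluded"]

# Assumed transition matrix A[i, j] = P(state_j at t | state_i at t-1).
# Values are illustrative: states are sticky, with small switch probabilities.
A = np.array([
    [0.90, 0.02, 0.04, 0.02, 0.02],  # red
    [0.30, 0.60, 0.04, 0.03, 0.03],  # yellow
    [0.04, 0.06, 0.84, 0.03, 0.03],  # green
    [0.05, 0.05, 0.05, 0.80, 0.05],  # flashing
    [0.10, 0.05, 0.10, 0.05, 0.70],  # occluded
])

def forward_filter(belief, likelihood):
    """One HMM forward-filtering step.

    belief:     P(state at t-1 | observations up to t-1), shape (5,)
    likelihood: P(detector output at t | state), e.g. derived from
                per-class YOLO confidence scores, shape (5,)
    Returns the updated posterior P(state at t | observations up to t).
    """
    predicted = A.T @ belief            # predict: propagate through dynamics
    posterior = likelihood * predicted  # update: weight by observation model
    return posterior / posterior.sum()  # normalize

# Usage: start with a uniform belief, then fold in detections per frame.
belief = np.full(len(STATES), 1.0 / len(STATES))
belief = forward_filter(belief, np.array([0.7, 0.1, 0.1, 0.05, 0.05]))
print(STATES[int(np.argmax(belief))])  # most likely state: "red"
```

Filtering of this kind is one way to ride out the temporary occlusions the abstract mentions: when the detector is unsure, a sticky transition model keeps the previously confirmed state dominant.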
Related papers
- City-Scale Multi-Camera Vehicle Tracking System with Improved Self-Supervised Camera Link Model [0.0]
This article introduces an innovative multi-camera vehicle tracking system that utilizes a self-supervised camera link model.
The proposed method achieves a new state of the art among automatic camera-link-based methods on the CityFlow V2 benchmark, with a 61.07% IDF1 score.
arXiv Detail & Related papers (2024-05-18T17:28:35Z)
- A Real-Time Wrong-Way Vehicle Detection Based on YOLO and Centroid Tracking [0.0]
Wrong-way driving is one of the main causes of road accidents and traffic jams all over the world.
In this paper, we propose an automatic wrong-way vehicle detection system from on-road surveillance camera footage (a minimal tracking sketch follows this entry).
arXiv Detail & Related papers (2022-10-19T00:53:28Z)
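The wrong-way detection entry above pairs a YOLO detector with centroid tracking. As a rough illustration of the tracking half, here is a minimal nearest-centroid tracker with a direction check; the class name, distance threshold, and the assumption that wrong-way traffic moves in the +y image direction are hypothetical, not taken from the paper.

```python
import numpy as np

class CentroidTracker:
    """Minimal nearest-centroid tracker (illustrative, not the paper's code)."""

    def __init__(self, max_dist=50.0):
        self.max_dist = max_dist  # assumed matching threshold, in pixels
        self.tracks = {}          # track_id -> last centroid (x, y)
        self.next_id = 0

    def update(self, centroids):
        """Match new detection centroids to existing tracks by distance.

        Returns {track_id: (displacement_x, displacement_y)} for matched
        tracks, which a caller can use to flag wrong-way motion.
        """
        motions = {}
        unmatched = list(centroids)
        for tid, prev in list(self.tracks.items()):
            if not unmatched:
                break
            dists = [np.hypot(c[0] - prev[0], c[1] - prev[1]) for c in unmatched]
            j = int(np.argmin(dists))
            if dists[j] < self.max_dist:
                c = unmatched.pop(j)
                motions[tid] = (c[0] - prev[0], c[1] - prev[1])
                self.tracks[tid] = c
        for c in unmatched:  # start new tracks for unmatched detections
            self.tracks[self.next_id] = c
            self.next_id += 1
        return motions

# Usage: flag a track as wrong-way if it moves against the assumed
# lane direction (here, legal flow is toward -y in image coordinates).
tracker = CentroidTracker()
tracker.update([(100, 200)])               # first frame: creates track 0
for tid, (dx, dy) in tracker.update([(102, 230)]).items():
    if dy > 0:
        print(f"track {tid} may be wrong-way (moving +y)")
```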
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV footage instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose SurroundDepth, a method that incorporates information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse information from multiple views (a minimal attention sketch follows this entry).
In experiments, our method achieves the state-of-the-art performance on the challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
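The SurroundDepth entry above fuses features across surrounding views with a cross-view transformer. Below is a minimal sketch of the core mechanism, scaled dot-product attention where one camera's features attend to a neighboring view's features; the shapes, single-head design, and random projections are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def cross_view_attention(query_feats, context_feats, d_k=32, rng=None):
    """Single-head scaled dot-product attention across two camera views.

    query_feats:   (N, D) features from the view being refined
    context_feats: (M, D) features from a neighboring view
    Returns (N, D) features where each query token is a weighted mix of
    the other view's tokens. Projections are random stand-ins for
    learned weights.
    """
    rng = rng or np.random.default_rng(0)
    D = query_feats.shape[1]
    W_q = rng.standard_normal((D, d_k)) / np.sqrt(D)  # stand-in for learned W_Q
    W_k = rng.standard_normal((D, d_k)) / np.sqrt(D)
    W_v = rng.standard_normal((D, D)) / np.sqrt(D)

    Q = query_feats @ W_q                     # (N, d_k)
    K = context_feats @ W_k                   # (M, d_k)
    V = context_feats @ W_v                   # (M, D)
    scores = Q @ K.T / np.sqrt(d_k)           # (N, M) similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over context tokens
    return weights @ V                        # (N, D) fused features

# Usage: fuse a front camera's 64-dim tokens with a side camera's tokens.
front = np.random.default_rng(1).standard_normal((16, 64))
side = np.random.default_rng(2).standard_normal((20, 64))
fused = cross_view_attention(front, side)
print(fused.shape)  # (16, 64)
```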
- Traffic-Net: 3D Traffic Monitoring Using a Single Camera [1.1602089225841632]
We provide a practical platform for real-time traffic monitoring using a single CCTV traffic camera.
We adapt a custom YOLOv5 deep neural network for vehicle/pedestrian detection and pair it with an enhanced SORT tracking algorithm (a minimal association sketch follows this entry).
We also develop a hierarchical traffic modelling solution based on short- and long-term temporal video data streams.
arXiv Detail & Related papers (2021-09-19T16:59:01Z)
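The Traffic-Net entry above pairs YOLOv5 detections with an enhanced SORT tracker. The association step at SORT's core can be sketched as IoU-based matching solved with the Hungarian algorithm; the IoU threshold and box format below are assumptions, and full SORT adds Kalman-filter motion prediction that is omitted here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Match track boxes to detection boxes, SORT-style.

    Returns (track_index, detection_index) pairs whose IoU clears the
    (assumed) threshold. Unmatched detections would spawn new tracks;
    unmatched tracks would age out.
    """
    if not tracks or not detections:
        return []
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]

# Usage: one track, two detections; only the overlapping detection matches.
tracks = [(10, 10, 50, 50)]
detections = [(12, 11, 52, 49), (200, 200, 240, 240)]
print(associate(tracks, detections))  # [(0, 0)]
```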
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network to predict both the acceleration and the steering angle at each time step (a minimal policy sketch follows this entry).
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
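The intersection-handling entry above trains a network to output continuous acceleration and steering commands. A minimal sketch of such a policy head follows; the state layout, layer sizes, and tanh squashing are illustrative assumptions, and the reinforcement learning update itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed state: e.g. ego speed, distance to the intersection, and the
# relative positions/speeds of nearby agents, flattened to one vector.
STATE_DIM, HIDDEN, ACTION_DIM = 12, 64, 2  # actions: (acceleration, steering)

# Random stand-ins for weights a DRL algorithm (e.g. an actor-critic
# method) would learn; the initialization scale is a common heuristic.
W1 = rng.standard_normal((STATE_DIM, HIDDEN)) / np.sqrt(STATE_DIM)
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, ACTION_DIM)) / np.sqrt(HIDDEN)
b2 = np.zeros(ACTION_DIM)

def policy(state):
    """Map a state vector to continuous (acceleration, steering) in [-1, 1]."""
    h = np.tanh(state @ W1 + b1)  # hidden layer
    return np.tanh(h @ W2 + b2)   # squash actions to a bounded range

# Usage: one forward pass per time step for each agent.
state = rng.standard_normal(STATE_DIM)
accel, steer = policy(state)
print(f"accel={accel:+.2f}, steer={steer:+.2f}")
```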
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- DSEC: A Stereo Event Camera Dataset for Driving Scenarios [55.79329250951028]
This work presents the first high-resolution, large-scale stereo dataset with event cameras.
The dataset contains 53 sequences collected by driving in a variety of illumination conditions.
It provides ground truth disparity for the development and evaluation of event-based stereo algorithms.
arXiv Detail & Related papers (2021-03-10T12:10:33Z)
- Towards Autonomous Driving: a Multi-Modal 360$^{\circ}$ Perception Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
Tests of the system, deployed in an autonomous vehicle, have confirmed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z)
- DAWN: Vehicle Detection in Adverse Weather Nature Dataset [4.09920839425892]
We present DAWN, a new dataset of real-world images collected under various adverse weather conditions.
The dataset comprises 1,000 images from real traffic environments, divided into four weather conditions: fog, snow, rain, and sandstorms.
This data helps interpret the effects of adverse weather conditions on the performance of vehicle detection systems.
arXiv Detail & Related papers (2020-08-12T15:48:49Z)
- Improved YOLOv3 Object Classification in Intelligent Transportation System [29.002873450422083]
An algorithm based on YOLOv3 is proposed to detect and classify vehicles, drivers, and people on the highway.
The model performs well and is robust to road occlusion, varied poses, and extreme lighting.
arXiv Detail & Related papers (2020-04-08T11:45:13Z)