Robust Traffic Light Detection Using Salience-Sensitive Loss:
Computational Framework and Evaluations
- URL: http://arxiv.org/abs/2305.04516v1
- Date: Mon, 8 May 2023 07:22:15 GMT
- Title: Robust Traffic Light Detection Using Salience-Sensitive Loss:
Computational Framework and Evaluations
- Authors: Ross Greer, Akshay Gopalkrishnan, Jacob Landgren, Lulua Rakla, Anish
Gopalan, Mohan Trivedi
- Abstract summary: This paper proposes a traffic light detection model that focuses on salient lights, defined as the lights that affect the driver's future decisions.
We then use this salience property to construct the LAVA Salient Lights Dataset, the first US traffic light dataset with an annotated salience property.
We train a Deformable DETR object detection transformer model using Salience-Sensitive Focal Loss to emphasize stronger performance on salient traffic lights.
- Score: 0.3061098887924466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most important tasks in ensuring safe autonomous driving
is accurately detecting road traffic lights and determining how they impact the
driver's actions. In real-world driving, a scene may contain numerous traffic
lights with varying levels of relevance to the driver, so distinguishing and
detecting the lights that actually influence the driver's actions is a critical
safety task. This
paper proposes a traffic light detection model which focuses on this task by
first defining salient lights as the lights that affect the driver's future
decisions. We then use this salience property to construct the LAVA Salient
Lights Dataset, the first US traffic light dataset with an annotated salience
property. Subsequently, we train a Deformable DETR object detection transformer
model using Salience-Sensitive Focal Loss to emphasize stronger performance on
salient traffic lights, showing that a model trained with this loss function
has stronger recall than one trained without.
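The abstract does not spell out the exact form of Salience-Sensitive Focal Loss, but a minimal sketch of one plausible formulation follows: a standard binary focal loss whose per-detection terms are up-weighted whenever the matched ground-truth light is annotated as salient. The `salience_weight` factor, the `salient_mask` input, and all shapes here are illustrative assumptions, not the authors' exact definition.

```python
import torch
import torch.nn.functional as F

def salience_sensitive_focal_loss(logits, targets, salient_mask,
                                  alpha=0.25, gamma=2.0, salience_weight=2.0):
    """Binary focal loss with extra weight on salient traffic lights.

    logits:       (N,) raw classification scores for N candidate detections
    targets:      (N,) float labels in {0, 1}
    salient_mask: (N,) bool, True where the matched ground truth is salient
    (Hypothetical sketch; not the paper's exact formulation.)
    """
    # Standard binary focal loss (Lin et al., 2017).
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob. of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    focal = alpha_t * (1.0 - p_t) ** gamma * ce

    # Up-weight the terms for salient lights so errors on them cost more.
    weights = torch.ones_like(focal)
    weights[salient_mask] = salience_weight
    return (weights * focal).mean()

# Toy usage: 8 candidate detections, the first two matched to salient lights.
logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
salient_mask = torch.zeros(8, dtype=torch.bool)
salient_mask[:2] = True
loss = salience_sensitive_focal_loss(logits, targets, salient_mask)
```

In a Deformable DETR training loop, a term like this would presumably replace the usual classification focal loss while the box-regression losses stay unchanged.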
Related papers
- Towards Infusing Auxiliary Knowledge for Distracted Driver Detection [11.816566371802802]
Distracted driving is a leading cause of road accidents globally.
We propose KiD3, a novel method for distracted driver detection (DDD) that infuses auxiliary knowledge about semantic relations between entities in a scene and the structural configuration of the driver's pose.
Specifically, we construct a unified framework that integrates scene graphs and driver pose information with the visual cues in video frames to create a holistic representation of the driver's actions.
arXiv Detail & Related papers (2024-08-29T15:28:42Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries and have contributed to over 50 million deaths as of 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Robust Detection, Association, and Localization of Vehicle Lights: A Context-Based Cascaded CNN Approach and Evaluations [0.0]
We present a method for detecting a vehicle light given an upstream vehicle detection and approximation of a visible light's center.
We achieve an average distance error of 4.77 pixels from the ground-truth corner, about 16.33% of the vehicle light's size.
We propose that this model can be integrated into a pipeline to form a complete vehicle light detection network.
arXiv Detail & Related papers (2023-07-27T01:20:47Z)
- Patterns of Vehicle Lights: Addressing Complexities in Curation and Annotation of Camera-Based Vehicle Light Datasets and Metrics [0.0]
This paper explores the representation of vehicle lights in computer vision and its implications for various tasks in the field of autonomous driving.
Three important tasks in autonomous driving that can benefit from vehicle light detection are identified.
The challenges of collecting and annotating large datasets for training data-driven models are also addressed.
arXiv Detail & Related papers (2023-07-26T21:48:14Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Salient Sign Detection In Safe Autonomous Driving: AI Which Reasons Over Full Visual Context [2.799896314754614]
Various traffic signs in a driving scene have an unequal impact on the driver's decisions.
We construct a traffic sign detection model which emphasizes performance on salient signs.
We show that a model trained with Salience-Sensitive Focal Loss outperforms a model trained without it.
arXiv Detail & Related papers (2023-01-14T01:47:09Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
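TransFuser's actual architecture is detailed in the paper cited in the entry above; purely as an illustration of the attention-based sensor fusion its summary describes, here is a minimal, hypothetical sketch of mixing image and LiDAR feature tokens with self-attention. The module name, token counts, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AttentionSensorFusion(nn.Module):
    """Toy cross-modal fusion: concatenate image and LiDAR tokens and let
    multi-head self-attention exchange information between modalities.
    Illustrative only; not TransFuser's actual design."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, lidar_tokens):
        # Tokens from both sensors attend to each other in one sequence.
        tokens = torch.cat([img_tokens, lidar_tokens], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + fused)  # residual connection + normalization

# Toy usage: batch of 2, with 64 image tokens and 64 LiDAR BEV tokens.
img_tokens = torch.randn(2, 64, 256)
lidar_tokens = torch.randn(2, 64, 256)
fused = AttentionSensorFusion()(img_tokens, lidar_tokens)  # shape (2, 128, 256)
```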
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.