Patterns of Vehicle Lights: Addressing Complexities in Curation and
Annotation of Camera-Based Vehicle Light Datasets and Metrics
- URL: http://arxiv.org/abs/2307.14521v1
- Date: Wed, 26 Jul 2023 21:48:14 GMT
- Title: Patterns of Vehicle Lights: Addressing Complexities in Curation and
Annotation of Camera-Based Vehicle Light Datasets and Metrics
- Authors: Ross Greer, Akshay Gopalkrishnan, Maitrayee Keskar, Mohan Trivedi
- Abstract summary: This paper explores the representation of vehicle lights in computer vision and its implications for various tasks in the field of autonomous driving.
Three important tasks in autonomous driving that can benefit from vehicle light detection are identified.
The challenges of collecting and annotating large datasets for training data-driven models are also addressed.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the representation of vehicle lights in computer vision
and its implications for various tasks in the field of autonomous driving.
Different specifications for representing vehicle lights, including bounding
boxes, center points, corner points, and segmentation masks, are discussed in
terms of their strengths and weaknesses. Three important tasks in autonomous
driving that can benefit from vehicle light detection are identified: nighttime
vehicle detection, 3D vehicle orientation estimation, and dynamic trajectory
cues. Each task may require a different representation of the light. The
challenges of collecting and annotating large datasets for training data-driven
models are also addressed, leading to the introduction of the LISA Vehicle Lights
Dataset and associated Light Visibility Model, which provides light annotations
specifically designed for downstream applications in vehicle detection, intent
and trajectory prediction, and safe path planning. A comparison of existing
vehicle light datasets is provided, highlighting the unique features and
limitations of each dataset. Overall, this paper provides insights into the
representation of vehicle lights and the importance of accurate annotations for
training effective detection models in autonomous driving applications. Our
dataset and model are made available at
https://cvrr.ucsd.edu/vehicle-lights-dataset
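
The four representations discussed in the abstract (bounding boxes, center points, corner points, and segmentation masks) can be pictured as simple data structures. The sketch below is a minimal illustration, not the dataset's actual schema: class and field names are assumptions, and it shows only that corner points subsume the sparser forms, since a bounding box or center point can be derived from corners while a segmentation mask cannot.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in image pixels

@dataclass
class VehicleLightAnnotation:
    """Hypothetical container for one vehicle-light label.

    Corner points are the richest sparse representation: the other
    sparse forms (bounding box, center point) can be derived from them.
    """
    corners: List[Point]  # polygon corners of the light, in pixels

    def bounding_box(self) -> Tuple[float, float, float, float]:
        """Axis-aligned box (x_min, y_min, x_max, y_max) enclosing the corners."""
        xs, ys = zip(*self.corners)
        return min(xs), min(ys), max(xs), max(ys)

    def center_point(self) -> Point:
        """Centroid of the corners; keeps location but loses extent."""
        xs, ys = zip(*self.corners)
        return sum(xs) / len(xs), sum(ys) / len(ys)

# Example: a taillight annotated by four corners.
light = VehicleLightAnnotation(corners=[(410, 220), (455, 218), (457, 240), (412, 243)])
print(light.bounding_box())  # (410, 218, 457, 243)
print(light.center_point())  # (433.5, 230.25)
```

A segmentation mask, by contrast, is a dense per-pixel label that cannot be recovered from sparse points, which is one reason the choice of representation matters for each downstream task.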
Related papers
- Accurate Automatic 3D Annotation of Traffic Lights and Signs for Autonomous Driving
3D detection of traffic management objects, such as traffic lights and road signs, is vital for self-driving cars.
This paper introduces a novel method for automatically generating 3D bounding box annotations for traffic lights and signs, effective up to a range of 200 meters.
arXiv Detail & Related papers (2024-09-19T09:50:03Z)
- SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from Traffic Monitoring Cameras
We propose SKoPe3D, a unique synthetic vehicle keypoint dataset from a roadside perspective.
SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints.
Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data.
arXiv Detail & Related papers (2023-09-04T02:57:30Z)
- Robust Detection, Association, and Localization of Vehicle Lights: A Context-Based Cascaded CNN Approach and Evaluations
We present a method for detecting a vehicle light given an upstream vehicle detection and an approximation of the visible light's center.
We achieve an average distance error of 4.77 pixels from the ground-truth corner, about 16.33% of the size of the vehicle light.
We propose that this model can be integrated into a pipeline to make a fully-formed vehicle light detection network.
arXiv Detail & Related papers (2023-07-27T01:20:47Z)
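
The corner-localization numbers in the entry above (4.77 pixels, about 16.33% of the light's size) suggest a simple normalized-error metric. The snippet below is one plausible reading, assuming Euclidean corner distance normalized by the diagonal of the light's ground-truth box; the paper may define the normalization differently.

```python
import math

def corner_error(pred, gt):
    """Euclidean pixel distance between predicted and ground-truth corners."""
    return math.dist(pred, gt)

def normalized_corner_error(pred, gt, light_box):
    """Corner error as a fraction of the light's size.

    Assumption: the 'size of the vehicle light' is the diagonal of its
    ground-truth bounding box (x_min, y_min, x_max, y_max).
    """
    x_min, y_min, x_max, y_max = light_box
    size = math.hypot(x_max - x_min, y_max - y_min)
    return corner_error(pred, gt) / size

# Example: a 4.77 px error on a light with a ~29 px diagonal is a
# relative error of roughly 16%, comparable to the reported 16.33%.
err = normalized_corner_error((100.0, 50.0), (104.77, 50.0), (90, 40, 115, 55))
print(f"{err:.2%}")  # 16.36%
```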
- Robust Traffic Light Detection Using Salience-Sensitive Loss: Computational Framework and Evaluations
This paper proposes a traffic light detection model that defines salient lights as those affecting the driver's future decisions.
We then use this salience property to construct the LAVA Salient Lights dataset, the first US traffic light dataset with an annotated salience property.
We train a Deformable DETR object detection transformer model using Salience-Sensitive Focal Loss to emphasize stronger performance on salient traffic lights.
arXiv Detail & Related papers (2023-05-08T07:22:15Z)
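
The Salience-Sensitive Focal Loss named in the entry above can be read as standard focal loss with an extra weight on lights annotated as salient. The sketch below is an illustrative guess at that idea, not the authors' exact formulation; the `salient_weight` knob is an assumption.

```python
import torch
import torch.nn.functional as F

def salience_focal_loss(logits, targets, salience, gamma=2.0, salient_weight=2.0):
    """Focal loss that up-weights detections flagged as salient.

    logits, targets: per-detection classification logits and {0, 1} labels.
    salience: {0, 1} flag marking lights annotated as salient.
    """
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)               # probability assigned to the true class
    focal = (1.0 - p_t) ** gamma * ce  # standard focal modulation
    weights = 1.0 + (salient_weight - 1.0) * salience  # 1 if non-salient, 2 if salient
    return (weights * focal).mean()

# Example: two detections, only the second annotated as salient.
logits = torch.tensor([2.0, -1.0])
targets = torch.tensor([1.0, 1.0])
salience = torch.tensor([0.0, 1.0])
print(salience_focal_loss(logits, targets, salience))
```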
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Real-Time And Robust 3D Object Detection with Roadside LiDARs
We design a 3D object detection model that detects traffic participants in roadside LiDAR data in real time.
Our model uses an existing 3D detector as a baseline and improves its accuracy.
The resulting LiDAR-based 3D detector can be used for smart city applications.
arXiv Detail & Related papers (2022-07-11T21:33:42Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
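
The entry above fuses image and LiDAR representations with attention. The sketch below shows the general mechanism in stripped-down form (self-attention over concatenated tokens from both modalities); dimensions and token layout are illustrative assumptions, not the published TransFuser architecture.

```python
import torch
import torch.nn as nn

class MiniFusionBlock(nn.Module):
    """Toy attention-based fusion of image and LiDAR feature tokens.

    Tokens from both modalities are concatenated and passed through
    self-attention, so every image token can attend to every LiDAR
    token and vice versa.
    """
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, lidar_tokens):
        # img_tokens: (B, N_img, dim); lidar_tokens: (B, N_lidar, dim)
        tokens = torch.cat([img_tokens, lidar_tokens], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = self.norm(tokens + fused)  # residual connection
        n_img = img_tokens.shape[1]
        return fused[:, :n_img], fused[:, n_img:]  # split back per modality

block = MiniFusionBlock()
img = torch.randn(2, 16, 64)    # e.g. pooled camera feature tokens
lidar = torch.randn(2, 16, 64)  # e.g. pooled LiDAR BEV tokens
img_f, lidar_f = block(img, lidar)
print(img_f.shape, lidar_f.shape)  # torch.Size([2, 16, 64]) twice
```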
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic, requiring no human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
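
The multi-task model in the pedestrian-attributes entry above pairs detection with 32 attribute predictions. A generic way to picture the joint setup is a shared feature trunk with parallel task heads, sketched below under assumed dimensions; the cited work actually uses a composite-field framework rather than this plain head design.

```python
import torch
import torch.nn as nn

class PedestrianMTLHead(nn.Module):
    """Generic multi-task head: shared features, parallel task outputs."""
    def __init__(self, feat_dim=128, num_attributes=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.detection = nn.Linear(feat_dim, 5)                # objectness + box (x, y, w, h)
        self.attributes = nn.Linear(feat_dim, num_attributes)  # one logit per attribute

    def forward(self, feats):
        h = self.shared(feats)
        return self.detection(h), self.attributes(h)

head = PedestrianMTLHead()
feats = torch.randn(4, 128)    # pooled features for four candidates
det, attrs = head(feats)
print(det.shape, attrs.shape)  # torch.Size([4, 5]) torch.Size([4, 32])
# A joint loss would sum a detection term and per-attribute BCE terms.
```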
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- LIBRE: The Multiple 3D LiDAR Dataset
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE provides the research community with a means for fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)