Robust Detection, Association, and Localization of Vehicle Lights: A
Context-Based Cascaded CNN Approach and Evaluations
- URL: http://arxiv.org/abs/2307.14571v2
- Date: Tue, 22 Aug 2023 02:08:51 GMT
- Authors: Akshay Gopalkrishnan, Ross Greer, Maitrayee Keskar, Mohan Trivedi
- Abstract summary: We present a method for detecting a vehicle light given an upstream vehicle detection and approximation of a visible light's center.
We achieve an average distance error from the ground truth corner of 4.77 pixels, about 16.33% of the size of the vehicle light on average.
We propose that this model can be integrated into a pipeline to make a fully-formed vehicle light detection network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicle light detection, association, and localization are required for
important downstream safe autonomous driving tasks, such as predicting a
vehicle's light state to determine if the vehicle is making a lane change or
turning. Currently, many vehicle light detectors use single-stage detectors
which predict bounding boxes to identify a vehicle light, in a manner decoupled
from vehicle instances. In this paper, we present a method for detecting a
vehicle light given an upstream vehicle detection and approximation of a
visible light's center. Our method predicts four approximate corners associated
with each vehicle light. We experiment with CNN architectures, data
augmentation, and contextual preprocessing methods designed to reduce
surrounding-vehicle confusion. We achieve an average distance error from the
ground truth corner of 4.77 pixels, about 16.33% of the size of the vehicle
light on average. We train and evaluate our model on the LISA Lights Dataset,
allowing us to thoroughly evaluate our vehicle light corner detection model on
a large variety of vehicle light shapes and lighting conditions. We propose
that this model can be integrated into a pipeline with vehicle detection and
vehicle light center detection to make a fully-formed vehicle light detection
network, valuable to identifying trajectory-informative signals in driving
scenes.
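The reported metric (a mean distance of 4.77 pixels from the ground-truth corner, about 16.33% of the light's size) can be sketched as follows. This is a minimal illustration, not the authors' code: the helper names are hypothetical, and since the abstract does not specify how "size of the vehicle light" is measured, using the diagonal of the light's bounding box is an assumption made here for the normalization.

```python
import math

def corner_distance_error(pred_corners, gt_corners):
    """Mean Euclidean distance (in pixels) between predicted and
    ground-truth corners of one vehicle light (four corners each)."""
    dists = [math.dist(p, g) for p, g in zip(pred_corners, gt_corners)]
    return sum(dists) / len(dists)

def relative_error(mean_error_px, light_width_px, light_height_px):
    """Error as a fraction of the light's size; the size is taken here
    as the diagonal of the light's bounding box (an assumed choice)."""
    return mean_error_px / math.hypot(light_width_px, light_height_px)

# Example: one predicted corner is off by 1 px, the other three are exact.
pred = [(0, 0), (10, 0), (10, 5), (0, 5)]
gt = [(1, 0), (10, 0), (10, 5), (0, 5)]
err = corner_distance_error(pred, gt)   # mean over the four corners
rel = relative_error(err, 10, 5)        # fraction of the light's diagonal
```

Averaging such per-light errors over the evaluation set would yield figures comparable to those quoted above.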
Related papers
- Patterns of Vehicle Lights: Addressing Complexities in Curation and
Annotation of Camera-Based Vehicle Light Datasets and Metrics [0.0]
This paper explores the representation of vehicle lights in computer vision and its implications for various tasks in the field of autonomous driving.
Three important tasks in autonomous driving that can benefit from vehicle light detection are identified.
The challenges of collecting and annotating large datasets for training data-driven models are also addressed.
arXiv Detail & Related papers (2023-07-26T21:48:14Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Combining Visual Saliency Methods and Sparse Keypoint Annotations to
Providently Detect Vehicles at Night [2.0299248281970956]
We explore the potential of saliency-based approaches to create different object representations based on visual saliency and sparse keypoint annotations.
We show that this approach allows for an automated derivation of different object representations.
We provide further powerful tools and methods to study the problem of detecting vehicles at night before they are actually visible.
arXiv Detail & Related papers (2022-04-25T09:56:34Z) - Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Provident Vehicle Detection at Night: The PVDN Dataset [2.8730465903425877]
We present a novel dataset containing 59,746 annotated grayscale images from 346 different scenes in a rural environment at night.
In these images, all oncoming vehicles, their corresponding light objects (e.g., headlamps), and their respective light reflections (e.g., light reflections on guardrails) are labeled.
With that, we are providing the first open-source dataset with comprehensive ground truth data to enable research into new methods of detecting oncoming vehicles.
arXiv Detail & Related papers (2020-12-31T00:06:26Z) - Computer Vision based Accident Detection for Autonomous Vehicles [0.0]
We propose a novel support system for self-driving cars that detects vehicular accidents through a dashboard camera.
The framework has been tested on a custom dataset of dashcam footage and achieves a high accident detection rate while maintaining a low false alarm rate.
arXiv Detail & Related papers (2020-12-20T08:51:10Z) - Ego-motion and Surrounding Vehicle State Estimation Using a Monocular
Camera [11.29865843123467]
We propose a novel machine learning method to estimate ego-motion and surrounding vehicle state using a single monocular camera.
Our approach is based on a combination of three deep neural networks to estimate the 3D vehicle bounding box, depth, and optical flow from a sequence of images.
arXiv Detail & Related papers (2020-05-04T16:41:38Z) - VehicleNet: Learning Robust Visual Representation for Vehicle
Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer to safer self-driving under unseen conditions with limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z) - Road Curb Detection and Localization with Monocular Forward-view Vehicle
Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle to curb distance in real time with mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.