Provident Vehicle Detection at Night: The PVDN Dataset
- URL: http://arxiv.org/abs/2012.15376v2
- Date: Sat, 23 Jan 2021 22:00:48 GMT
- Title: Provident Vehicle Detection at Night: The PVDN Dataset
- Authors: Lars Ohnemus and Lukas Ewecker and Ebubekir Asan and Stefan Roos and
Simon Isele and Jakob Ketterer and Leopold Müller and Sascha Saralajew
- Abstract summary: We present a novel dataset containing 59,746 annotated grayscale images from 346 different scenes in a rural environment at night.
In these images, all oncoming vehicles, their corresponding light objects (e.g., headlamps), and their respective light reflections (e.g., light reflections on guardrails) are labeled.
With that, we provide the first open-source dataset with comprehensive ground truth data to enable research into new methods of detecting oncoming vehicles.
- Score: 2.8730465903425877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For advanced driver assistance systems, it is crucial to have information
about oncoming vehicles as early as possible. At night, this task is especially
difficult due to poor lighting conditions, which is why every vehicle uses
headlamps after dark to improve sight and ensure safe driving. As humans, we
intuitively infer the presence of oncoming vehicles before they are physically
visible by noticing the light reflections caused by their headlamps. In this
paper, we present a novel dataset containing 59,746 annotated grayscale images
from 346 different scenes in a rural environment at night. In these images, all
oncoming vehicles, their corresponding light objects (e.g., headlamps), and
their respective light reflections (e.g., light reflections on guardrails) are
labeled. This is accompanied by an in-depth analysis of the dataset
characteristics. With that, we provide the first open-source dataset with
comprehensive ground truth data to enable research into new methods of
detecting oncoming vehicles based on the light reflections they cause, long
before they are directly visible. We consider this an essential step toward
further closing the performance gap between current advanced driver assistance
systems and human behavior.
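To make the ground-truth structure concrete, here is a minimal sketch of loading per-image labels and drawing them onto a frame. The directory layout, file names, and JSON fields (`objects`, `class`, `bounding_box`) are illustrative assumptions, not the dataset's documented format.

```python
import json
from pathlib import Path

import cv2  # pip install opencv-python

# NOTE: directory layout, file names, and JSON schema below are
# illustrative assumptions, not the documented PVDN format.
DATASET_ROOT = Path("pvdn")

def load_objects(image_id: str) -> list:
    """Return the labeled objects (vehicles, light objects, reflections)
    for one image from its per-image annotation file (assumed layout)."""
    with open(DATASET_ROOT / "annotations" / f"{image_id}.json") as f:
        return json.load(f)["objects"]  # assumed top-level key

def draw_labels(image_id: str):
    """Draw one rectangle and class name per labeled object."""
    img = cv2.imread(str(DATASET_ROOT / "images" / f"{image_id}.png"),
                     cv2.IMREAD_GRAYSCALE)
    for obj in load_objects(image_id):
        # assumed fields: "class" in {"vehicle", "light_object", "reflection"}
        # and "bounding_box" = [x_min, y_min, x_max, y_max] in pixels
        x0, y0, x1, y1 = obj["bounding_box"]
        cv2.rectangle(img, (x0, y0), (x1, y1), color=255, thickness=1)
        cv2.putText(img, obj["class"], (x0, max(y0 - 4, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, 255)
    return img

if __name__ == "__main__":
    cv2.imwrite("labeled_example.png", draw_labels("scene_0001_frame_0001"))
```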
Related papers
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Robust Detection, Association, and Localization of Vehicle Lights: A Context-Based Cascaded CNN Approach and Evaluations [0.0]
We present a method for detecting a vehicle light given an upstream vehicle detection and approximation of a visible light's center.
We achieve an average distance error of 4.77 pixels from the ground-truth corner, about 16.33% of the size of the vehicle light on average (see the metric sketch after this entry).
We propose that this model can be integrated into a pipeline to form a complete vehicle light detection network.
arXiv Detail & Related papers (2023-07-27T01:20:47Z)
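For context, the relative figure is just the pixel error normalized by a per-light size. A minimal sketch of such a metric follows; the summary does not say what "size" means here (e.g., bounding-box diagonal), so the normalization basis is an assumption.

```python
import numpy as np

def corner_error(pred_corners: np.ndarray,
                 gt_corners: np.ndarray,
                 light_sizes: np.ndarray) -> tuple:
    """Average corner localization error, absolute (pixels) and relative.

    pred_corners, gt_corners: (N, 2) arrays of (x, y) pixel coordinates.
    light_sizes: (N,) per-light size in pixels; the normalization basis
    is an assumption, as the summary does not specify it.
    """
    dists = np.linalg.norm(pred_corners - gt_corners, axis=1)
    return float(dists.mean()), float((dists / light_sizes).mean())

# Sanity check: a 4.77 px mean error on lights roughly 29 px in size
# gives 4.77 / 29.2 ≈ 0.163, matching the reported ~16.33%.
```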
- Patterns of Vehicle Lights: Addressing Complexities in Curation and Annotation of Camera-Based Vehicle Light Datasets and Metrics [0.0]
This paper explores the representation of vehicle lights in computer vision and its implications for various tasks in the field of autonomous driving.
Three important tasks in autonomous driving that can benefit from vehicle light detection are identified.
The challenges of collecting and annotating large datasets for training data-driven models are also addressed.
arXiv Detail & Related papers (2023-07-26T21:48:14Z)
- Robust Traffic Light Detection Using Salience-Sensitive Loss: Computational Framework and Evaluations [0.3061098887924466]
This paper proposes a traffic light detection model which focuses on defining salient lights as the lights that affect the driver's future decisions.
We then use this salience property to construct the LAVA Salient Lights dataset, the first US traffic light dataset with an annotated salience property.
We train a Deformable DETR object detection transformer model using Salience-Sensitive Focal Loss to emphasize stronger performance on salient traffic lights (a sketch of such a salience-weighted loss follows below).
arXiv Detail & Related papers (2023-05-08T07:22:15Z)
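As one plausible reading of "Salience-Sensitive Focal Loss", here is a minimal sketch that up-weights the standard binary focal loss for targets flagged as salient. The weighting scheme and the `salient_weight` parameter are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def salience_sensitive_focal_loss(logits: torch.Tensor,
                                  targets: torch.Tensor,
                                  salient: torch.Tensor,
                                  alpha: float = 0.25,
                                  gamma: float = 2.0,
                                  salient_weight: float = 2.0) -> torch.Tensor:
    """Binary focal loss with extra weight on salient examples.

    logits, targets: (N,) raw scores and {0, 1} labels.
    salient: (N,) bool/float mask for lights that affect the driver's
    future decisions. `salient_weight` is a hypothetical knob, not a
    value taken from the paper.
    """
    targets = targets.float()
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)       # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    focal = alpha_t * (1 - p_t) ** gamma * ce         # standard focal loss
    weights = 1.0 + (salient_weight - 1.0) * salient.float()  # boost salient
    return (weights * focal).mean()
```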
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Modelling and Detection of Driver's Fatigue using Ontology [60.090278944561184]
Road accidents are the eighth leading cause of death worldwide.
Various factors cause driver's fatigue.
Ontological knowledge and rules for driver fatigue detection are to be integrated into an intelligent system.
arXiv Detail & Related papers (2022-08-31T08:42:28Z)
- Combining Visual Saliency Methods and Sparse Keypoint Annotations to Providently Detect Vehicles at Night [2.0299248281970956]
We explore the potential of saliency-based approaches to create different object representations from visual saliency and sparse keypoint annotations.
We show that this approach allows for an automated derivation of different object representations.
We provide further powerful tools and methods to study the problem of detecting vehicles at night before they are actually visible (one possible derivation is sketched below).
arXiv Detail & Related papers (2022-04-25T09:56:34Z)
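One hypothetical way to derive an object representation from a saliency map plus a single keypoint: binarize the map and return the bounding box of the connected component containing the keypoint. The thresholding scheme is an assumption, not the paper's method.

```python
import numpy as np
import cv2

def bbox_from_saliency(saliency: np.ndarray,
                       keypoint: tuple,
                       threshold: float = 0.5) -> tuple:
    """Derive a bounding box from a saliency map and one sparse keypoint.

    saliency: (H, W) float map in [0, 1]; keypoint: (x, y) pixel position.
    The threshold value is an illustrative assumption.
    """
    binary = (saliency >= threshold).astype(np.uint8)
    _, labels = cv2.connectedComponents(binary)
    label = labels[keypoint[1], keypoint[0]]  # (x, y) -> row = y, col = x
    if label == 0:
        raise ValueError("keypoint falls on background; lower the threshold")
    ys, xs = np.nonzero(labels == label)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```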
- Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception [59.2014692323323]
Small, far-away, or highly occluded objects are particularly challenging because there is limited information in the LiDAR point clouds for detecting them.
We propose a novel, end-to-end trainable Hindsight framework to extract contextual information from past data.
We show that this framework is compatible with most modern 3D detection architectures and can substantially improve their average precision on multiple autonomous driving datasets.
arXiv Detail & Related papers (2022-03-22T00:58:27Z)
- A Dataset for Provident Vehicle Detection at Night [3.1969855247377827]
We study the problem of how to map this intuitive human behavior to computer vision algorithms to detect oncoming vehicles at night.
We present an extensive open-source dataset containing 59,746 annotated grayscale images from 346 different scenes in a rural environment at night.
We discuss the characteristics of the dataset and the challenges in objectively describing visual cues such as light reflections.
arXiv Detail & Related papers (2021-05-27T15:31:33Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is a step toward safer self-driving under conditions unseen in limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)