Combining Visual Saliency Methods and Sparse Keypoint Annotations to
Providently Detect Vehicles at Night
- URL: http://arxiv.org/abs/2204.11535v1
- Date: Mon, 25 Apr 2022 09:56:34 GMT
- Title: Combining Visual Saliency Methods and Sparse Keypoint Annotations to
Providently Detect Vehicles at Night
- Authors: Lukas Ewecker, Lars Ohnemus, Robin Schwager, Stefan Roos, Sascha
Saralajew
- Abstract summary: We explore the potential of saliency-based approaches to create different object representations based on visual saliency and sparse keypoint annotations.
We show that this approach allows for an automated derivation of different object representations.
We provide further powerful tools and methods to study the problem of detecting vehicles at night before they are actually visible.
- Score: 2.0299248281970956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Provident detection of other road users at night has the potential for
increasing road safety. For this purpose, humans intuitively use visual cues,
such as light cones and light reflections emitted by other road users to be
able to react to oncoming traffic at an early stage. This behavior can be
imitated by computer vision methods by predicting the appearance of vehicles
based on emitted light reflections caused by the vehicle's headlights. Since
current object detection algorithms are mainly based on detecting directly
visible objects annotated via bounding boxes, the detection and annotation of
light reflections without sharp boundaries is challenging. For this reason, the
extensive open-source dataset PVDN (Provident Vehicle Detection at Night) was
published, which includes traffic scenarios at night with light reflections
annotated via keypoints. In this paper, we explore the potential of
saliency-based approaches to create different object representations based on
the visual saliency and sparse keypoint annotations of the PVDN dataset. For
that, we extend the general idea of Boolean map saliency towards a
context-aware approach by taking into consideration sparse keypoint annotations
by humans. We show that this approach allows for an automated derivation of
different object representations, such as binary maps or bounding boxes so that
detection models can be trained on different annotation variants and the
problem of providently detecting vehicles at night can be tackled from
different perspectives. With that, we provide further powerful tools and
methods to study the problem of detecting vehicles at night before they are
actually visible.
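To make the derivation of such annotation variants more concrete, the following is a minimal Python sketch of how a binary map and bounding boxes could be obtained from a grayscale frame and its sparse keypoint annotations. It is not the authors' implementation: a simple multi-threshold averaging step stands in for the full Boolean map saliency computation, the keypoint context is modeled as a Gaussian weighting with a hypothetical sigma parameter, and all function names and thresholds are illustrative.

```python
# Minimal sketch (not the authors' implementation) of deriving object
# representations -- a binary map and bounding boxes -- from a grayscale frame
# and sparse keypoint annotations. The multi-threshold Boolean-map step and the
# Gaussian keypoint weighting are simplifying assumptions.
import numpy as np
from scipy import ndimage


def keypoint_context(shape, keypoints, sigma=30.0):
    """Gaussian weight map centred on the sparse keypoint annotations."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    ctx = np.zeros(shape, dtype=np.float32)
    for x, y in keypoints:  # keypoints given as (x, y) pixel coordinates
        ctx = np.maximum(ctx, np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2)))
    return ctx


def boolean_map_saliency(gray, n_thresholds=8):
    """Mean of Boolean maps obtained by thresholding at evenly spaced gray levels."""
    levels = np.linspace(gray.min(), gray.max(), n_thresholds + 2)[1:-1]
    maps = [(gray > t).astype(np.float32) for t in levels]
    return np.mean(maps, axis=0)


def derive_representations(gray, keypoints, sal_thresh=0.5):
    """Combine saliency with keypoint context; return a binary map and boxes."""
    saliency = boolean_map_saliency(gray) * keypoint_context(gray.shape, keypoints)
    binary_map = saliency > sal_thresh
    labels, _ = ndimage.label(binary_map)  # connected components of the binary map
    boxes = []
    for sl_y, sl_x in ndimage.find_objects(labels):
        boxes.append((sl_x.start, sl_y.start, sl_x.stop, sl_y.stop))  # x_min, y_min, x_max, y_max
    return binary_map, boxes


# Toy example: one bright light reflection annotated by a single keypoint.
frame = np.zeros((120, 160), dtype=np.float32)
frame[40:60, 70:100] = 1.0
binary_map, boxes = derive_representations(frame, keypoints=[(85, 50)])
print(boxes)  # roughly [(70, 40, 100, 60)]
```

In this sketch, thresholding the context-weighted saliency yields the binary-map variant, and the connected components of that map directly provide the bounding-box variant, mirroring the different object representations described in the abstract.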
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier [2.44755919161855]
This paper aims to enhance the ability to predict nighttime driving behavior by identifying taillights of both human-driven and autonomous vehicles.
The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road.
To address the limited nighttime data, a unique pixel-wise image processing technique is implemented to convert daytime images into realistic night images.
arXiv Detail & Related papers (2023-10-25T15:23:33Z)
- Patterns of Vehicle Lights: Addressing Complexities in Curation and Annotation of Camera-Based Vehicle Light Datasets and Metrics [0.0]
This paper explores the representation of vehicle lights in computer vision and its implications for various tasks in the field of autonomous driving.
Three important tasks in autonomous driving that can benefit from vehicle light detection are identified.
The challenges of collecting and annotating large datasets for training data-driven models are also addressed.
arXiv Detail & Related papers (2023-07-26T21:48:14Z)
- SalienDet: A Saliency-based Feature Enhancement Algorithm for Object Detection for Autonomous Driving [160.57870373052577]
We propose a saliency-based OD algorithm (SalienDet) to detect unknown objects.
Our SalienDet utilizes a saliency-based algorithm to enhance image features for object proposal generation.
We design a dataset relabeling approach to differentiate the unknown objects from all objects in the training sample set to achieve open-world detection.
arXiv Detail & Related papers (2023-05-11T16:19:44Z)
- Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR based 3D Object Detection [50.959453059206446]
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector designed from a track-centric perspective instead of the conventional object-centric perspective.
arXiv Detail & Related papers (2023-04-24T17:59:05Z)
- Provident Vehicle Detection at Night for Advanced Driver Assistance Systems [3.7468898363447654]
We present a complete system capable of providently detecting oncoming vehicles at nighttime based on the light artifacts they cause.
We quantify the time benefit that the provident vehicle detection system provides compared to an in-production computer vision system.
arXiv Detail & Related papers (2021-07-23T15:27:17Z)
- A Dataset for Provident Vehicle Detection at Night [3.1969855247377827]
We study the problem of how to map this intuitive human behavior to computer vision algorithms to detect oncoming vehicles at night.
We present an extensive open-source dataset containing 59746 annotated grayscale images from 346 different scenes in a rural environment at night.
We discuss the characteristics of the dataset and the challenges in objectively describing visual cues such as light reflections.
arXiv Detail & Related papers (2021-05-27T15:31:33Z)
- Provident Vehicle Detection at Night: The PVDN Dataset [2.8730465903425877]
We present a novel dataset containing 59746 annotated grayscale images from 346 different scenes in a rural environment at night.
In these images, all oncoming vehicles, their corresponding light objects (e.g., headlamps), and their respective light reflections (e.g., light reflections on guardrails) are labeled.
With that, we are providing the first open-source dataset with comprehensive ground truth data to enable research into new methods of detecting oncoming vehicles.
arXiv Detail & Related papers (2020-12-31T00:06:26Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
- Parsing-based View-aware Embedding Network for Vehicle Re-Identification [138.11983486734576]
We propose a parsing-based view-aware embedding network (PVEN) to achieve the view-aware feature alignment and enhancement for vehicle ReID.
The experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-10T13:06:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.