Road obstacles positional and dynamic features extraction combining
object detection, stereo disparity maps and optical flow data
- URL: http://arxiv.org/abs/2006.14011v1
- Date: Wed, 24 Jun 2020 19:29:06 GMT
- Title: Road obstacles positional and dynamic features extraction combining
object detection, stereo disparity maps and optical flow data
- Authors: Thiago Rateke and Aldo von Wangenheim
- Abstract summary: It is important that a visual perception system for navigation purposes identifies obstacles.
We present an approach for the identification of obstacles and extraction of class, position, depth and motion information.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: One of the most relevant tasks in an intelligent vehicle navigation
system is the detection of obstacles. It is important that a visual perception
system for navigation purposes identifies obstacles, and also that it can
extract essential information that may influence the vehicle's behavior,
whether that is generating an alert for a human driver or guiding an
autonomous vehicle in its driving decisions. In this paper we present an
approach for the identification of obstacles and the extraction of class,
position, depth and motion information from these objects, employing data
gained exclusively from passive vision. We performed our experiments on two
different datasets, and the results obtained show good efficacy in the use of
depth and motion patterns to assess the obstacles' potential threat status.
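The fusion the abstract describes, aggregating a stereo disparity map and an optical-flow field inside each detection box to recover class, position, depth and motion, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the helper names, the median/mean aggregation and the camera parameters are all assumptions.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # invalid disparity -> treat as very far away
    return focal_px * baseline_m / disparity_px

def obstacle_features(bbox, cls, disparity_map, flow_map, focal_px, baseline_m):
    """Aggregate per-obstacle depth and motion inside one detection box.

    bbox: (x0, y0, x1, y1) in pixels; disparity_map[y][x] is disparity in
    pixels; flow_map[y][x] = (dx, dy) is optical flow in pixels/frame.
    """
    x0, y0, x1, y1 = bbox
    disps, flows_x, flows_y = [], [], []
    for y in range(y0, y1):
        for x in range(x0, x1):
            d = disparity_map[y][x]
            if d > 0:               # keep only valid disparity samples
                disps.append(d)
            fx, fy = flow_map[y][x]
            flows_x.append(fx)
            flows_y.append(fy)
    disps.sort()
    median_disp = disps[len(disps) // 2] if disps else 0.0
    depth = depth_from_disparity(median_disp, focal_px, baseline_m)
    if flows_x:
        mean_flow = (sum(flows_x) / len(flows_x), sum(flows_y) / len(flows_y))
    else:
        mean_flow = (0.0, 0.0)
    return {"class": cls,
            "position": ((x0 + x1) // 2, (y0 + y1) // 2),
            "depth_m": depth,
            "motion_px": mean_flow}
```

The median disparity is used here (rather than the mean) because disparity maps typically contain holes and outliers inside a bounding box; the motion vector is simply the mean flow over the box.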
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries, contributing to over 50 million deaths by 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Perspective Aware Road Obstacle Detection [104.57322421897769]
We show that road obstacle detection techniques ignore the fact that, in practice, the apparent size of the obstacles decreases as their distance to the vehicle increases.
We leverage this by computing a scale map encoding the apparent size of a hypothetical object at every image location.
We then leverage this perspective map to generate training data by injecting onto the road synthetic objects whose size corresponds to the perspective foreshortening.
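The scale-map idea in this entry can be sketched with the usual flat-ground pinhole model: a ground pixel at image row v below the horizon row v0 lies at depth Z = f * cam_height / (v - v0), so a hypothetical object of physical height obj_height appears with pixel height f * obj_height / Z = obj_height * (v - v0) / cam_height. This is a hedged sketch under those assumptions, not the paper's actual computation, and all parameter names are illustrative:

```python
def scale_map(rows, horizon_row, cam_height_m, obj_height_m):
    """Apparent pixel height of a hypothetical object at each image row,
    assuming a pinhole camera over a flat ground plane."""
    scales = []
    for v in range(rows):
        if v <= horizon_row:
            scales.append(0.0)  # at/above the horizon: no ground intersection
        else:
            # f * obj_height / Z with Z = f * cam_height / (v - horizon_row)
            scales.append(obj_height_m * (v - horizon_row) / cam_height_m)
    return scales
```

Note that the focal length cancels out in this model: the apparent size grows linearly with the distance of the row below the horizon.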
arXiv Detail & Related papers (2022-10-04T17:48:42Z)
- 3D Object Detection for Autonomous Driving: A Comprehensive Survey [48.30753402458884]
3D object detection, which intelligently predicts the locations, sizes, and categories of the critical 3D objects near an autonomous vehicle, is an important part of a perception system.
This paper reviews the advances in 3D object detection for autonomous driving.
arXiv Detail & Related papers (2022-06-19T19:43:11Z)
- Conquering Ghosts: Relation Learning for Information Reliability Representation and End-to-End Robust Navigation [0.0]
Environmental disturbances are inevitable in real self-driving applications.
One of the main issues is false positive detection, i.e., a ghost object that does not actually exist or is detected in the wrong position (such as a non-existent vehicle).
Traditional navigation methods tend to avoid every detected object for safety.
A potential solution is to detect ghosts through relation learning across the whole scenario and to develop an integrated end-to-end navigation system.
arXiv Detail & Related papers (2022-03-14T14:11:12Z)
- Object Detection in Autonomous Vehicles: Status and Open Challenges [4.226118870861363]
Object detection is a computer vision task that has become an integral part of many consumer applications today.
Deep learning-based object detectors play a vital role in finding and localizing these objects in real-time.
This article discusses the state-of-the-art in object detectors and open challenges for their integration into autonomous vehicles.
arXiv Detail & Related papers (2022-01-19T16:45:16Z)
- Dynamic and Static Object Detection Considering Fusion Regions and Point-wise Features [7.41540085468436]
This paper proposes a new approach to detect static and dynamic objects in front of an autonomous vehicle.
Our approach can also get other characteristics from the objects detected, like their position, velocity, and heading.
To demonstrate our proposal's performance, we assess it on a benchmark dataset and on real-world data obtained from an autonomous platform.
arXiv Detail & Related papers (2021-07-27T09:42:18Z)
- VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection [15.36267013724161]
We propose a visual analytics system, VATLD, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications.
The disentangled representation learning extracts data semantics to augment human cognition with human-friendly visual summarization.
We also demonstrate the effectiveness of various performance improvement strategies with our visual analytics system, VATLD, and illustrate some practical implications for safety-critical applications in autonomous driving.
arXiv Detail & Related papers (2020-09-27T22:39:00Z)
- Training-free Monocular 3D Event Detection System for Traffic Surveillance [93.65240041833319]
Existing event detection systems are mostly learning-based and have achieved convincing performance when a large amount of training data is available.
In real-world scenarios, collecting sufficient labeled training data is expensive and sometimes impossible.
We propose a training-free monocular 3D event detection system for traffic surveillance.
arXiv Detail & Related papers (2020-02-01T04:42:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.