Conquering Ghosts: Relation Learning for Information Reliability
Representation and End-to-End Robust Navigation
- URL: http://arxiv.org/abs/2203.09952v1
- Date: Mon, 14 Mar 2022 14:11:12 GMT
- Title: Conquering Ghosts: Relation Learning for Information Reliability
Representation and End-to-End Robust Navigation
- Authors: Kefan Jin, Xingyao Han
- Abstract summary: Environmental disturbances are inevitable in real self-driving applications.
One of the main issues is false positive detection, i.e., a ghost object that does not really exist or appears in the wrong position (such as a non-existent vehicle).
Traditional navigation methods tend to avoid every detected object for safety.
A potential solution is to detect the ghost through relation learning among the whole scenario and develop an integrated end-to-end navigation system.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Environmental disturbances, such as sensor noise, varying lighting
conditions, challenging weather, and external adversarial perturbations, are
inevitable in real self-driving applications. Existing research and testing
have shown that they can severely degrade a vehicle's perception ability and
performance. One of the main issues is false positive detection, i.e., a
ghost object that does not really exist or appears in the wrong position (such as
a non-existent vehicle). Traditional navigation methods tend to avoid every
detected object for safety; however, avoiding a ghost object may lead the
vehicle into an even more dangerous situation, such as sudden braking on the
highway. Given the variety of disturbance types, it is difficult to address
this issue at the perception level. A potential solution is to detect the
ghost through relation learning over the whole scene and to develop an
integrated end-to-end navigation system. Our underlying logic is that the
behavior of every vehicle in the scene is influenced by its neighbors:
normal vehicles behave in a logical way, while ghost vehicles do not. By
learning the spatio-temporal relations among surrounding vehicles, an
information reliability representation is learned for each detected vehicle, and
a robust navigation network is then developed. In contrast to existing works, we
encourage the network to learn by itself how to represent reliability and how to
aggregate all the information with its uncertainties, thus increasing
efficiency and generalizability. To the best of the authors' knowledge, this
paper provides the first work on using graph relation learning to achieve
end-to-end robust navigation in the presence of ghost vehicles. Simulation
results on the CARLA platform demonstrate the feasibility and effectiveness of
the proposed method in various scenarios.
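The core intuition above, that a ghost vehicle is inconsistent with its neighborhood while normal vehicles are not, can be illustrated with a minimal sketch. This is not the paper's actual network (the paper learns the representation end-to-end with a graph model); the function name and the simple distance-to-neighborhood-consensus metric below are hypothetical stand-ins for the learned reliability representation.

```python
import numpy as np

def reliability_scores(features: np.ndarray, adjacency: np.ndarray) -> np.ndarray:
    """Toy sketch: score each detected vehicle in (0, 1] by how well its
    state agrees with the average state of its graph neighbors.

    features : (n, d) array of per-vehicle state features (e.g. position).
    adjacency: (n, n) boolean matrix; adjacency[i, j] means j is a neighbor of i.
    """
    n = features.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        neighbors = np.where(adjacency[i])[0]
        if len(neighbors) == 0:
            scores[i] = 0.5  # no context available: stay uncertain
            continue
        consensus = features[neighbors].mean(axis=0)
        # Large deviation from the neighborhood consensus -> low reliability.
        scores[i] = 1.0 / (1.0 + np.linalg.norm(features[i] - consensus))
    return scores

# Three detections: two consistent vehicles and one far-off "ghost".
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
adj = ~np.eye(3, dtype=bool)  # fully connected, no self-loops
print(reliability_scores(feats, adj))  # ghost (index 2) scores lowest
```

In the paper this hand-crafted score is replaced by a learned spatio-temporal representation, and the navigation network consumes the per-vehicle reliabilities directly instead of thresholding them.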
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- ScaTE: A Scalable Framework for Self-Supervised Traversability Estimation in Unstructured Environments [7.226357394861987]
In this work, we introduce a scalable framework for learning self-supervised traversability.
We train a neural network that predicts the proprioceptive experience that a vehicle would undergo from 3D point clouds.
With driving data of various vehicles gathered from simulation and the real world, we show that our framework is capable of learning the self-supervised traversability of various vehicles.
arXiv Detail & Related papers (2022-09-14T09:52:26Z)
- Dynamic and Static Object Detection Considering Fusion Regions and Point-wise Features [7.41540085468436]
This paper proposes a new approach to detect static and dynamic objects in front of an autonomous vehicle.
Our approach can also get other characteristics from the objects detected, like their position, velocity, and heading.
To demonstrate our proposal's performance, we assess it on a benchmark dataset and on real-world data obtained from an autonomous platform.
arXiv Detail & Related papers (2021-07-27T09:42:18Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle [58.720142291102135]
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
- Object Detection Under Rainy Conditions for Autonomous Vehicles: A Review of State-of-the-Art and Emerging Techniques [5.33024001730262]
This paper presents a tutorial on state-of-the-art techniques for mitigating the influence of rainy conditions on an autonomous vehicle's ability to detect objects.
Our goal includes surveying and analyzing the performance of object detection methods trained and tested using visual data captured under clear and rainy conditions.
arXiv Detail & Related papers (2020-06-30T02:05:10Z)
- Road obstacles positional and dynamic features extraction combining object detection, stereo disparity maps and optical flow data [0.0]
It is important that a visual perception system for navigation purposes identifies obstacles.
We present an approach for the identification of obstacles and extraction of class, position, depth and motion information.
arXiv Detail & Related papers (2020-06-24T19:29:06Z)
- Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion [16.018962965273495]
All-day and all-weather navigation is a critical capability for autonomous driving.
We propose a probabilistic driving model with multi-perception capability, utilizing information from the camera, lidar, and radar.
The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments.
arXiv Detail & Related papers (2020-05-05T03:48:10Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.