Learning-based 3D Occupancy Prediction for Autonomous Navigation in
Occluded Environments
- URL: http://arxiv.org/abs/2011.03981v2
- Date: Sat, 27 Mar 2021 10:54:16 GMT
- Title: Learning-based 3D Occupancy Prediction for Autonomous Navigation in
Occluded Environments
- Authors: Lizi Wang, Hongkai Ye, Qianhao Wang, Yuman Gao, Chao Xu and Fei Gao
- Abstract summary: We propose a method based on a deep neural network to reliably predict the occupancy distribution of unknown space.
We train our network on unlabeled data without ground truth and successfully apply it to real-time navigation in unseen environments without any refinement.
- Score: 7.825273522024438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous navigation of mobile robots, sensors suffer from massive
occlusion in cluttered environments, leaving a significant amount of space
unknown during planning. In practice, treating the unknown space either
optimistically or pessimistically limits planning performance, so
aggressiveness and safety cannot be satisfied at the same time. However, humans
can infer the exact shape of obstacles from only partial observation and
generate non-conservative trajectories that avoid possible collisions in
occluded space. Mimicking human behavior, in this paper we propose a method
based on a deep neural network to reliably predict the occupancy distribution of
unknown space. Specifically, the proposed method utilizes contextual information
of environments and learns from prior knowledge to predict obstacle distributions
in occluded space. We train our network on unlabeled data without ground truth
and successfully apply it to real-time navigation in unseen environments without
any refinement. Results show that our method improves the performance of a
kinodynamic planner, enhancing safety with no reduction of speed in cluttered
environments.
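For intuition, the sketch below shows one way such an occupancy predictor could be structured: a small 3D encoder-decoder that maps a partially observed voxel grid to per-voxel occupancy probabilities. The channel layout, layer sizes, and training setup are assumptions for illustration and are not the authors' actual network.

```python
# Minimal sketch (not the authors' architecture): a 3D encoder-decoder that maps a
# partially observed occupancy grid to per-voxel occupancy probabilities.
# Input channels are an assumption: one channel for observed-occupied voxels and
# one for known-free voxels; everything else is treated as unknown.
import torch
import torch.nn as nn

class OccupancyPredictor(nn.Module):
    def __init__(self, in_channels: int = 2, base: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(base, 1, 4, stride=2, padding=1),
        )

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (batch, 2, D, H, W) partial observation; returns occupancy logits.
        return self.decoder(self.encoder(grid))

# Example: predict occupancy for a 32^3 local map around the robot.
partial = torch.zeros(1, 2, 32, 32, 32)
probs = torch.sigmoid(OccupancyPredictor()(partial))   # (1, 1, 32, 32, 32) in [0, 1]
```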
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
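As a loose stand-in for the idea of denoising out-of-sight position estimates before forecasting (this is a generic constant-velocity Kalman filter, not OOSTraj's learned vision-positioning module), the sketch below filters noisy 2D fixes and then extrapolates the cleaned track.

```python
# Illustrative only: denoise noisy (x, y) position fixes with a constant-velocity
# Kalman filter, then roll the filtered state forward as a simple trajectory forecast.
import numpy as np

def kalman_denoise(measurements, dt=0.1, q=1e-2, r=0.5):
    """Filter noisy (x, y) measurements with a constant-velocity model."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = np.zeros(4), np.eye(4)
    filtered = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (z - H @ x)                        # update with measurement
        P = (np.eye(4) - K @ H) @ P
        filtered.append(x.copy())
    return np.array(filtered)

track = kalman_denoise(np.cumsum(np.random.randn(50, 2) * 0.3, axis=0))
future = track[-1, :2] + track[-1, 2:] * 0.1 * np.arange(1, 11)[:, None]  # 10-step rollout
```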
- Belief Aided Navigation using Bayesian Reinforcement Learning for Avoiding Humans in Blind Spots [0.0]
This study introduces a novel algorithm, BNBRL+, built on the partially observable Markov decision process (POMDP) framework to assess risks in unobservable areas.
It integrates the dynamics between the robot, humans, and inferred beliefs to determine the navigation paths and embeds social norms within the reward function.
The model's ability to navigate effectively in spaces with limited visibility and avoid obstacles dynamically can significantly improve the safety and reliability of autonomous vehicles.
arXiv Detail & Related papers (2024-03-15T08:50:39Z)
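A minimal sketch of the kind of belief bookkeeping a POMDP-style planner such as BNBRL+ relies on is shown below; the single-cell blind-spot belief and the detector rates are illustrative assumptions, not the paper's model or reward design.

```python
# Maintain a probability that a blind spot hides a pedestrian and update it from
# noisy detections with a simple Bayes rule.
def update_blind_spot_belief(prior: float, detected: bool,
                             p_detect: float = 0.7, p_false: float = 0.1) -> float:
    """Bayes update of P(occupied) given a detector with known hit/false-alarm rates."""
    likelihood_occ = p_detect if detected else (1.0 - p_detect)
    likelihood_empty = p_false if detected else (1.0 - p_false)
    return likelihood_occ * prior / (likelihood_occ * prior + likelihood_empty * (1.0 - prior))

belief = 0.5                                   # no prior knowledge about the blind spot
for obs in [False, False, True]:               # two misses, then one detection
    belief = update_blind_spot_belief(belief, obs)
print(f"P(blind spot occupied) = {belief:.2f}")
```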
- Cooperative Probabilistic Trajectory Forecasting under Occlusion [110.4960878651584]
Occlusion-aware planning often requires communicating information about the occluded object to the ego agent for safe navigation.
In this paper, we design an end-to-end network that cooperatively estimates the current states of the occluded pedestrian in the reference frame of the ego agent.
We show that the ego agent's uncertainty-aware trajectory prediction for the occluded pedestrian closely matches the ground-truth trajectory obtained assuming no occlusion.
arXiv Detail & Related papers (2023-12-06T05:36:52Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
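The core idea of an implicit occupancy-flow field can be sketched as a network queried at continuous spatio-temporal points rather than on a dense grid; the feature dimension, MLP sizes, and output layout below are assumptions for illustration, not the paper's implementation.

```python
# Given scene features and a continuous query point (x, y, t), return an occupancy
# logit and a 2D flow vector, so occupancy is evaluated only where the planner asks.
import torch
import torch.nn as nn

class ImplicitOccFlow(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # 1 occupancy logit + 2 flow components
        )

    def forward(self, scene_feat: torch.Tensor, query: torch.Tensor):
        # scene_feat: (N, feat_dim) features gathered at each query location,
        # query: (N, 3) continuous (x, y, t) points requested by the planner.
        out = self.mlp(torch.cat([scene_feat, query], dim=-1))
        return out[:, :1], out[:, 1:]      # occupancy logit, flow (vx, vy)

queries = torch.rand(256, 3)               # e.g. points along candidate trajectories
occ_logit, flow = ImplicitOccFlow()(torch.randn(256, 64), queries)
```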
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
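For a concrete flavor of the chance constraints an SMPC layer enforces, the sketch below checks a generic linearized constraint under a Gaussian state estimate; the formulation and numbers are illustrative assumptions, not SABER's exact constraints.

```python
# For a half-space obstacle constraint a^T x <= b and a Gaussian state x ~ N(mu, Sigma),
# require a^T mu + z_(1-delta) * sqrt(a^T Sigma a) <= b so the constraint holds with
# probability at least 1 - delta.
import numpy as np
from statistics import NormalDist

def chance_constraint_ok(a, b, mu, Sigma, delta=0.05) -> bool:
    """True if P(a^T x > b) <= delta for x ~ N(mu, Sigma)."""
    z = NormalDist().inv_cdf(1.0 - delta)          # safety-margin multiplier
    margin = z * np.sqrt(a @ Sigma @ a)
    return bool(a @ mu + margin <= b)

# Keep the robot's predicted x-position (with estimated covariance) left of a wall at x = 2.
a, b = np.array([1.0, 0.0]), 2.0
mu, Sigma = np.array([1.2, 0.5]), np.diag([0.09, 0.04])
print(chance_constraint_ok(a, b, mu, Sigma))       # True: margin ~ 1.645 * 0.3 ~ 0.49
```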
- Online search of unknown terrains using a dynamical system-based path planning approach [0.0]
This study introduces a new scalable technique that helps a robot steer away from obstacles and cover the entire space in a short period of time.
Using this technique resulted in a 49% boost, on average, in the robot's performance compared to state-of-the-art planners.
arXiv Detail & Related papers (2021-03-22T14:00:04Z)
- Online Planning in Uncertain and Dynamic Environment in the Presence of Multiple Mobile Vehicles [5.894659354028797]
We investigate the autonomous navigation of a mobile robot in the presence of other moving vehicles under time-varying uncertain environmental disturbances.
We first predict the future state distributions of other vehicles to account for their uncertain behaviors affected by the time-varying disturbances.
We then construct a dynamic-obstacle-aware reachable space that contains states with high probabilities to be reached by the robot.
arXiv Detail & Related papers (2020-09-08T13:27:57Z)
- Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs [5.043563227694137]
We consider an autonomous exploration problem in which a range-sensing mobile robot is tasked with accurately mapping the landmarks in an a priori unknown environment efficiently in real-time.
We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space.
arXiv Detail & Related papers (2020-07-24T16:50:41Z)
- Online Mapping and Motion Planning under Uncertainty for Safe Navigation in Unknown Environments [3.2296078260106174]
This manuscript proposes an uncertainty-based framework for mapping and planning feasible motions online with probabilistic safety guarantees.
The proposed approach addresses the motion, probabilistic-safety, and online-computation constraints by: (i) mapping the surroundings to build an uncertainty-aware representation of the environment, and (ii) iteratively (re)planning motions to the goal that are kinodynamically feasible and probabilistically safe, using a multi-layered sampling-based planner in the belief space.
arXiv Detail & Related papers (2020-04-26T08:53:37Z)
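As a hedged illustration of the uncertainty-aware mapping step such a framework builds on (the log-odds values and per-cell update below are generic assumptions, not the manuscript's representation), a minimal occupancy-grid update looks like this:

```python
# A log-odds occupancy grid cell updated from range measurements; the resulting
# per-cell occupancy probability feeds a probabilistic-safety check during (re)planning.
import numpy as np

L_OCC, L_FREE, L_MIN, L_MAX = 0.85, -0.4, -4.0, 4.0   # assumed inverse-sensor-model log-odds

def update_cell(log_odds: float, hit: bool) -> float:
    """Fuse one observation of a cell; clamp to keep the map responsive."""
    return float(np.clip(log_odds + (L_OCC if hit else L_FREE), L_MIN, L_MAX))

def occupancy_probability(log_odds: float) -> float:
    return 1.0 / (1.0 + np.exp(-log_odds))

cell = 0.0                                  # unknown: p = 0.5
for hit in [True, True, False, True]:       # three hits, one miss along the beam
    cell = update_cell(cell, hit)
print(f"p(occupied) = {occupancy_probability(cell):.2f}")  # ~0.90, treated as unsafe
```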
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.