Robot Motion Planning as Video Prediction: A Spatio-Temporal Neural
Network-based Motion Planner
- URL: http://arxiv.org/abs/2208.11287v1
- Date: Wed, 24 Aug 2022 03:45:27 GMT
- Title: Robot Motion Planning as Video Prediction: A Spatio-Temporal Neural
Network-based Motion Planner
- Authors: Xiao Zang, Miao Yin, Lingyi Huang, Jingjin Yu, Saman Zonouz and Bo
Yuan
- Abstract summary: Neural network (NN)-based methods have emerged as an attractive approach for robot motion planning due to the strong learning capabilities of NN models and their inherently high parallelism.
We propose STP-Net, an end-to-end learning framework that can fully extract and leverage important spatio-temporal information to form an efficient neural motion planner.
- Score: 16.26965535164238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural network (NN)-based methods have emerged as an attractive
approach for robot motion planning due to the strong learning capabilities of
NN models and their inherently high parallelism. Despite current developments
in this direction, the efficient capture and processing of important
sequential and spatial information, in a direct and simultaneous way, is
still relatively under-explored. To overcome this challenge and unlock the
potential of neural networks for motion planning tasks, in this paper we
propose STP-Net, an end-to-end learning framework that can fully extract and
leverage important spatio-temporal information to form an efficient neural
motion planner. By interpreting the movement of the robot as a video clip,
robot motion planning is transformed into a video prediction task that
STP-Net can perform in a spatially and temporally efficient way. Empirical
evaluations across different seen and unseen environments show that, with
nearly 100% accuracy (i.e., success rate), STP-Net demonstrates very
promising performance with respect to both planning speed and path cost.
Compared with existing NN-based motion planners, STP-Net achieves at least
5x, 2.6x, and 1.8x faster speed with lower path cost on the 2D Random Forest,
2D Maze, and 3D Random Forest environments, respectively. Furthermore,
STP-Net can quickly and simultaneously compute multiple near-optimal paths in
multi-robot motion planning tasks.
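To make the video-prediction framing concrete, here is a minimal, illustrative sketch of the idea. The greedy next-frame heuristic below is only a placeholder for a trained spatio-temporal predictor; it is not STP-Net's architecture, and all names and shapes are our own assumptions.

```python
# Minimal sketch of "planning as video prediction": render the scene as frames,
# repeatedly predict the next frame, and read the robot's path off the frames.
# greedy_next_frame is a stand-in for a trained spatio-temporal network.
import numpy as np

def render_frame(occupancy, robot_xy, goal_xy):
    """Stack obstacle, robot, and goal layers into one 'video frame'."""
    h, w = occupancy.shape
    frame = np.zeros((3, h, w), dtype=np.float32)
    frame[0] = occupancy                      # channel 0: obstacles
    frame[1, robot_xy[1], robot_xy[0]] = 1.0  # channel 1: robot position
    frame[2, goal_xy[1], goal_xy[0]] = 1.0    # channel 2: goal position
    return frame

def greedy_next_frame(frame):
    """Placeholder predictor: move the robot one free cell toward the goal."""
    occ, robot, goal = frame
    ry, rx = np.argwhere(robot == 1.0)[0]
    gy, gx = np.argwhere(goal == 1.0)[0]
    step = np.sign([gy - ry, gx - rx]).astype(int)
    ny, nx = ry + step[0], rx + step[1]
    if occ[ny, nx] == 0:                      # only move into free space
        robot = np.zeros_like(robot)
        robot[ny, nx] = 1.0
    return np.stack([occ, robot, goal])

def plan(occupancy, start, goal, max_steps=64):
    """Roll the predictor forward; the robot channel traces out the path."""
    frame, path = render_frame(occupancy, start, goal), [start]
    for _ in range(max_steps):
        frame = greedy_next_frame(frame)
        y, x = np.argwhere(frame[1] == 1.0)[0]
        path.append((int(x), int(y)))
        if (int(x), int(y)) == goal:
            break
    return path

occupancy = np.zeros((8, 8), dtype=np.float32)
print(plan(occupancy, start=(0, 0), goal=(5, 5)))
```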
Related papers
- Potential Based Diffusion Motion Planning [73.593988351275]
We propose a new approach towards learning potential based motion planning.
We train a neural network to capture and learn easily optimizable potentials over motion planning trajectories.
We demonstrate its inherent composability, enabling generalization to a multitude of different motion constraints (see the sketch after this entry).
arXiv Detail & Related papers (2024-07-08T17:48:39Z)
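The composability claim can be illustrated with a small sketch: if each constraint contributes a potential over the trajectory, planning under several constraints is gradient descent on their sum. The hand-written quadratic potentials below are stand-ins for the paper's learned (diffusion-trained) potentials.

```python
# Sketch of composable potential-based planning: refine a trajectory by
# descending the SUMMED gradients of several potentials (goal attraction,
# obstacle repulsion, smoothness). Potentials here are toy, not learned.
import numpy as np

def goal_potential_grad(traj, goal):
    g = np.zeros_like(traj)
    g[-1] = 2.0 * (traj[-1] - goal)            # pull the endpoint to the goal
    return g

def obstacle_potential_grad(traj, center, radius):
    d = traj - center                          # push waypoints out of a disc
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    inside = (dist < radius).astype(float)
    return -inside * d / np.maximum(dist, 1e-6)

def smoothness_grad(traj):
    g = np.zeros_like(traj)
    g[1:-1] = 2 * traj[1:-1] - traj[:-2] - traj[2:]   # discrete curvature
    return g

def descend(traj, grads, start, lr=0.05, iters=200):
    for _ in range(iters):
        total = sum(g(traj) for g in grads)    # composition = summed gradients
        traj = traj - lr * total
        traj[0] = start                        # keep the start point pinned
    return traj

start, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
traj = np.linspace(start, goal, 20)            # straight-line initialization
grads = [lambda t: goal_potential_grad(t, goal),
         lambda t: obstacle_potential_grad(t, np.array([2.5, 2.5]), 1.0),
         smoothness_grad]
print(descend(traj, grads, start)[-1])         # endpoint stays near the goal
```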
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for practical deployment in real driving scenarios due to their high latency.
In this paper, we explore the use of neural architecture search (NAS) methods to find efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy; a Pareto-selection sketch follows this entry.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
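The latency/accuracy trade-off at the core of this search can be illustrated with a plain Pareto filter. The candidate architectures and their numbers below are invented for the example and are not from the paper.

```python
# Keep only architectures for which no other candidate is both faster
# (lower latency) and more accurate: the Pareto front of the search space.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    latency_ms: float   # lower is better
    accuracy: float     # higher is better

def pareto_front(cands):
    front = []
    for c in cands:
        dominated = any(
            o.latency_ms <= c.latency_ms and o.accuracy >= c.accuracy
            and (o.latency_ms, o.accuracy) != (c.latency_ms, c.accuracy)
            for o in cands)
        if not dominated:
            front.append(c)
    return sorted(front, key=lambda c: c.latency_ms)

cands = [Candidate("tiny", 8.0, 0.61), Candidate("small", 14.0, 0.66),
         Candidate("bloated", 15.0, 0.64), Candidate("large", 31.0, 0.71)]
print([c.name for c in pareto_front(cands)])  # "bloated" is dominated by "small"
```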
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures (a generic pruning sketch follows this entry).
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
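For context, a minimal sketch of structured channel pruning is shown below. Note that ATO learns which channels to remove via a controller network during training; the fixed L1-magnitude rule here is only a stand-in showing the mechanics of dropping output channels.

```python
# Structured pruning sketch: score each output channel of a conv kernel by
# its L1 norm and keep the strongest fraction. ATO instead learns the mask
# with a controller network while training from scratch.
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """weight: (out_channels, in_channels, kh, kw) convolution kernel."""
    scores = np.abs(weight).sum(axis=(1, 2, 3))   # L1 norm per output channel
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])       # indices of strongest k
    return weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.5)
print(pruned.shape, kept)   # (4, 3, 3, 3) and the surviving channel indices
```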
- Progressive Learning for Physics-informed Neural Motion Planning [1.9798034349981157]
Motion planning is one of the core robotics problems requiring fast methods for finding a collision-free robot motion path.
Recent advancements have led to a physics-informed NMP approach that directly solves the Eikonal equation for motion planning.
This paper presents a novel and tractable Eikonal equation formulation and introduces a new progressive learning strategy to train neural networks without expert data; a sketch of the Eikonal-residual loss follows this entry.
arXiv Detail & Related papers (2023-06-01T12:41:05Z)
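A rough sketch of the physics-informed training signal: a network predicting travel time is penalized by the residual of the Eikonal equation, so no expert demonstrations are needed. The tiny MLP and toy speed field below are our own stand-ins, not the paper's formulation.

```python
# Train T(xs, xg) so its gradient satisfies the Eikonal equation
# |grad_xg T| * S(xg) = 1, where S is a speed field (slow near obstacles).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def speed(x):  # toy speed field: slow inside a disc obstacle at the origin
    return torch.clamp(x.norm(dim=-1, keepdim=True) - 0.3, 0.05, 1.0)

def eikonal_loss(xs, xg):
    xg = xg.requires_grad_(True)
    t = net(torch.cat([xs, xg], dim=-1))
    grad = torch.autograd.grad(t.sum(), xg, create_graph=True)[0]
    return ((grad.norm(dim=-1, keepdim=True) * speed(xg) - 1.0) ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):
    # No expert paths: random start/goal pairs, PDE residual as the loss.
    xs, xg = torch.rand(256, 2), torch.rand(256, 2)
    loss = eikonal_loss(xs, xg)
    opt.zero_grad(); loss.backward(); opt.step()
```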
- Simultaneous Contact-Rich Grasping and Locomotion via Distributed Optimization Enabling Free-Climbing for Multi-Limbed Robots [60.06216976204385]
We present an efficient motion planning framework for simultaneously solving locomotion, grasping, and contact problems.
We demonstrate our proposed framework in hardware experiments, showing that the multi-limbed robot is able to realize various motions, including free-climbing at a slope angle of 45°, with a much shorter planning time.
arXiv Detail & Related papers (2022-07-04T13:52:10Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks (RNNs) are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time-horizon solution.
A deep Q-learning agent is employed as a high-level path planner, providing the SMPC with target positions that move the robots toward a desired global goal; a chance-constraint sketch follows this entry.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
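The chance-constraint mechanism in SMPC-style planners can be sketched compactly: a deterministic clearance requirement is inflated by the predicted state uncertainty so that it holds with probability at least 1 - eps. The sigma values below stand in for the RNN's covariance estimates.

```python
# Chance-constraint tightening under a Gaussian position-error model:
# required clearance grows with the predicted standard deviation sigma.
import numpy as np
from statistics import NormalDist

def tightened_clearance(r_robot, r_obs, sigma, eps=0.05):
    """Center distance needed so P(collision) <= eps (Gaussian assumption)."""
    z = NormalDist().inv_cdf(1.0 - eps)   # one-sided quantile, ~1.645 at 5%
    return r_robot + r_obs + z * sigma

def violates(p_robot, p_obs, r_robot, r_obs, sigma, eps=0.05):
    """Check one predicted position pair against the tightened bound."""
    dist = np.linalg.norm(p_robot - p_obs)
    return dist < tightened_clearance(r_robot, r_obs, sigma, eps)

# Uncertainty grows along the horizon, so later steps demand more clearance.
for k, sigma_k in enumerate([0.05, 0.15, 0.30]):
    print(k, round(tightened_clearance(0.3, 0.5, sigma_k), 3))
```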
- Learning Interaction-Aware Trajectory Predictions for Decentralized Multi-Robot Motion Planning in Dynamic Environments [10.345048137438623]
We introduce a novel trajectory prediction model based on recurrent neural networks (RNNs).
We then incorporate the trajectory prediction model into a decentralized model predictive control (MPC) framework for multi-robot collision avoidance; a sketch of this coupling follows the entry.
arXiv Detail & Related papers (2021-02-10T11:11:08Z)
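A minimal sketch of how a trajectory predictor plugs into decentralized MPC: each robot rolls out its own motion candidates, discards those that violate minimum separation from the predicted neighbor trajectories, and keeps the lowest-cost survivor. The constant-velocity predictor and sampling-based planner below are deliberate simplifications of the paper's RNN-plus-MPC pipeline.

```python
# Each robot plans against PREDICTED neighbor positions over the horizon;
# the predictor here is constant-velocity, standing in for the RNN model.
import numpy as np

def predict(pos, vel, horizon, dt=0.1):
    steps = np.arange(1, horizon + 1)[:, None] * dt
    return pos + steps * vel                       # (horizon, 2) rollout

def choose_velocity(pos, goal, neighbors, horizon=10, dt=0.1, d_min=0.5):
    candidates = [np.array([np.cos(a), np.sin(a)]) for a in
                  np.linspace(0, 2 * np.pi, 16, endpoint=False)]
    preds = [predict(p, v, horizon, dt) for p, v in neighbors]
    best, best_cost = None, np.inf
    for v in candidates:
        own = predict(pos, v, horizon, dt)
        if any(np.linalg.norm(own - q, axis=1).min() < d_min for q in preds):
            continue                               # violates separation
        cost = np.linalg.norm(own[-1] - goal)      # terminal distance to goal
        if cost < best_cost:
            best, best_cost = v, cost
    return best

pos, goal = np.array([0.0, 0.0]), np.array([3.0, 0.0])
neighbors = [(np.array([1.5, 0.0]), np.array([0.0, 0.0]))]  # one static robot
print(choose_velocity(pos, goal, neighbors))
```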
- An advantage actor-critic algorithm for robotic motion planning in dense and dynamic scenarios [0.8594140167290099]
In this paper, we modify the existing advantage actor-critic algorithm and adapt it to complex motion planning.
It achieves a higher success rate in motion planning with less processing time for the robot to reach its goal; a minimal actor-critic update sketch follows this entry.
arXiv Detail & Related papers (2021-02-05T12:30:23Z)
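For reference, a minimal advantage actor-critic update in PyTorch: the critic's value estimate converts returns into advantages, which weight the policy gradient. This is the generic A2C loss, not the authors' modified variant, and the batch below is fake.

```python
# Generic A2C loss: advantage A = R - V(s) scales the log-probability of the
# taken action; the critic regresses V toward the observed returns.
import torch

obs_dim, n_actions = 8, 4
policy = torch.nn.Linear(obs_dim, n_actions)   # actor head (action logits)
value = torch.nn.Linear(obs_dim, 1)            # critic head
opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()),
                       lr=3e-4)

def a2c_loss(obs, actions, returns, entropy_coef=0.01):
    dist = torch.distributions.Categorical(logits=policy(obs))
    v = value(obs).squeeze(-1)
    advantage = returns - v                        # A(s, a) = R - V(s)
    actor_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    return actor_loss + 0.5 * critic_loss - entropy_coef * dist.entropy().mean()

obs = torch.randn(32, obs_dim)                    # a fake batch of transitions
actions = torch.randint(0, n_actions, (32,))
returns = torch.randn(32)
loss = a2c_loss(obs, actions, returns)
opt.zero_grad(); loss.backward(); opt.step()
```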
- Neural Manipulation Planning on Constraint Manifolds [13.774614900994342]
We present Constrained Motion Planning Networks (CoMPNet), the first neural planner for multimodal kinematic constraints.
We show that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems.
It generalizes, with high success rates, to object locations in the given environments that were not seen during training.
arXiv Detail & Related papers (2020-08-09T18:58:10Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
- MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps [34.24949016811546]
We propose an efficient deep model, called MotionNet, to jointly perform perception and motion prediction from 3D point clouds.
MotionNet takes a sequence of sweeps as input and outputs a bird's eye view (BEV) map, which encodes the object category and motion information in each grid cell; a BEV-encoding sketch follows this entry.
arXiv Detail & Related papers (2020-03-15T04:37:12Z)
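The BEV encoding this summary describes can be sketched simply: each point-cloud sweep is flattened into a 2D occupancy grid, and the sweeps are stacked along a time axis for a spatio-temporal network to consume. Grid extents and resolution below are illustrative, not MotionNet's settings.

```python
# Flatten each 3D sweep into a 2D occupancy grid, then stack sweeps over time
# to form a (T, H, W) spatio-temporal input tensor.
import numpy as np

def points_to_bev(points, x_range=(-32.0, 32.0), y_range=(-32.0, 32.0),
                  res=0.25):
    """points: (N, 3) array of x, y, z; returns an (H, W) occupancy grid."""
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    grid = np.zeros((h, w), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    ok = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
    grid[iy[ok], ix[ok]] = 1.0                 # mark occupied cells
    return grid

rng = np.random.default_rng(0)
sweeps = [rng.uniform(-30, 30, size=(5000, 3)) for _ in range(5)]
bev_sequence = np.stack([points_to_bev(p) for p in sweeps])  # (T, H, W)
print(bev_sequence.shape)   # (5, 256, 256)
```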
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.