Motion Policy Networks
- URL: http://arxiv.org/abs/2210.12209v1
- Date: Fri, 21 Oct 2022 19:37:09 GMT
- Title: Motion Policy Networks
- Authors: Adam Fishman, Adithyavairavan Murali, Clemens Eppner, Bryan Peele, Byron
Boots, Dieter Fox
- Abstract summary: We present an end-to-end neural model called Motion Policy Networks (M$\pi$Nets) to generate collision-free, smooth motion from a single depth camera observation.
Our experiments show that M$\pi$Nets are significantly faster than global planners while exhibiting the reactivity needed to deal with dynamic scenes.
- Score: 61.87789591369106
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Collision-free motion generation in unknown environments is a core building
block for robot manipulation. Generating such motions is challenging due to
multiple objectives; not only should the solutions be optimal, but the motion
generator itself must also be fast enough for real-time performance and reliable
enough for practical deployment. A wide variety of methods have been proposed
ranging from local controllers to global planners, often being combined to
offset their shortcomings. We present an end-to-end neural model called Motion
Policy Networks (M$\pi$Nets) to generate collision-free, smooth motion from
just a single depth camera observation. M$\pi$Nets are trained on over 3
million motion planning problems in over 500,000 environments. Our experiments
show that M$\pi$Nets are significantly faster than global planners while
exhibiting the reactivity needed to deal with dynamic scenes. They are 46%
better than prior neural planners and more robust than local control policies.
Despite being only trained in simulation, M$\pi$Nets transfer well to the real
robot with noisy partial point clouds. Code and data are publicly available at
https://mpinets.github.io.
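Conceptually, a policy of this kind runs in a closed loop: at each control step it consumes the latest depth observation (as a point cloud) together with the current joint configuration and predicts a small joint-space step toward the goal. The sketch below illustrates only that rollout pattern; the PolicyNet class, the point-cloud encoder, and the stopping criterion are hypothetical placeholders, not the released M$\pi$Nets code from mpinets.github.io.

```python
# Minimal sketch of a closed-loop neural motion policy rollout.
# All names (PolicyNet, encode_point_cloud, ...) are hypothetical placeholders,
# NOT the released M-pi-Nets implementation.
import numpy as np
import torch


class PolicyNet(torch.nn.Module):
    """Toy stand-in: maps a point-cloud embedding + joint state to a joint delta."""

    def __init__(self, embed_dim: int = 256, dof: int = 7):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(embed_dim + dof, 512),
            torch.nn.ReLU(),
            torch.nn.Linear(512, dof),
        )

    def forward(self, cloud_embedding: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([cloud_embedding, q], dim=-1))


def encode_point_cloud(points: np.ndarray, embed_dim: int = 256) -> torch.Tensor:
    """Placeholder encoder; a real system would use a learned point-cloud backbone."""
    # Crude global feature: simple per-axis statistics padded to the embedding size.
    feats = np.concatenate([points.mean(axis=0), points.std(axis=0)])
    out = np.zeros(embed_dim, dtype=np.float32)
    out[: feats.shape[0]] = feats
    return torch.from_numpy(out)


def rollout(policy, q0, get_point_cloud, at_goal, max_steps: int = 300):
    """Apply the policy step by step until the goal test passes or the budget runs out."""
    q = torch.tensor(q0, dtype=torch.float32)
    trajectory = [q.numpy().copy()]
    for _ in range(max_steps):
        cloud = get_point_cloud()                 # single depth-camera observation
        delta_q = policy(encode_point_cloud(cloud), q)
        q = q + delta_q.detach()                  # small joint-space displacement
        trajectory.append(q.numpy().copy())
        if at_goal(q.numpy()):
            break
    return trajectory
```

Because each step re-reads the point cloud, the same loop reacts to moving obstacles, which is what distinguishes this style of learned policy from a one-shot global planner.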
Related papers
- Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning [72.86540018081531]
Unlabeled motion planning involves assigning a set of robots to target locations while ensuring collision avoidance.
This problem forms an essential building block for multi-robot systems in applications such as exploration, surveillance, and transportation.
We address this problem in a decentralized setting where each robot knows only the positions of its $k$-nearest robots and $k$-nearest targets.
arXiv Detail & Related papers (2024-09-29T23:57:25Z)
- Neural MP: A Generalist Neural Motion Planner [75.82675575009077]
We apply data-driven learning at scale to the problem of motion planning.
Our approach builds a large number of complex scenes in simulation, collects expert data from a motion planner, then distills it into a reactive generalist policy.
We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse environments.
arXiv Detail & Related papers (2024-09-09T17:59:45Z)
- CabiNet: Scaling Neural Collision Detection for Object Rearrangement with Procedural Scene Generation [54.68738348071891]
We first generate over 650K cluttered scenes - orders of magnitude more than prior work - in diverse everyday environments.
We render synthetic partial point clouds from this data and use it to train our CabiNet model architecture.
CabiNet is a collision model that accepts object and scene point clouds, captured from a single-view depth observation.
arXiv Detail & Related papers (2023-04-18T21:09:55Z)
- Robot Motion Planning as Video Prediction: A Spatio-Temporal Neural Network-based Motion Planner [16.26965535164238]
Neural network (NN)-based methods have emerged as an attractive approach for robot motion planning due to strong learning capabilities of NN models and their inherently high parallelism.
We propose Neural-Net, an end-to-end learning framework that fully extracts and leverages important spatio-temporal information to form an efficient neural motion planner.
arXiv Detail & Related papers (2022-08-24T03:45:27Z)
- N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments [9.079709086741987]
We introduce Neural Navigation for Mobile Manipulation (N$^2$M$^2$), which extends this decomposition to complex obstacle environments.
The resulting approach can perform unseen, long-horizon tasks in unexplored environments while instantly reacting to dynamic obstacles and environmental changes.
We demonstrate the capabilities of our proposed approach in extensive simulation and real-world experiments on multiple kinematically diverse mobile manipulators.
arXiv Detail & Related papers (2022-06-17T12:52:41Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- An advantage actor-critic algorithm for robotic motion planning in dense and dynamic scenarios [0.8594140167290099]
In this paper, we modify the existing advantage actor-critic algorithm and adapt it to complex motion planning.
It achieves a higher success rate in motion planning with less processing time for the robot to reach its goal.
arXiv Detail & Related papers (2021-02-05T12:30:23Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating great potential for transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps [34.24949016811546]
We propose an efficient deep model, called MotionNet, to jointly perform perception and motion prediction from 3D point clouds.
MotionNet takes a sequence of sweeps as input and outputs a bird's eye view (BEV) map, which encodes the object category and motion information in each grid cell.
arXiv Detail & Related papers (2020-03-15T04:37:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.