Neural MP: A Generalist Neural Motion Planner
- URL: http://arxiv.org/abs/2409.05864v1
- Date: Mon, 9 Sep 2024 17:59:45 GMT
- Title: Neural MP: A Generalist Neural Motion Planner
- Authors: Murtaza Dalal, Jiahui Yang, Russell Mendonca, Youssef Khaky, Ruslan Salakhutdinov, Deepak Pathak
- Abstract summary: We seek to do the same by applying data-driven learning at scale to the problem of motion planning.
Our approach builds a large number of complex scenes in simulation, collects expert data from a motion planner, then distills it into a reactive generalist policy.
We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse environments.
- Score: 75.82675575009077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current paradigm for motion planning generates solutions from scratch for every new problem, which consumes significant time and computational resources. For complex, cluttered scenes, motion planning approaches can often take minutes to produce a solution, while humans are able to accurately and safely reach any goal in seconds by leveraging their prior experience. We seek to do the same by applying data-driven learning at scale to the problem of motion planning. Our approach builds a large number of complex scenes in simulation, collects expert data from a motion planner, then distills it into a reactive generalist policy. We then combine this with lightweight optimization to obtain a safe path for real-world deployment. We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse real-world environments with randomized poses, scenes, and obstacles, demonstrating improvements of 23%, 17%, and 79% in motion planning success rate over state-of-the-art sampling-based, optimization-based, and learning-based planning methods, respectively. Video results are available at mihdalal.github.io/neuralmotionplanner
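As a rough illustration of the recipe the abstract describes (collect expert motion-planner data in simulation, distill it into a reactive policy, then apply lightweight test-time filtering), here is a minimal Python/PyTorch sketch. The network architecture, dataset interface, collision checker, and transition model are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the distill-then-optimize recipe from the abstract.
# Network sizes, the dataset interface, the collision checker, and the
# transition model are placeholders, not the authors' code.
import torch
import torch.nn as nn

class ReactivePolicy(nn.Module):
    """Maps an observation (scene encoding + goal) to a joint-space action."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def distill(policy, expert_data, epochs=10, lr=1e-3):
    """Behavior-clone (obs, action) pairs collected from an expert planner."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, expert_action in expert_data:
            loss = nn.functional.mse_loss(policy(obs), expert_action)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy

def safe_rollout(policy, obs, in_collision, step, horizon=100):
    """Lightweight test-time filtering: stop before any flagged action."""
    path = []
    with torch.no_grad():
        for _ in range(horizon):
            action = policy(obs)
            if in_collision(obs, action):  # placeholder safety check
                break
            path.append(action)
            obs = step(obs, action)  # placeholder transition model
    return path
```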
Related papers
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task and Motion Planning (TAMP) combines symbolic task planning with continuous motion planning to solve automated planning problems.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- PlaMo: Plan and Move in Rich 3D Physical Environments [68.75982381673869]
We present PlaMo, a scene-aware path planner and a robust physics-based controller.
The planner produces a sequence of motion paths, considering the various limitations the scene imposes on the motion.
Our control policy generates rich and realistic physical motion adhering to the plan.
arXiv Detail & Related papers (2024-06-26T10:41:07Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Predicting Motion Plans for Articulating Everyday Objects [16.0496453009462]
We develop a motion simulator that simulates articulated objects placed in real scenes.
We then introduce SeqIK+$\theta$, a fast and flexible representation for motion plans.
We learn models that use SeqIK+$\theta$ to quickly predict motion plans for articulating novel objects at test time.
arXiv Detail & Related papers (2023-03-02T18:45:02Z)
- Sequence-Based Plan Feasibility Prediction for Efficient Task and Motion Planning [36.300564378022315]
We present a learning-enabled Task and Motion Planning (TAMP) algorithm for solving mobile manipulation problems in environments with many articulated and movable obstacles.
The core of our algorithm is PIGINet, a novel Transformer-based learning method that takes in a task plan, the goal, and the initial state, and predicts the probability of finding motion trajectories associated with the task plan.
arXiv Detail & Related papers (2022-11-03T04:12:04Z)
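The PIGINet description above is concrete enough to sketch: a Transformer encoder scores a tokenized (task plan, goal, initial state) tuple with the probability that feasible motion exists. In the minimal version below, the token embedding, pooling, and all sizes are assumptions, not the paper's architecture:

```python
# Sketch of a PIGINet-style feasibility predictor: a Transformer encoder
# scores embedded (task plan, goal, initial state) tokens with the
# probability that feasible motion trajectories exist. Shapes, embedding,
# and pooling are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class PlanFeasibilityPredictor(nn.Module):
    def __init__(self, feat_dim=128, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, feat_dim) -- embeddings of plan steps,
        # the goal, and the initial state, concatenated along seq.
        h = self.encoder(tokens)                    # (batch, seq, feat_dim)
        return torch.sigmoid(self.head(h.mean(1)))  # (batch, 1) probability
```

Candidate task plans can then be ranked by this score so the expensive motion planner is invoked on the most promising plan first.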
- Learning-based Motion Planning in Dynamic Environments Using GNNs and Temporal Encoding [15.58317292680615]
We propose a GNN-based approach that uses temporal encoding and imitation learning with data aggregation for learning both the embeddings and the edge prioritization policies.
Experiments show that the proposed methods can significantly accelerate online planning over state-of-the-art complete dynamic planning algorithms.
arXiv Detail & Related papers (2022-10-16T01:27:16Z)
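The "imitation learning with data aggregation" in the entry above is, in spirit, a DAgger-style loop. A schematic version follows, with the policy (e.g., a GNN with temporal encoding), expert planner, environment, and supervised fitting routine all left as caller-supplied stand-ins:

```python
# Schematic DAgger-style loop for "imitation learning with data
# aggregation". The policy, expert planner, environment, and `fit`
# routine are caller-supplied stand-ins, not the paper's components.
def dagger(policy, expert, env, fit, n_iters=10, episodes_per_iter=20):
    dataset = []
    for _ in range(n_iters):
        for _ in range(episodes_per_iter):
            obs, done = env.reset(), False
            while not done:
                # Label states the *learner* visits with the expert's action,
                # so the dataset covers the policy's own state distribution.
                dataset.append((obs, expert(obs)))
                obs, done = env.step(policy(obs))
        fit(policy, dataset)  # supervised regression on aggregated data
    return policy
```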
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Learning to Search in Task and Motion Planning with Streams [20.003445874753233]
Task and motion planning problems in robotics combine symbolic planning over discrete task variables with motion optimization over continuous state and action variables.
We propose a geometrically informed symbolic planner that expands the set of objects and facts in a best-first manner.
We apply our algorithm on a 7DOF robotic arm in block-stacking manipulation tasks.
arXiv Detail & Related papers (2021-11-25T15:58:31Z)
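The best-first expansion of objects and facts mentioned above follows a standard priority-queue pattern. A minimal sketch, where the scoring function (standing in for the geometric guidance) and the successor generator are hypothetical placeholders:

```python
# Minimal best-first expansion over (objects, facts) search states.
# `score` (the geometric guidance) and `expand` (which proposes new
# objects/facts) are hypothetical stand-ins, not the paper's planner.
import heapq
import itertools

def best_first(initial_state, score, expand, is_goal, max_expansions=1000):
    counter = itertools.count()  # tie-breaker for equal scores
    frontier = [(score(initial_state), next(counter), initial_state)]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, state = heapq.heappop(frontier)  # lowest score first
        if is_goal(state):
            return state
        for succ in expand(state):  # add candidate objects/facts
            heapq.heappush(frontier, (score(succ), next(counter), succ))
    return None  # no feasible plan found within the budget
```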
- Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation [26.47544415550067]
We propose to distill a state-based motion planner augmented policy to a visual control policy.
We evaluate our method on three manipulation tasks in obstructed environments.
Our framework is highly sample-efficient and outperforms the state-of-the-art algorithms.
arXiv Detail & Related papers (2021-11-11T18:52:00Z)
- Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.