Learning to Search in Task and Motion Planning with Streams
- URL: http://arxiv.org/abs/2111.13144v6
- Date: Wed, 23 Aug 2023 11:56:26 GMT
- Title: Learning to Search in Task and Motion Planning with Streams
- Authors: Mohamed Khodeir and Ben Agro and Florian Shkurti
- Abstract summary: Task and motion planning problems in robotics combine symbolic planning over discrete task variables with motion optimization over continuous state and action variables.
We propose a geometrically informed symbolic planner that expands the set of objects and facts in a best-first manner.
We apply our algorithm on a 7DOF robotic arm in block-stacking manipulation tasks.
- Score: 20.003445874753233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task and motion planning problems in robotics combine symbolic planning over
discrete task variables with motion optimization over continuous state and
action variables. Recent works such as PDDLStream have focused on optimistic
planning with an incrementally growing set of objects until a feasible
trajectory is found. However, this set is exhaustively expanded in a
breadth-first manner, regardless of the logical and geometric structure of the
problem at hand, which makes long-horizon reasoning with large numbers of
objects prohibitively time-consuming. To address this issue, we propose a
geometrically informed symbolic planner that expands the set of objects and
facts in a best-first manner, prioritized by a Graph Neural Network that is
learned from prior search computations. We evaluate our approach on a diverse
set of problems and demonstrate an improved ability to plan in difficult
scenarios. We also apply our algorithm on a 7DOF robotic arm in block-stacking
manipulation tasks.
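The core idea of the abstract, best-first expansion of objects and facts under a learned priority, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: `score_fn` stands in for the learned GNN prioritizer, and `expand_fn` and `is_feasible` are hypothetical placeholders for stream instantiation and the feasibility check of a PDDLStream-style planner.

```python
import heapq
import itertools

def best_first_expand(initial_facts, score_fn, expand_fn, is_feasible, max_expansions=1000):
    """Expand optimistic objects/facts in best-first order.

    score_fn stands in for the paper's learned GNN prioritizer;
    expand_fn and is_feasible are placeholders for stream
    instantiation and the feasibility check of a PDDLStream-style planner.
    """
    counter = itertools.count()  # tie-breaker so heapq never compares facts directly
    frontier = [(-score_fn(f), next(counter), f) for f in initial_facts]
    heapq.heapify(frontier)
    known = set(initial_facts)
    while frontier and max_expansions > 0:
        _, _, fact = heapq.heappop(frontier)
        max_expansions -= 1
        if is_feasible(known):
            return known
        for new_fact in expand_fn(fact):
            if new_fact not in known:
                known.add(new_fact)
                heapq.heappush(frontier, (-score_fn(new_fact), next(counter), new_fact))
    return None  # budget exhausted without a feasible set
```

The contrast with the breadth-first expansion criticized in the abstract is the priority queue: instead of instantiating every optimistic fact level by level, only the facts the scorer ranks highest are expanded first.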
Related papers
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task And Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- Optimal Integrated Task and Path Planning and Its Application to Multi-Robot Pickup and Delivery [10.530860023128406]
We propose a generic multi-robot planning mechanism that combines an optimal task planner and an optimal path planner.
The Integrated planner, through the interaction of the task planner and the path planner, produces optimal collision-free trajectories for the robots.
arXiv Detail & Related papers (2024-03-02T17:48:40Z)
- Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- Optimal task and motion planning and execution for human-robot multi-agent systems in dynamic environments [54.39292848359306]
We propose a combined task and motion planning approach to optimize sequencing, assignment, and execution of tasks.
The framework relies on decoupling tasks and actions, where an action is one possible geometric realization of a symbolic task.
We demonstrate the approach's effectiveness in a collaborative manufacturing scenario in which a robotic arm and a human worker assemble a mosaic.
arXiv Detail & Related papers (2023-03-27T01:50:45Z)
- Sequential Manipulation Planning on Scene Graph [90.28117916077073]
We devise a 3D scene graph representation, contact graph+ (cg+), for efficient sequential task planning.
Goal configurations, naturally specified on contact graphs, can be produced by a genetic algorithm with an optimization method.
A task plan is then generated by computing the Graph Editing Distance (GED) between the initial contact graphs and the goal configurations, which yields graph edit operations corresponding to possible robot actions.
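The mapping from graph edits to candidate actions can be illustrated with a toy diff over labeled contact graphs. This is a hedged simplification, not the paper's GED computation: it assumes node names are shared between the two graphs and simply enumerates the set differences.

```python
def graph_edit_ops(init_nodes, init_edges, goal_nodes, goal_edges):
    """Enumerate edit operations transforming one labeled contact graph
    into another; each operation maps to a candidate robot action
    (e.g. an edge change "on(a, table)" -> "on(a, b)" suggests a
    pick-and-place). A simplification of GED: node names are assumed
    shared, so no node matching is performed.
    """
    ops = []
    for n in goal_nodes.keys() - init_nodes.keys():
        ops.append(("add_node", n))
    for n in init_nodes.keys() - goal_nodes.keys():
        ops.append(("remove_node", n))
    for n in init_nodes.keys() & goal_nodes.keys():
        if init_nodes[n] != goal_nodes[n]:
            ops.append(("relabel", n, goal_nodes[n]))
    for e in goal_edges - init_edges:
        ops.append(("add_edge", *e))
    for e in init_edges - goal_edges:
        ops.append(("remove_edge", *e))
    return ops
```

For example, moving block `a` from the table onto block `b` shows up as one `add_edge` and one `remove_edge` operation, which a planner can ground into a single manipulation action.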
arXiv Detail & Related papers (2022-07-10T02:01:33Z)
- Task Scoping: Generating Task-Specific Abstractions for Planning [19.411900372400183]
Planning to solve any specific task using an open-scope world model is computationally intractable.
We propose task scoping: a method that exploits knowledge of the initial condition, goal condition, and transition-dynamics structure of a task.
We prove that task scoping never deletes relevant factors or actions, characterize its computational complexity, and characterize the planning problems for which it is especially useful.
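The "never deletes relevant factors or actions" guarantee can be illustrated with a backward relevance analysis. The sketch below is an assumption-laden simplification, not the paper's algorithm: it encodes each action as (precondition variables, effect variables) in a STRIPS-like form and keeps pulling in precondition variables until a fixpoint.

```python
def scope_task(actions, goal_vars):
    """Backward relevance analysis: keep only actions whose effects touch
    variables reachable from the goal, then pull in their precondition
    variables until a fixpoint. `actions` maps name -> (precond_vars,
    effect_vars); this STRIPS-like encoding is an illustrative
    simplification of the paper's factored transition dynamics.
    """
    relevant = set(goal_vars)
    kept = set()
    changed = True
    while changed:
        changed = False
        for name, (pre, eff) in actions.items():
            if name not in kept and relevant & set(eff):
                kept.add(name)          # action affects a relevant variable
                relevant |= set(pre)    # its preconditions become relevant too
                changed = True
    return kept, relevant
```

Because variables are only ever added to `relevant`, anything reachable backward from the goal is retained, which is the over-approximation that makes pruning safe.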
arXiv Detail & Related papers (2020-10-17T21:19:25Z)
- Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks [28.488201307961624]
Real-world planning problems often involve hundreds or even thousands of objects.
We propose a graph neural network architecture for predicting object importance in a single inference pass.
Our approach treats the planner and transition model as black boxes, and can be used with any off-the-shelf planner.
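Treating the planner as a black box while pruning by predicted importance can be sketched as a simple retry loop. This is an illustrative guess at the usage pattern, not the paper's code: `importance_fn` stands in for the GNN's single-pass per-object scores, and `plan_fn` for any off-the-shelf planner.

```python
def plan_with_pruning(objects, goal_objects, importance_fn, plan_fn, thresholds=(0.9, 0.5, 0.0)):
    """Try planning on progressively larger object subsets.

    importance_fn stands in for the learned per-object importance scores
    (computed in a single inference pass), and plan_fn for an arbitrary
    off-the-shelf planner treated as a black box. Goal objects are always
    retained so the reduced problem can still mention the goal.
    """
    for t in thresholds:
        subset = {o for o in objects if o in goal_objects or importance_fn(o) >= t}
        plan = plan_fn(subset)
        if plan is not None:
            return plan  # succeeded on a small, cheap instance
    return None  # even the full instance (threshold 0.0) failed
```

Lowering the threshold on failure gives a soundness fallback: if the importance predictions are wrong, the loop eventually hands the planner the full object set.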
arXiv Detail & Related papers (2020-09-11T18:55:08Z)
- Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image [43.05971157389743]
We propose a deep convolutional recurrent neural network that predicts action sequences for task and motion planning (TAMP) from an initial scene image.
A key aspect is that our method generalizes to scenes with many and varying numbers of objects, despite being trained on only two objects at a time.
arXiv Detail & Related papers (2020-06-09T16:52:02Z)
- Modeling Long-horizon Tasks as Sequential Interaction Landscapes [75.5824586200507]
We present a deep learning network that learns dependencies and transitions across subtasks solely from a set of demonstration videos.
We show that these symbols can be learned and predicted directly from image observations.
We evaluate our framework on two long horizon tasks: (1) block stacking of puzzle pieces being executed by humans, and (2) a robot manipulation task involving pick and place of objects and sliding a cabinet door with a 7-DoF robot arm.
arXiv Detail & Related papers (2020-06-08T18:07:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.