A Task and Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks
- URL: http://arxiv.org/abs/2503.07700v1
- Date: Mon, 10 Mar 2025 17:28:22 GMT
- Title: A Task and Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks
- Authors: Hossein Karami, Antony Thomas, Fulvio Mastrogiovanni
- Abstract summary: We present an approach for integrated task and motion planning based on an AND/OR graph network. We leverage it to implement different classes of task and motion planning (TAMP) problems. The approach is evaluated and validated both in simulation and with a real dual-arm robot manipulator, that is, Baxter from Rethink Robotics.
- Score: 3.635602838654497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present an approach for integrated task and motion planning based on an AND/OR graph network, which is used to represent task-level states and actions, and we leverage it to implement different classes of task and motion planning (TAMP) problems. Several problems that fall under task and motion planning do not have a predetermined number of sub-tasks to achieve a goal. For example, while retrieving a target object from a cluttered workspace, the number of object re-arrangements required before the target can finally be grasped cannot, in principle, be known ahead of time. To address this challenge, and in contrast to traditional planners, including those based on AND/OR graphs, we grow the AND/OR graph at run-time by progressively adding sub-graphs until grasping the target object becomes feasible, which yields a network of AND/OR graphs. The approach is extended to multi-robot task and motion planning: (i) it allows us to perform task allocation while coordinating the activity of a given number of robots, and (ii) it can handle multi-robot tasks involving an a priori unknown number of sub-tasks. The approach is evaluated and validated both in simulation and with a real dual-arm robot manipulator, namely Baxter from Rethink Robotics. In particular, for single-robot task and motion planning, we validated our approach in three different TAMP domains. Furthermore, we use three different robots in simulation, namely Baxter and Franka Emika Panda manipulators and a PR2 robot. Experiments show that our approach scales readily to scenarios with many objects and robots, and is capable of handling different classes of TAMP problems.
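To make the run-time growth concrete, below is a minimal Python sketch of the loop the abstract describes: the task-level graph network is extended with one re-arrangement sub-graph at a time until a motion-level feasibility check admits the target grasp. All names (grasp_feasible, make_retrieval_subgraph, plan_retrieval) and the feasibility check are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of iteratively deepened AND/OR graph growth.
# All names are hypothetical, for illustration only.

def grasp_feasible(target, clutter):
    # Stand-in for a motion-level feasibility query (e.g., a
    # collision-checked grasp pose): feasible once nothing blocks the target.
    return len(clutter) == 0

def make_retrieval_subgraph(obstacle):
    # An AND node: both children must succeed to clear the obstacle.
    return {"type": "AND",
            "children": [f"pick({obstacle})", f"place_aside({obstacle})"]}

def plan_retrieval(target, clutter, max_depth=10):
    """Grow a network of AND/OR graphs until grasping `target` is feasible."""
    network, remaining = [], list(clutter)
    for _ in range(max_depth):
        if grasp_feasible(target, remaining):
            # Terminal sub-graph: the grasp itself is now feasible.
            network.append({"type": "AND", "children": [f"grasp({target})"]})
            return network
        # Grasp infeasible: append one re-arrangement sub-graph for a
        # blocking object, then re-check feasibility at the next depth.
        network.append(make_retrieval_subgraph(remaining.pop(0)))
    raise RuntimeError("maximum depth reached without a feasible grasp")

if __name__ == "__main__":
    for i, sg in enumerate(plan_retrieval("mug", clutter=["box", "can"])):
        print(f"sub-graph {i}: {sg['type']} -> {sg['children']}")
```

In the actual framework, the feasibility check would be a motion-planner query (e.g., collision-checked inverse kinematics), and each sub-graph would itself be a full AND/OR graph over pick-and-place states rather than a flat action list.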
Related papers
- REMAC: Self-Reflective and Self-Evolving Multi-Agent Collaboration for Long-Horizon Robot Manipulation [57.628771707989166]
We propose an adaptive multi-agent planning framework, termed REMAC, that enables efficient, scene-agnostic multi-robot long-horizon task planning and execution.
REMAC incorporates two key modules: a self-reflection module that performs pre-condition and post-condition checks in the loop to evaluate progress and refine plans, and a self-evolvement module that dynamically adapts plans based on scene-specific reasoning.
arXiv Detail & Related papers (2025-03-28T03:51:40Z)
- COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models [49.24666980374751]
COHERENT is a novel LLM-based task planning framework for collaboration of heterogeneous multi-robot systems.
A Proposal-Execution-Feedback-Adjustment mechanism is designed to decompose and assign actions for individual robots.
The experimental results show that our work surpasses the previous methods by a large margin in terms of success rate and execution efficiency.
arXiv Detail & Related papers (2024-09-23T15:53:41Z)
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task and Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem whose actions must also be realized as feasible robot motions.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation [62.58480650443393]
SAM-E leverages Segment Anything (SAM), a vision foundation model, for generalizable scene understanding and sequence imitation.
We develop a novel multi-channel heatmap that enables the prediction of the action sequence in a single pass.
arXiv Detail & Related papers (2024-05-30T00:32:51Z)
- Probabilistically Correct Language-based Multi-Robot Planning using Conformal Prediction [11.614036749291216]
We introduce a new distributed multi-robot planner called S-ATLAS for Safe plAnning for Teams of Language-instructed AgentS.
We show that the proposed planner can achieve user-specified task success rates, assuming successful plan execution.
We provide comparative experiments against related works showing that our method is significantly more computationally efficient and achieves lower help rates.
arXiv Detail & Related papers (2024-02-23T15:02:44Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- Learning to Reason over Scene Graphs: A Case Study of Finetuning GPT-2 into a Robot Language Model for Grounded Task Planning [45.51792981370957]
We investigate the applicability of a smaller class of large language models (LLMs) in robotic task planning by learning to decompose tasks into subgoal specifications for a planner to execute sequentially.
Our method grounds the input of the LLM on the domain that is represented as a scene graph, enabling it to translate human requests into executable robot plans.
Our findings suggest that the knowledge stored in an LLM can be effectively grounded to perform long-horizon task planning, demonstrating the promising potential for the future application of neuro-symbolic planning methods in robotics.
arXiv Detail & Related papers (2023-05-12T18:14:32Z)
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure enabling plan generation that functions across situated environments.
arXiv Detail & Related papers (2022-09-22T20:29:49Z)
- Long-Horizon Planning and Execution with Functional Object-Oriented Networks [79.94575713911189]
We introduce the idea of exploiting object-level knowledge, in the form of a functional object-oriented network (FOON), for task planning and execution.
Our approach automatically transforms FOON into PDDL and leverages off-the-shelf planners, action contexts, and robot skills.
We demonstrate our approach on long-horizon tasks in CoppeliaSim and show how learned action contexts can be extended to never-before-seen scenarios.
arXiv Detail & Related papers (2022-07-12T19:29:35Z)
- Learning to Search in Task and Motion Planning with Streams [20.003445874753233]
Task and motion planning problems in robotics combine symbolic planning over discrete task variables with motion optimization over continuous state and action variables.
We propose a geometrically informed symbolic planner that expands the set of objects and facts in a best-first manner.
We apply our algorithm on a 7DOF robotic arm in block-stacking manipulation tasks.
arXiv Detail & Related papers (2021-11-25T15:58:31Z)
- A Task-Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks [1.3535770763481902]
We present an approach for Task-Motion Planning (TMP) using Iteratively Deepened AND/OR Graph Networks (TMP-IDAN).
TMP-IDAN uses a novel abstraction based on an AND/OR graph network to compactly represent the task-level states and actions.
We validate our approach and evaluate its capabilities using a Baxter robot and a state-of-the-art robotics simulator.
arXiv Detail & Related papers (2021-04-04T07:06:52Z)
- Learning Symbolic Operators for Task and Motion Planning [29.639902380586253]
Integrated task and motion planners (TAMP) handle the complex interaction between motion-level decisions and task-level plan feasibility.
TAMP approaches rely on domain-specific symbolic operators to guide the task-level search, making planning efficient.
We propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.
arXiv Detail & Related papers (2021-02-28T19:08:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.