Acting and Planning with Hierarchical Operational Models on a Mobile Robot: A Study with RAE+UPOM
- URL: http://arxiv.org/abs/2507.11345v1
- Date: Tue, 15 Jul 2025 14:20:26 GMT
- Title: Acting and Planning with Hierarchical Operational Models on a Mobile Robot: A Study with RAE+UPOM
- Authors: Oscar Lima, Marc Vinci, Sunandita Patra, Sebastian Stock, Joachim Hertzberg, Martin Atzmueller, Malik Ghallab, Dana Nau, Paolo Traverso
- Abstract summary: We present the first physical deployment of an integrated actor-planner system that shares hierarchical operational models for both acting and planning. We implement RAE+UPOM on a mobile manipulator in a real-world deployment for an object collection task.
- Score: 5.758011837296545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robotic task execution faces challenges due to the inconsistency between symbolic planner models and the rich control structures actually running on the robot. In this paper, we present the first physical deployment of an integrated actor-planner system that shares hierarchical operational models for both acting and planning, interleaving the Reactive Acting Engine (RAE) with an anytime UCT-like Monte Carlo planner (UPOM). We implement RAE+UPOM on a mobile manipulator in a real-world deployment for an object collection task. Our experiments demonstrate robust task execution under action failures and sensor noise, and provide empirical insights into the interleaved acting-and-planning decision making process.
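The interleaving described above can be pictured as a loop in which the actor (RAE) refines each task with a hierarchical operational model, asking the planner (UPOM) to select which refinement method to try via simulated Monte Carlo rollouts, and retrying with another method on execution failure. The following is a highly simplified, hypothetical sketch of that loop; all names, signatures, and the toy domain are illustrative assumptions, not the authors' actual implementation (a real UPOM uses UCT-style node selection and anytime depth control):

```python
import random

def upom_choose(task, methods, simulate, rollouts=50):
    """Anytime Monte Carlo method selection (UPOM-like, simplified):
    sample simulated rollouts of the applicable methods and return the
    one with the highest estimated utility."""
    scores = {m: 0.0 for m in methods}
    for _ in range(rollouts):
        m = random.choice(methods)      # a real UPOM uses UCT-style selection
        scores[m] += simulate(task, m)  # sampled outcome of refining task by m
    return max(methods, key=lambda m: scores[m])

def rae(task, methods_for, simulate, execute):
    """Reactively refine `task` (RAE-like, simplified): ask the planner
    for a method, execute it closed-loop, and retry with another
    applicable method if execution fails."""
    untried = list(methods_for(task))
    while untried:
        method = upom_choose(task, untried, simulate)
        if execute(task, method):       # closed-loop execution of commands
            return True
        untried.remove(method)          # Retry: fall back to another method
    return False                        # the task fails if no method succeeds
```

For instance, in a toy object-collection setting with two hypothetical grasp methods where only one can succeed, `rae` would first favor the method with better simulated outcomes and still recover by retrying if the executed method fails.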
Related papers
- OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data Synthesis [70.39500621448383]
Open-world mobile manipulation task remains a challenge due to the need for generalization to open-ended instructions and environments. We propose a novel multi-modal agent architecture that maintains multi-view scene frames and agent states for decision-making and controls the robot by function calling. We highlight our fine-tuned OWMM-VLM as the first dedicated foundation model for mobile manipulators with global scene understanding, robot state tracking, and multi-modal action generation in a unified model.
arXiv Detail & Related papers (2025-06-04T17:57:44Z) - Unlocking Smarter Device Control: Foresighted Planning with a World Model-Driven Code Execution Approach [83.21177515180564]
We propose a framework that prioritizes natural language understanding and structured reasoning to enhance the agent's global understanding of the environment. Our method outperforms previous approaches, particularly achieving a 44.4% relative improvement in task success rate.
arXiv Detail & Related papers (2025-05-22T09:08:47Z) - Beyond Task and Motion Planning: Hierarchical Robot Planning with General-Purpose Policies [32.85839714467011]
We address the challenge of planning with both kinematic skills and closed-loop motor controllers. We propose a novel method that integrates these controllers into motion planning using Composable Interaction Primitives.
arXiv Detail & Related papers (2025-04-24T19:22:50Z) - COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models [49.24666980374751]
COHERENT is a novel LLM-based task planning framework for collaboration of heterogeneous multi-robot systems. A Proposal-Execution-Feedback-Adjustment mechanism is designed to decompose and assign actions for individual robots. The experimental results show that our work surpasses the previous methods by a large margin in terms of success rate and execution efficiency.
arXiv Detail & Related papers (2024-09-23T15:53:41Z) - Grounding Language Models in Autonomous Loco-manipulation Tasks [3.8363685417355557]
We propose a novel framework that learns, selects, and plans behaviors based on tasks in different scenarios.
We leverage the planning and reasoning features of the large language model (LLM), constructing a hierarchical task graph.
Experiments in simulation and real-world using the CENTAURO robot show that the language model based planner can efficiently adapt to new loco-manipulation tasks.
arXiv Detail & Related papers (2024-09-02T15:27:48Z) - Autonomous Behavior Planning For Humanoid Loco-manipulation Through Grounded Language Model [6.9268843428933025]
Large language models (LLMs) have demonstrated powerful planning and reasoning capabilities for comprehension and processing of semantic information.
We propose a novel language-model based framework that enables robots to autonomously plan behaviors and low-level execution under given textual instructions.
arXiv Detail & Related papers (2024-08-15T17:33:32Z) - A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task And Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem that combines discrete task-level actions with feasible continuous motions.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z) - Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - CoPAL: Corrective Planning of Robot Actions with Large Language Models [7.944803163555092]
We propose a system architecture that orchestrates a seamless interplay between cognitive levels, encompassing reasoning, planning, and motion generation. At its core lies a novel replanning strategy that handles physically grounded, logical, and semantic errors in the generated plans.
arXiv Detail & Related papers (2023-10-11T07:39:42Z) - Long-Horizon Planning and Execution with Functional Object-Oriented Networks [79.94575713911189]
We introduce the idea of exploiting object-level knowledge as a functional object-oriented network (FOON) for task planning and execution.
Our approach automatically transforms FOON into PDDL and leverages off-the-shelf planners, action contexts, and robot skills.
We demonstrate our approach on long-horizon tasks in CoppeliaSim and show how learned action contexts can be extended to never-before-seen scenarios.
arXiv Detail & Related papers (2022-07-12T19:29:35Z) - Deliberative Acting, Online Planning and Learning with Hierarchical Operational Models [5.597986898418404]
In AI research, a plan of action has typically used descriptive models of the actions that abstractly specify what might happen as a result of an action, whereas executing the planned actions has required operational models, in which rich computational control structures and closed-loop online decision-making are used.
We implement an integrated acting and planning system in which both planning and acting use the same operational models.
arXiv Detail & Related papers (2020-10-02T14:50:05Z) - Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives [89.34229413345541]
We propose a conditioning scheme that avoids common pitfalls by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
arXiv Detail & Related papers (2020-03-19T15:04:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.