EmbodiedAgent: A Scalable Hierarchical Approach to Overcome Practical Challenge in Multi-Robot Control
- URL: http://arxiv.org/abs/2504.10030v1
- Date: Mon, 14 Apr 2025 09:33:42 GMT
- Title: EmbodiedAgent: A Scalable Hierarchical Approach to Overcome Practical Challenge in Multi-Robot Control
- Authors: Hanwen Wan, Yifei Chen, Zeyu Wei, Dongrui Li, Zexin Lin, Donghao Wu, Jiu Cheng, Yuxiang Zhang, Xiaoqiang Ji
- Abstract summary: EmbodiedAgent is a hierarchical framework for heterogeneous multi-robot control. Our approach integrates a next-action prediction paradigm with a structured memory system to decompose tasks into executable robot skills. We present MultiPlan+, a dataset of more than 18,000 annotated planning instances spanning 100 scenarios.
- Score: 4.163413782205929
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces EmbodiedAgent, a hierarchical framework for heterogeneous multi-robot control. EmbodiedAgent addresses a critical limitation of existing approaches: hallucination, in which planners commit to impractical tasks. Our approach integrates a next-action prediction paradigm with a structured memory system to decompose tasks into executable robot skills while dynamically validating actions against environmental constraints. We present MultiPlan+, a dataset of more than 18,000 annotated planning instances spanning 100 scenarios, including a subset of impractical cases to mitigate hallucination. To evaluate performance, we propose the Robot Planning Assessment Schema (RPAS), combining automated metrics with LLM-aided expert grading. Experiments demonstrate EmbodiedAgent's superiority over state-of-the-art models, achieving a 71.85% RPAS score. Real-world validation in an office service task highlights its ability to coordinate heterogeneous robots for long-horizon objectives.
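The control loop the abstract describes (predict the next action, validate it against environmental constraints, commit it to structured memory) can be pictured with the minimal Python sketch below. Every name here, from `Action` and `Memory` to `predict_next_action`, is a hypothetical stand-in for illustration, not the paper's actual interface.

```python
# Illustrative sketch of next-action prediction with constraint validation,
# in the spirit of the EmbodiedAgent abstract. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Action:
    robot: str   # which robot executes the skill
    skill: str   # atomic skill name, e.g. "navigate" or "grasp"
    target: str  # object or location the skill acts on

@dataclass
class Memory:
    """Structured memory: committed actions plus known environment state."""
    history: list = field(default_factory=list)
    known_objects: set = field(default_factory=set)

def validate(action: Action, memory: Memory) -> bool:
    """Reject hallucinated actions that reference objects the environment
    does not contain (a stand-in for richer feasibility checks)."""
    return action.target in memory.known_objects

def run_task(task: str, memory: Memory, predict_next_action, max_steps: int = 50):
    """Decompose `task` into skills one prediction at a time, validating
    each candidate against the environment before committing it."""
    plan = []
    for _ in range(max_steps):
        action = predict_next_action(task, memory)  # e.g. an LLM call
        if action is None:                          # planner signals completion
            break
        if validate(action, memory):
            memory.history.append(action)
            plan.append(action)
        # An invalid action is dropped; the next prediction can be
        # conditioned on that failure (not modeled in this sketch).
    return plan
```

The point of the sketch is the ordering: validation happens before an action enters memory, so downstream predictions never condition on an infeasible step.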
Related papers
- OmniEAR: Benchmarking Agent Reasoning in Embodied Tasks [52.87238755666243]
We present OmniEAR, a framework for evaluating how language models reason about physical interactions, tool usage, and multi-agent coordination in embodied tasks. We model continuous physical properties and complex spatial relationships across 1,500 scenarios spanning household and industrial domains. Our systematic evaluation reveals severe performance degradation when models must reason from constraints.
arXiv Detail & Related papers (2025-08-07T17:54:15Z)
- Automated Generation of Diverse Courses of Actions for Multi-Agent Operations using Binary Optimization and Graph Learning [7.491865419760499]
This paper presents a new theoretical formulation and computational framework to generate diverse pools of courses of action (COAs) for operations with soft variations in agent-task compatibility. Tests of the COA generation process in a simulated environment demonstrate a significant performance gain over a random-walk baseline, a small optimality gap in task sequencing, and an execution time of about 50 minutes to plan up to 20 COAs for operations with 5 agents and 100 tasks.
arXiv Detail & Related papers (2025-06-24T21:58:30Z)
- What Limits Virtual Agent Application? OmniBench: A Scalable Multi-Dimensional Benchmark for Essential Virtual Agent Capabilities [56.646832992178105]
We introduce OmniBench, a cross-platform, graph-based benchmark with an automated pipeline for synthesizing tasks of controllable complexity. We present OmniEval, a multidimensional evaluation framework that includes subtask-level evaluation, graph-based metrics, and comprehensive tests across 10 capabilities. Our dataset contains 36k graph-structured tasks across 20 scenarios, achieving a 91% human acceptance rate.
arXiv Detail & Related papers (2025-06-10T15:59:38Z)
- From Virtual Agents to Robot Teams: A Multi-Robot Framework Evaluation in High-Stakes Healthcare Context [2.016235597066821]
Current frameworks treat agents as conceptual task executors rather than physically embodied entities. We propose three design guidelines emphasizing process transparency, proactive failure recovery, and contextual grounding. Our work informs the development of more resilient and robust multi-agent robotic systems.
arXiv Detail & Related papers (2025-06-04T04:05:38Z)
- REMAC: Self-Reflective and Self-Evolving Multi-Agent Collaboration for Long-Horizon Robot Manipulation [57.628771707989166]
We propose an adaptive multi-agent planning framework, termed REMAC, that enables efficient, scene-agnostic multi-robot long-horizon task planning and execution. REMAC incorporates two key modules: a self-reflection module that performs pre-condition and post-condition checks in the loop to evaluate progress and refine plans, and a self-evolvement module that dynamically adapts plans based on scene-specific reasoning.
arXiv Detail & Related papers (2025-03-28T03:51:40Z)
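A rough way to picture REMAC's two modules is a check-execute-revise cycle: pre-condition checks before each step, post-condition checks after it, and replanning on either failure. The sketch below is an assumption-laden illustration; `check_pre`, `check_post`, `revise`, and `step.execute` are hypothetical interfaces, not the paper's code.

```python
# Hypothetical self-reflective execution loop in the spirit of REMAC.
def execute_with_reflection(plan, state, check_pre, check_post, revise,
                            max_revisions=10):
    """plan: list of steps; check_pre/check_post: state predicates;
    revise: replanner invoked on failure (e.g. an LLM call)."""
    revisions = 0
    while plan and revisions <= max_revisions:
        step = plan[0]
        if not check_pre(step, state):        # self-reflection: pre-check
            plan = revise(plan, state, step)  # self-evolvement: replan
            revisions += 1
            continue
        state = step.execute(state)
        if not check_post(step, state):       # self-reflection: post-check
            plan = revise(plan, state, step)
            revisions += 1
            continue
        plan = plan[1:]                       # step confirmed, advance
    return state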
- A Task and Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks [3.635602838654497]
We present an approach for integrated task and motion planning based on an AND/OR graph network.
We leverage it to implement different classes of task and motion planning (TAMP) problems.
The approach is evaluated and validated both in simulation and with a real dual-arm robot manipulator, namely Baxter from Rethink Robotics.
arXiv Detail & Related papers (2025-03-10T17:28:22Z)
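As a conceptual aid for the AND/OR graph structure this paper builds on: an AND node succeeds only if all children do, an OR node if any child does, and leaves bottom out in motion-level feasibility checks. The sketch below is a generic illustration of that structure, not the paper's network.

```python
# Generic AND/OR decomposition node and feasibility recursion.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                     # "AND": all children needed; "OR": any one
    children: list = field(default_factory=list)

def feasible(node: Node, can_do) -> bool:
    """Leaves are motion-level checks via `can_do`; AND nodes need every
    child feasible, OR nodes at least one."""
    if not node.children:
        return can_do(node.name)
    results = [feasible(c, can_do) for c in node.children]
    return all(results) if node.kind == "AND" else any(results)
```

An iteratively deepened search would expand OR alternatives lazily rather than recursing exhaustively; this snippet shows only the feasibility semantics.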
- GRAPE: Generalizing Robot Policy via Preference Alignment [58.419992317452376]
We present GRAPE: Generalizing Robot Policy via Preference Alignment.
We show GRAPE increases success rates on in-domain and unseen manipulation tasks by 51.79% and 58.20%, respectively.
GRAPE can be aligned with various objectives, such as safety and efficiency, reducing collision rates by 37.44% and rollout step-length by 11.15%, respectively.
arXiv Detail & Related papers (2024-11-28T18:30:10Z)
- EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents [33.77674812074215]
We introduce a novel multi-agent framework designed to enable effective collaboration among heterogeneous robots. We propose a self-prompted approach, where agents comprehend robot URDF files and call robot kinematics tools to generate descriptions of their physical capabilities. The Habitat-MAS benchmark is designed to assess how a multi-agent framework handles tasks that require embodiment-aware reasoning.
arXiv Detail & Related papers (2024-10-30T03:20:01Z)
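EMOS's self-prompted capability descriptions can be approximated in miniature: parse a robot's URDF and distill a textual summary for an LLM agent to reason over. The sketch below uses only the Python standard library; its one-line summary is a trivial stand-in for the paper's kinematics-tool pipeline.

```python
# Toy URDF-to-capability-summary distillation (stdlib only).
import xml.etree.ElementTree as ET

def summarize_urdf(urdf_path: str) -> str:
    """Distill a URDF into a one-line capability description that a
    language agent could include in its prompt."""
    root = ET.parse(urdf_path).getroot()
    links = root.findall("link")
    joints = [(j.get("name"), j.get("type")) for j in root.findall("joint")]
    movable = [n for n, t in joints
               if t in ("revolute", "prismatic", "continuous")]
    return (f"Robot '{root.get('name')}' has {len(links)} links and "
            f"{len(joints)} joints ({len(movable)} actuated: "
            f"{', '.join(movable)}).")
```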
- ConceptAgent: LLM-Driven Precondition Grounding and Tree Search for Robust Task Planning and Execution [33.252158560173655]
ConceptAgent is a natural language-driven robotic platform designed for task execution in unstructured environments.
We present innovations designed to limit these shortcomings, including 1) Predicate Grounding to prevent and recover from infeasible actions, and 2) an embodied version of LLM-guided Monte Carlo Tree Search with self-reflection.
arXiv Detail & Related papers (2024-10-08T15:05:40Z)
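Precondition grounding, as ConceptAgent's summary describes it, amounts to checking an LLM-proposed action against the current symbolic state before execution and surfacing the unmet predicates for recovery. A minimal sketch, with all names hypothetical:

```python
# Hypothetical precondition-grounding check over symbolic predicates.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # symbolic predicates, e.g. ("holding", "cup")

def ground_preconditions(action: Action, state: set) -> set:
    """Return the predicates blocking `action` in the current state;
    an empty set means the action is feasible to attempt."""
    return set(action.preconditions) - state

# Usage: feed unmet predicates back to the planner so it can recover
# instead of executing an infeasible action.
#   unmet = ground_preconditions(pick_up_cup, world_state)
#   if unmet: replan_with_feedback(unmet)
```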
- COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models [49.24666980374751]
COHERENT is a novel LLM-based task planning framework for collaboration of heterogeneous multi-robot systems. A Proposal-Execution-Feedback-Adjustment mechanism is designed to decompose and assign actions for individual robots. The experimental results show that our work surpasses the previous methods by a large margin in terms of success rate and execution efficiency.
arXiv Detail & Related papers (2024-09-23T15:53:41Z)
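Reduced to a single loop, a Proposal-Execution-Feedback-Adjustment mechanism of the kind COHERENT's summary names could look like the sketch below; `propose`, `execute`, and `adjust` stand in for the paper's LLM-driven components and are not its actual API.

```python
# Hypothetical Proposal-Execution-Feedback-Adjustment round.
def pefa(task, robots, propose, execute, adjust, max_rounds=10):
    """propose: central planner assigning subtasks to robots;
    execute: runs a subtask on a robot, returns (ok, feedback);
    adjust: revises the assignment from the collected feedback."""
    assignment = propose(task, robots)                 # Proposal
    for _ in range(max_rounds):
        feedback = {r: execute(r, sub)                 # Execution
                    for r, sub in assignment.items()}
        if all(ok for ok, _ in feedback.values()):
            return assignment                          # every subtask succeeded
        assignment = adjust(assignment, feedback)      # Feedback -> Adjustment
    return assignment
```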
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task and Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem whose discrete actions must be realized by low-level continuous motions.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- RH20T-P: A Primitive-Level Robotic Dataset Towards Composable Generalization Agents [105.13169239919272]
We propose RH20T-P, a primitive-level robotic manipulation dataset. It contains about 38k video clips covering 67 diverse manipulation tasks in real-world scenarios. We standardize a plan-execute paradigm for composable generalization agents (CGAs) and implement an exemplar baseline, RA-P, on RH20T-P.
arXiv Detail & Related papers (2024-03-28T17:42:54Z)
- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation [55.485985317538194]
ProcTHOR is a framework for procedural generation of Embodied AI environments.
We demonstrate state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation.
arXiv Detail & Related papers (2022-06-14T17:09:35Z)