Model-Based AI Planning and Execution Systems for Robotics
- URL: http://arxiv.org/abs/2505.04493v1
- Date: Wed, 07 May 2025 15:17:38 GMT
- Title: Model-Based AI Planning and Execution Systems for Robotics
- Authors: Or Wertheim, Ronen I. Brafman
- Abstract summary: Model-based planning and execution systems offer a principled approach to building flexible autonomous robots. We consider the diverse design choices and issues existing systems attempt to address, the different solutions proposed so far, and suggest avenues for future development.
- Score: 8.287206589886878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-based planning and execution systems offer a principled approach to building flexible autonomous robots that can perform diverse tasks by automatically combining a host of basic skills. This idea is almost as old as modern robotics. Yet, while diverse general-purpose reasoning architectures have been proposed since, general-purpose systems that are integrated with modern robotic platforms have emerged only recently, starting with the influential ROSPlan system. Since then, a growing number of model-based systems for robot task-level control have emerged. In this paper, we consider the diverse design choices and issues existing systems attempt to address, the different solutions proposed so far, and suggest avenues for future development.
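To make the task-level planning idea concrete, here is a minimal, hypothetical sketch: basic skills are declared as actions with preconditions and effects over a symbolic state, and a plan is computed by search rather than hand-coded. The domain, action names, and breadth-first search are illustrative assumptions only; systems such as ROSPlan rely on PDDL models and dedicated planners.

```python
# Toy model-based task planner: actions as precondition/effect rules over a
# symbolic state, plan found by breadth-first search. Illustrative only.
from collections import deque

# Each action: (name, preconditions, add effects, delete effects)
ACTIONS = [
    ("pick(cup)", {"at(robot,table)", "on(cup,table)", "handempty"},
     {"holding(cup)"}, {"on(cup,table)", "handempty"}),
    ("move(table,sink)", {"at(robot,table)"},
     {"at(robot,sink)"}, {"at(robot,table)"}),
    ("place(cup,sink)", {"at(robot,sink)", "holding(cup)"},
     {"on(cup,sink)", "handempty"}, {"holding(cup)"}),
]

def plan(initial, goal):
    """Breadth-first search over symbolic states; returns a list of action names."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

if __name__ == "__main__":
    init = {"at(robot,table)", "on(cup,table)", "handempty"}
    goal = {"on(cup,sink)"}
    print(plan(init, goal))  # ['pick(cup)', 'move(table,sink)', 'place(cup,sink)']
```

The point of the declarative model is that adding a new skill means adding one more action description, not rewriting the controller.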
Related papers
- Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review [4.540236408836132]
We present the first systematic review of the integration of foundation models in mobile service robotics.
We explore the role of such models in enabling real-time sensor fusion, language-conditioned control, and adaptive task execution.
We also discuss real-world applications in the domestic assistance, healthcare, and service automation sectors.
arXiv Detail & Related papers (2025-05-26T20:08:09Z)
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness.
The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z)
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model on its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
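The flow-matching objective behind such vision-language-action models is compact enough to sketch: a velocity field is regressed toward the straight-line path from noise to the target action. The toy linear model, dimensions, and random data below are illustrative assumptions, not the $π_0$ architecture.

```python
# Minimal conditional flow-matching objective (toy version, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, batch = 8, 4, 64

W = rng.normal(scale=0.1, size=(obs_dim + act_dim + 1, act_dim))  # toy velocity model

def velocity(obs, x_t, t):
    """Predict the flow velocity from observation, noisy action, and time t."""
    feats = np.concatenate([obs, x_t, t[:, None]], axis=1)
    return feats @ W

# One training step (loss only, optimizer omitted for brevity):
obs = rng.normal(size=(batch, obs_dim))   # stand-in for VLM features
a_1 = rng.normal(size=(batch, act_dim))   # expert actions (the data)
a_0 = rng.normal(size=(batch, act_dim))   # Gaussian noise sample
t = rng.uniform(size=batch)               # random interpolation times

x_t = (1.0 - t)[:, None] * a_0 + t[:, None] * a_1   # point on the straight path
target = a_1 - a_0                                  # velocity of that path
loss = np.mean((velocity(obs, x_t, t) - target) ** 2)
print(f"flow-matching loss: {loss:.4f}")

# At inference time, actions are generated by integrating the learned velocity
# field from noise toward t = 1.
```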
- EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents [33.77674812074215]
We introduce a novel multi-agent framework designed to enable effective collaboration among heterogeneous robots.
We propose a self-prompted approach, where agents comprehend robot URDF files and call robot kinematics tools to generate descriptions of their physics capabilities.
The Habitat-MAS benchmark is designed to assess how a multi-agent framework handles tasks that require embodiment-aware reasoning.
arXiv Detail & Related papers (2024-10-30T03:20:01Z)
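To make the EMOS-style "self-prompting" idea above concrete, the sketch below parses a toy URDF and emits a plain-text capability summary that an LLM agent could reason over. The URDF snippet, description format, and function name are assumptions for illustration, not the paper's actual pipeline.

```python
# Turn a robot's URDF joints into a textual capability description (illustrative).
import xml.etree.ElementTree as ET

URDF = """
<robot name="toy_arm">
  <link name="base"/>
  <link name="upper_arm"/>
  <link name="gripper"/>
  <joint name="shoulder" type="revolute">
    <parent link="base"/>
    <child link="upper_arm"/>
    <limit lower="-1.57" upper="1.57" effort="50" velocity="1.0"/>
  </joint>
  <joint name="wrist" type="revolute">
    <parent link="upper_arm"/>
    <child link="gripper"/>
    <limit lower="-3.14" upper="3.14" effort="10" velocity="2.0"/>
  </joint>
</robot>
"""

def describe_capabilities(urdf_text: str) -> str:
    """Summarize links, joints, and limits as plain text for an agent prompt."""
    robot = ET.fromstring(urdf_text)
    lines = [f"Robot '{robot.get('name')}' has {len(robot.findall('link'))} links."]
    for joint in robot.findall("joint"):
        limit = joint.find("limit")
        lines.append(
            f"- joint '{joint.get('name')}' ({joint.get('type')}): "
            f"range [{limit.get('lower')}, {limit.get('upper')}] rad, "
            f"max effort {limit.get('effort')} Nm"
        )
    return "\n".join(lines)

print(describe_capabilities(URDF))
```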
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
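The general shape of such a code-generation pipeline is simple even if the details are not. Below is a heavily simplified, hypothetical sketch: an instruction is mapped to Python calls against a small robot API and then executed. The stub API, the canned "generated" code, and the absence of sandboxing are assumptions made to keep the example self-contained; none of it is RobotScript's interface.

```python
# Toy code-generation manipulation pipeline (illustrative, not RobotScript).
class RobotAPI:
    """Stub standing in for a real (or simulated) arm controller."""

    def move_to(self, obj: str) -> None:
        print(f"moving gripper above {obj}")

    def grasp(self, obj: str) -> None:
        print(f"grasping {obj}")

    def place_on(self, target: str) -> None:
        print(f"placing held object on {target}")


def generate_code(instruction: str) -> str:
    # A real pipeline would query an LLM with the instruction plus the
    # RobotAPI specification; a canned response keeps this sketch runnable.
    return (
        "robot.move_to('red block')\n"
        "robot.grasp('red block')\n"
        "robot.move_to('tray')\n"
        "robot.place_on('tray')\n"
    )


instruction = "put the red block on the tray"
snippet = generate_code(instruction)
exec(snippet, {"robot": RobotAPI()})  # sandboxing omitted for brevity
```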
- Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis [82.59451639072073]
General-purpose robots would operate seamlessly in any environment, with any object, and utilize various skills to complete diverse tasks.
As a community, we have been constraining most robotic systems by designing them for specific tasks, training them on specific datasets, and deploying them within specific environments.
Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models, we devote this survey to exploring how foundation models can be applied to general-purpose robotics.
arXiv Detail & Related papers (2023-12-14T10:02:55Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Introduction to Latent Variable Energy-Based Models: A Path Towards Autonomous Machine Intelligence [13.27120983899836]
We summarize the main ideas behind the architecture of autonomous intelligence of the future proposed by Yann LeCun.
In particular, we introduce energy-based and latent variable models and combine their advantages in the building block of LeCun's proposal.
arXiv Detail & Related papers (2023-06-05T03:55:26Z)
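The core mechanic of the latent-variable energy-based models mentioned above is compact enough to show directly: a free energy F(x, y) = min_z E(x, y, z) is obtained by minimizing the energy over the latent. The quadratic energy, dimensions, and optimizer settings below are assumptions chosen for brevity, not the models in the paper.

```python
# Inference in a toy latent-variable EBM: gradient descent on the latent z.
import numpy as np

rng = np.random.default_rng(0)
dx, dy, dz = 5, 3, 2
W = rng.normal(size=(dy, dx))   # maps input x to output space
U = rng.normal(size=(dy, dz))   # maps latent z to output space
lam = 0.1                       # regularizer keeping the latent small

def energy(x, y, z):
    r = y - (W @ x + U @ z)
    return 0.5 * r @ r + 0.5 * lam * z @ z

def infer_latent(x, y, steps=500, lr=0.05):
    """Gradient descent on z: dE/dz = -U^T r + lam * z."""
    z = np.zeros(dz)
    for _ in range(steps):
        r = y - (W @ x + U @ z)
        z -= lr * (-U.T @ r + lam * z)
    return z

x = rng.normal(size=dx)
y = rng.normal(size=dy)
z_star = infer_latent(x, y)
print("free energy F(x, y) =", energy(x, y, z_star))

# A full prediction would also minimize F over y; for this toy quadratic
# energy that collapses to y_hat = W @ x (with the latent at z = 0).
```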
- The Need for a Meta-Architecture for Robot Autonomy [0.0]
Long-term autonomy of robotic systems implicitly requires platforms that are able to handle faults, problems in behaviors, or lack of knowledge.
We put forward the case for a generative model of cognitive architectures for autonomous robotic agents that subscribes to the principles of model-based engineering and certifiable dependability.
arXiv Detail & Related papers (2022-07-20T07:27:23Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate policy adaptation across platforms as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
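As a rough illustration of the adaptation setting in the last entry, the sketch below fits a shared linear model to data pooled from many simulated "platforms" and then adapts it to a new one from five examples. The linear model, data generator, and plain-gradient adaptation are assumptions for illustration and stand in for the paper's Bayesian meta-learning procedure.

```python
# Few-shot adaptation across platforms with a shared initialization (illustrative).
import numpy as np

rng = np.random.default_rng(0)
dim = 3

def platform_data(true_w, n):
    """Each platform maps state features to a control target with its own parameters."""
    X = rng.normal(size=(n, dim))
    y = X @ true_w + 0.05 * rng.normal(size=n)
    return X, y

# Meta-training: pool data from many platforms that share common structure.
shared = np.array([1.0, -2.0, 0.5])
platforms = [shared + 0.2 * rng.normal(size=dim) for _ in range(20)]
meta_sets = [platform_data(w, 50) for w in platforms]
X_meta = np.concatenate([X for X, _ in meta_sets])
y_meta = np.concatenate([y for _, y in meta_sets])
w_init = np.linalg.lstsq(X_meta, y_meta, rcond=None)[0]   # shared initialization

# Few-shot adaptation: a new platform, only 5 examples, a few gradient steps.
w_new_true = shared + 0.2 * rng.normal(size=dim)
X_few, y_few = platform_data(w_new_true, 5)
w = w_init.copy()
for _ in range(20):
    grad = X_few.T @ (X_few @ w - y_few) / len(y_few)
    w -= 0.1 * grad

print("parameter error before adaptation:", np.linalg.norm(w_init - w_new_true))
print("parameter error after adaptation: ", np.linalg.norm(w - w_new_true))
```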