Toward General-Purpose Robots via Foundation Models: A Survey and
Meta-Analysis
- URL: http://arxiv.org/abs/2312.08782v2
- Date: Fri, 15 Dec 2023 18:25:48 GMT
- Title: Toward General-Purpose Robots via Foundation Models: A Survey and
Meta-Analysis
- Authors: Yafei Hu and Quanting Xie and Vidhi Jain and Jonathan Francis and Jay
Patrikar and Nikhil Keetha and Seungchan Kim and Yaqi Xie and Tianyi Zhang
and Shibo Zhao and Yu Quan Chong and Chen Wang and Katia Sycara and Matthew
Johnson-Roberson and Dhruv Batra and Xiaolong Wang and Sebastian Scherer and
Zsolt Kira and Fei Xia and Yonatan Bisk
- Abstract summary: Most existing robotic systems have been designed for specific tasks, trained on specific datasets, and deployed within specific environments.
Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models, we devote this survey to exploring how foundation models can be applied to robotics.
- Score: 73.89558418030418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building general-purpose robots that can operate seamlessly, in any
environment, with any object, and utilizing various skills to complete diverse
tasks has been a long-standing goal in Artificial Intelligence. Unfortunately,
most existing robotic systems have been constrained - having been
designed for specific tasks, trained on specific datasets, and deployed within
specific environments. These systems usually require extensively-labeled data,
rely on task-specific models, have numerous generalization issues when deployed
in real-world scenarios, and struggle to remain robust to distribution shifts.
Motivated by the impressive open-set performance and content generation
capabilities of web-scale, large-capacity pre-trained models (i.e., foundation
models) in research fields such as Natural Language Processing (NLP) and
Computer Vision (CV), we devote this survey to exploring (i) how these existing
foundation models from NLP and CV can be applied to the field of robotics, and
also exploring (ii) what a robotics-specific foundation model would look like.
We begin by providing an overview of what constitutes a conventional robotic
system and the fundamental barriers to making it universally applicable. Next,
we establish a taxonomy to discuss current work exploring ways to leverage
existing foundation models for robotics and develop ones catered to robotics.
Finally, we discuss key challenges and promising future directions in using
foundation models for enabling general-purpose robotic systems. We encourage
readers to view our living GitHub repository of resources, including papers
reviewed in this survey as well as related projects and repositories for
developing foundation models for robotics.
Related papers
- Foundation Models for Autonomous Robots in Unstructured Environments [15.517532442044962]
The study systematically reviews applications of foundation models in two areas: robotics and unstructured environments.
Findings showed that the linguistic capabilities of LLMs have been utilized more than other features for improving perception in human-robot interactions.
LLMs have seen more applications in project management and safety in construction, and in natural hazard detection in disaster management.
arXiv Detail & Related papers (2024-07-19T13:26:52Z)
- RH20T-P: A Primitive-Level Robotic Dataset Towards Composable Generalization Agents [107.97394661147102]
The ultimate goal of robotic learning is to acquire a comprehensive and generalizable robotic system.
Recent progress in using language models as high-level planners has demonstrated that task complexity can be reduced by decomposing tasks into primitive-level plans.
Despite the promising future, the community is not yet adequately prepared for composable generalization agents.
arXiv Detail & Related papers (2024-03-28T17:42:54Z)
- Real-World Robot Applications of Foundation Models: A Review [27.054641862377807]
Recent developments in foundation models, like Large Language Models (LLMs) and Vision-Language Models (VLMs), facilitate flexible application across different tasks and modalities.
This paper provides an overview of the practical application of foundation models in real-world robotics.
arXiv Detail & Related papers (2024-02-08T15:19:50Z)
- A Survey on Robotics with Foundation Models: toward Embodied AI [30.999414445286757]
Recent advances in computer vision, natural language processing, and multi-modality learning have shown that foundation models have superhuman capabilities for specific tasks.
This survey aims to provide a comprehensive and up-to-date overview of foundation models in robotics, focusing on autonomous manipulation and encompassing high-level planning and low-level control.
arXiv Detail & Related papers (2024-02-04T07:55:01Z)
- AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents [109.3804962220498]
AutoRT is a system to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision.
We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies.
We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows data-collecting robots to follow instructions that align with human preferences.
arXiv Detail & Related papers (2024-01-23T18:45:54Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Towards Generalist Robots: A Promising Paradigm via Generative Simulation [18.704506851738365]
This document serves as a position paper that outlines the authors' vision for a potential pathway towards generalist robots.
The authors believe the proposed paradigm is a feasible path towards accomplishing the long-standing goal of robotics research.
arXiv Detail & Related papers (2023-05-17T02:53:58Z)
- RT-1: Robotics Transformer for Real-World Control at Scale [98.09428483862165]
We present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties.
We verify our conclusions in a study of different model classes and their ability to generalize as a function of data size, model size, and data diversity, based on a large-scale data collection on real robots performing real-world tasks.
arXiv Detail & Related papers (2022-12-13T18:55:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.