RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis
- URL: http://arxiv.org/abs/2402.16117v1
- Date: Sun, 25 Feb 2024 15:31:43 GMT
- Title: RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis
- Authors: Yao Mu, Junting Chen, Qinglong Zhang, Shoufa Chen, Qiaojun Yu,
Chongjian Ge, Runjian Chen, Zhixuan Liang, Mengkang Hu, Chaofan Tao, Peize
Sun, Haibao Yu, Chao Yang, Wenqi Shao, Wenhai Wang, Jifeng Dai, Yu Qiao,
Mingyu Ding, Ping Luo
- Abstract summary: We propose a tree-structured multimodal code generation framework for generalized robotic behavior synthesis, termed RoboCodeX.
RoboCodeX decomposes high-level human instructions into multiple object-centric manipulation units consisting of physical preferences such as affordance and safety constraints.
To further enhance the capability to map conceptual and perceptual understanding into control commands, a specialized multimodal reasoning dataset is collected for pre-training and an iterative self-updating methodology is introduced for supervised fine-tuning.
- Score: 102.1876259853457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robotic behavior synthesis, the problem of understanding multimodal inputs
and generating precise physical control for robots, is an important part of
Embodied AI. Despite successes in applying multimodal large language models for
high-level understanding, it remains challenging to translate these conceptual
understandings into detailed robotic actions while achieving generalization
across various scenarios. In this paper, we propose a tree-structured
multimodal code generation framework for generalized robotic behavior
synthesis, termed RoboCodeX. RoboCodeX decomposes high-level human instructions
into multiple object-centric manipulation units consisting of physical
preferences such as affordance and safety constraints, and applies code
generation to introduce generalization ability across various robotics
platforms. To further enhance the capability to map conceptual and perceptual
understanding into control commands, a specialized multimodal reasoning dataset
is collected for pre-training and an iterative self-updating methodology is
introduced for supervised fine-tuning. Extensive experiments demonstrate that
RoboCodeX achieves state-of-the-art performance in both simulators and real
robots on four different kinds of manipulation tasks and one navigation task.
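The abstract describes decomposing a high-level instruction into object-centric manipulation units that carry physical preferences (affordance, safety) and are then rendered as code. A minimal, hypothetical sketch of that idea follows; all names here (`ManipulationUnit`, `decompose`, `to_code`) are illustrative assumptions, not the authors' actual API, and the decomposition step that RoboCodeX performs with a multimodal model is hard-coded for one toy example.

```python
from dataclasses import dataclass, field

@dataclass
class ManipulationUnit:
    """One object-centric node in the task tree (hypothetical structure)."""
    target_object: str
    action: str
    preferences: dict = field(default_factory=dict)  # e.g. affordance, safety
    children: list = field(default_factory=list)

def decompose(instruction: str) -> ManipulationUnit:
    """Toy stand-in for the model-driven decomposition: returns a fixed
    two-unit tree for illustration only."""
    root = ManipulationUnit("task", instruction)
    root.children = [
        ManipulationUnit("drawer", "open",
                         {"affordance": "handle", "safety": "slow_pull"}),
        ManipulationUnit("mug", "pick",
                         {"affordance": "rim_grasp", "safety": "low_force"}),
    ]
    return root

def to_code(unit: ManipulationUnit) -> list:
    """Flatten the tree into platform-agnostic pseudo-commands, which is
    where code generation would introduce cross-platform generalization."""
    lines = []
    for child in unit.children:
        lines.append(f"robot.{child.action}({child.target_object!r}, "
                     f"**{child.preferences})")
        lines.extend(to_code(child))
    return lines

tree = decompose("put the mug in the drawer")
for command in to_code(tree):
    print(command)
```

The design point this sketch illustrates is that each unit keeps its physical preferences attached, so the same tree can be lowered onto different robot platforms by changing only the code-emission step.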
Related papers
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- QUAR-VLA: Vision-Language-Action Model for Quadruped Robots [37.952398683031895]
The central idea is to elevate the overall intelligence of the robot.
We propose QUAdruped Robotic Transformer (QUART), a family of VLA models to integrate visual information and instructions from diverse modalities as input.
Our approach leads to performant robotic policies and enables QUART to obtain a range of emergent capabilities.
arXiv Detail & Related papers (2023-12-22T06:15:03Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer them to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Instruction-driven history-aware policies for robotic manipulations [82.25511767738224]
We propose a unified transformer-based approach that takes into account multiple inputs.
In particular, our transformer architecture integrates (i) natural language instructions and (ii) multi-view scene observations.
We evaluate our method on the challenging RLBench benchmark and on a real-world robot.
arXiv Detail & Related papers (2022-09-11T16:28:25Z)
- MetaMorph: Learning Universal Controllers with Transformers [45.478223199658785]
In robotics, we primarily train a single robot for a single task.
Modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies.
We propose MetaMorph, a Transformer based approach to learn a universal controller over a modular robot design space.
arXiv Detail & Related papers (2022-03-22T17:58:31Z)
- Manipulation of Articulated Objects using Dual-arm Robots via Answer Set Programming [10.316694915810947]
The manipulation of articulated objects is of primary importance in Robotics, and can be considered as one of the most complex manipulation tasks.
Traditionally, this problem has been tackled by developing ad-hoc approaches, which lack flexibility and portability.
We present a framework based on Answer Set Programming (ASP) for the automated manipulation of articulated objects in a robot control architecture.
arXiv Detail & Related papers (2020-10-02T18:50:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.