Instruction-Following Agents with Multimodal Transformer
- URL: http://arxiv.org/abs/2210.13431v4
- Date: Sat, 25 Mar 2023 21:36:36 GMT
- Title: Instruction-Following Agents with Multimodal Transformer
- Authors: Hao Liu, Lisa Lee, Kimin Lee, Pieter Abbeel
- Abstract summary: We propose a simple yet effective model for robots to solve instruction-following tasks in vision-based environments.
Our method consists of a multimodal transformer that encodes visual observations and language instructions.
We show that this unified transformer model outperforms all state-of-the-art pre-trained or trained-from-scratch methods in both single-task and multi-task settings.
- Score: 95.70039658112873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans are excellent at understanding language and vision to accomplish a
wide range of tasks. In contrast, creating general instruction-following
embodied agents remains a difficult challenge. Prior work that uses pure
language-only models lacks visual grounding, making it difficult to connect
language instructions with visual observations. On the other hand, methods that
use pre-trained multimodal models typically come with divided language and
visual representations, requiring the design of specialized network
architectures to fuse them together. We propose a simple yet effective model
for robots to solve instruction-following tasks in vision-based environments.
Our method
consists of a multimodal transformer that encodes visual observations and
language instructions, and a transformer-based policy that predicts actions
based on encoded representations. The multimodal transformer is pre-trained on
millions of image-text pairs and natural language text, thereby producing
generic cross-modal representations of observations and instructions. The
transformer-based policy keeps track of the full history of observations and
actions, and predicts actions autoregressively. Despite its simplicity, we show
that this unified transformer model outperforms all state-of-the-art
pre-trained or trained-from-scratch methods in both single-task and multi-task
settings. Our model also shows better model scalability and generalization
ability than prior work.
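The following is a minimal, illustrative PyTorch sketch of the two components described in the abstract: a multimodal transformer that jointly encodes image patches and instruction tokens, and a causal transformer policy that predicts actions from the encoded observation history. Every module name, dimension, and the discrete action head are assumptions made for illustration only; the paper's actual pre-trained encoder (trained on millions of image-text pairs) and action space are not reproduced here.
```python
# Illustrative sketch only -- hyperparameters, tokenization, and the action
# space are placeholders, not the authors' released model.
import torch
import torch.nn as nn


class MultimodalEncoder(nn.Module):
    """Jointly encodes image patches and instruction tokens (a hypothetical
    stand-in for the pre-trained multimodal transformer in the abstract)."""

    def __init__(self, d_model=256, vocab_size=1000, n_layers=2, n_heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        self.text_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, image, instruction_ids):
        # image: (B, 3, H, W); instruction_ids: (B, L)
        patches = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, P, d)
        words = self.text_embed(instruction_ids)                      # (B, L, d)
        tokens = torch.cat([patches, words], dim=1)  # one joint cross-modal sequence
        return self.encoder(tokens).mean(dim=1)      # (B, d) pooled representation


class TransformerPolicy(nn.Module):
    """Causal transformer over the history of encoded observations; predicts
    the next action autoregressively (discrete action head assumed here)."""

    def __init__(self, d_model=256, n_actions=10, n_layers=2, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, obs_history):
        # obs_history: (B, T, d) -- one encoded observation per timestep
        T = obs_history.size(1)
        # Upper-triangular -inf mask so step t only attends to steps <= t.
        causal_mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.backbone(obs_history, mask=causal_mask)
        return self.action_head(h)  # (B, T, n_actions) logits per timestep


if __name__ == "__main__":
    encoder, policy = MultimodalEncoder(), TransformerPolicy()
    image = torch.randn(1, 3, 64, 64)             # dummy observation
    instruction = torch.randint(0, 1000, (1, 8))  # dummy tokenized instruction
    obs = encoder(image, instruction)             # (1, 256)
    history = obs.unsqueeze(1).repeat(1, 5, 1)    # pretend 5-step history
    print(policy(history).shape)                  # torch.Size([1, 5, 10])
```
Running the script prints the shape of the per-timestep action logits; in the paper the encoder weights come from large-scale image-text pre-training rather than being trained from scratch as in this toy example.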
Related papers
- TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild [102.93338424976959]
We introduce TextBind, an almost annotation-free framework for empowering large language models with multi-turn interleaved instruction-following capabilities.
Our approach requires only image-caption pairs and generates multi-turn multimodal instruction-response conversations from a language model.
To accommodate interleaved image-text inputs and outputs, we devise MIM, a language model-centric architecture that seamlessly integrates image encoder and decoder models.
arXiv Detail & Related papers (2023-09-14T15:34:01Z) - Vision Language Transformers: A Survey [0.9137554315375919]
Vision language tasks, such as answering questions about or generating captions that describe an image, are difficult tasks for computers to perform.
Recent research has adapted the pretrained transformer architecture introduced in Vaswani et al. (2017) to vision language modeling.
Transformer models have greatly improved performance and versatility over previous vision language models.
arXiv Detail & Related papers (2023-07-06T19:08:56Z) - PaLM-E: An Embodied Multimodal Language Model [101.29116156731762]
We propose embodied language models to incorporate real-world continuous sensor modalities into language models.
We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks.
Our largest model, PaLM-E-562B with 562B parameters, is a visual-language generalist with state-of-the-art performance on OK-VQA.
arXiv Detail & Related papers (2023-03-06T18:58:06Z) - VIMA: General Robot Manipulation with Multimodal Prompts [82.01214865117637]
We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts.
We develop a new simulation benchmark that consists of thousands of procedurally-generated tabletop tasks.
We design a transformer-based robot agent, VIMA, that processes these prompts and outputs motor actions autoregressively.
arXiv Detail & Related papers (2022-10-06T17:50:11Z) - Instruction-driven history-aware policies for robotic manipulations [82.25511767738224]
We propose a unified transformer-based approach that takes into account multiple inputs.
In particular, our transformer architecture integrates (i) natural language instructions and (ii) multi-view scene observations.
We evaluate our method on the challenging RLBench benchmark and on a real-world robot.
arXiv Detail & Related papers (2022-09-11T16:28:25Z) - Pre-training image-language transformers for open-vocabulary tasks [53.446599611203474]
We present a pre-training approach for vision and language transformer models, which is based on a mixture of diverse tasks.
We explore both the use of image-text captioning data in pre-training, which does not need additional supervision, as well as object-aware strategies to pre-train the model.
We evaluate the method on a number of text-generative vision+language tasks, such as Visual Question Answering, visual entailment, and captioning, and demonstrate large gains over standard pre-training methods.
arXiv Detail & Related papers (2022-09-09T16:11:11Z) - Reshaping Robot Trajectories Using Natural Language Commands: A Study of Multi-Modal Data Alignment Using Transformers [33.7939079214046]
We provide a flexible language-based interface for human-robot collaboration.
We take advantage of recent advancements in the field of large language models to encode the user command.
We train the model using imitation learning over a dataset containing robot trajectories modified by language commands.
arXiv Detail & Related papers (2022-03-25T01:36:56Z)
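The last entry above ("Reshaping Robot Trajectories Using Natural Language Commands") trains a model by imitation learning over robot trajectories modified in response to language commands. The sketch below is a hypothetical, simplified stand-in: random tensors replace the real dataset, a fixed-size vector replaces the large-language-model command encoding, and a plain MLP replaces the authors' transformer-based alignment model.
```python
# Simplified behavior-cloning sketch; all data and dimensions are placeholders.
import torch
import torch.nn as nn


class TrajectoryReshaper(nn.Module):
    def __init__(self, traj_len=20, traj_dim=3, cmd_dim=64, hidden=128):
        super().__init__()
        self.cmd_proj = nn.Linear(cmd_dim, hidden)            # stand-in for an LLM command encoder
        self.traj_proj = nn.Linear(traj_len * traj_dim, hidden)
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, traj_len * traj_dim),
        )
        self.traj_len, self.traj_dim = traj_len, traj_dim

    def forward(self, traj, cmd_embedding):
        # traj: (B, T, 3) waypoints; cmd_embedding: (B, cmd_dim)
        h = torch.cat([self.traj_proj(traj.flatten(1)),
                       self.cmd_proj(cmd_embedding)], dim=-1)
        return self.decoder(h).view(-1, self.traj_len, self.traj_dim)


# Imitation learning: regress toward trajectories a human modified in response
# to the command (synthetic tensors here stand in for the real dataset).
model = TrajectoryReshaper()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
original = torch.randn(32, 20, 3)   # placeholder original trajectories
command = torch.randn(32, 64)       # placeholder command embeddings
modified = original + 0.1           # placeholder "human-modified" targets
for step in range(100):
    loss = nn.functional.mse_loss(model(original, command), modified)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```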
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.