LIV: Language-Image Representations and Rewards for Robotic Control
- URL: http://arxiv.org/abs/2306.00958v1
- Date: Thu, 1 Jun 2023 17:52:23 GMT
- Title: LIV: Language-Image Representations and Rewards for Robotic Control
- Authors: Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang,
Osbert Bastani, Dinesh Jayaraman
- Abstract summary: We present a unified objective for vision-language representation and reward learning from action-free videos with text annotations.
We use LIV to pre-train the first control-centric vision-language representation from large human video datasets such as EpicKitchen.
Our results validate the advantages of joint vision-language representation and reward learning within the unified, compact LIV framework.
- Score: 37.12560985663822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Language-Image Value learning (LIV), a unified objective for
vision-language representation and reward learning from action-free videos with
text annotations. Exploiting a novel connection between dual reinforcement
learning and mutual information contrastive learning, the LIV objective trains
a multi-modal representation that implicitly encodes a universal value function
for tasks specified as language or image goals. We use LIV to pre-train the
first control-centric vision-language representation from large human video
datasets such as EpicKitchen. Given only a language or image goal, the
pre-trained LIV model can assign dense rewards to each frame in videos of
unseen robots or humans attempting that task in unseen environments. Further,
when some target domain-specific data is available, the same objective can be
used to fine-tune and improve LIV and even other pre-trained representations
for robotic control and reward specification in that domain. In our experiments
on several simulated and real-world robot environments, LIV models consistently
outperform the best prior input state representations for imitation learning,
as well as reward specification methods for policy synthesis. Our results
validate the advantages of joint vision-language representation and reward
learning within the unified, compact LIV framework.
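For illustration, below is a minimal sketch of how a LIV-style vision-language model could assign dense per-frame rewards to a video given a language goal, by scoring each frame's embedding against the goal embedding. The abstract does not specify the LIV checkpoint, API, or training objective, so the sketch uses an off-the-shelf CLIP model from Hugging Face transformers as a stand-in encoder; the model name and helper function are illustrative assumptions, not the authors' implementation.

# Sketch: dense reward assignment from a language goal, using CLIP as a
# stand-in for the LIV vision-language encoder (illustrative only).
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def dense_rewards(frames, goal_text):
    """Score each video frame by its similarity to a language goal.

    frames: list of PIL.Image frames from a (robot or human) video.
    goal_text: natural-language task description, e.g. "open the drawer".
    Returns a 1-D tensor of per-frame rewards in [-1, 1].
    """
    with torch.no_grad():
        image_inputs = processor(images=frames, return_tensors="pt").to(device)
        text_inputs = processor(text=[goal_text], return_tensors="pt", padding=True).to(device)
        frame_emb = F.normalize(model.get_image_features(**image_inputs), dim=-1)
        goal_emb = F.normalize(model.get_text_features(**text_inputs), dim=-1)
    # Cosine similarity between every frame and the goal embedding serves as
    # a dense, per-frame reward signal for downstream policy learning.
    return (frame_emb @ goal_emb.T).squeeze(-1)

An image goal could be handled the same way by encoding the goal image with the image encoder instead of the text encoder and reusing the same similarity score.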
Related papers
- KALIE: Fine-Tuning Vision-Language Models for Open-World Manipulation without Robot Data [45.25288643161976]
We propose Keypoint Affordance Learning from Imagined Environments (KALIE) for robotic control in a scalable manner.
Instead of directly producing motor commands, KALIE controls the robot by predicting point-based affordance representations.
We demonstrate that KALIE can learn to robustly solve new manipulation tasks with unseen objects given only 50 example data points.
arXiv Detail & Related papers (2024-09-21T08:45:16Z)
- Adapt2Reward: Adapting Video-Language Models to Generalizable Robotic Rewards via Failure Prompts [21.249837293326497]
A generalizable reward function is central to reinforcement learning and planning for robots.
This paper transfers video-language models with robust generalization into a language-conditioned reward function.
Our model shows outstanding generalization to new environments and new instructions for robot planning and reinforcement learning.
arXiv Detail & Related papers (2024-07-20T13:22:59Z)
- Video-Language Critic: Transferable Reward Functions for Language-Conditioned Robotics [25.2461925479135]
Video-Language Critic is a reward model that can be trained on readily available cross-embodiment data.
Our model enables 2x more sample-efficient policy training on Meta-World tasks than a sparse reward alone.
arXiv Detail & Related papers (2024-05-30T12:18:06Z)
- RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control [140.48218261864153]
We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control.
Our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent capabilities from Internet-scale training.
arXiv Detail & Related papers (2023-07-28T21:18:02Z)
- PaLM-E: An Embodied Multimodal Language Model [101.29116156731762]
We propose embodied language models to incorporate real-world continuous sensor modalities into language models.
We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks.
Our largest model, PaLM-E-562B with 562B parameters, is a visual-language generalist with state-of-the-art performance on OK-VQA.
arXiv Detail & Related papers (2023-03-06T18:58:06Z)
- Language-Driven Representation Learning for Robotics [115.93273609767145]
Recent work in visual representation learning for robotics demonstrates the viability of learning from large video datasets of humans performing everyday tasks.
We introduce a framework for language-driven representation learning from human videos and captions.
We find that Voltron's language-driven learning outperforms the prior state-of-the-art, especially on targeted problems requiring higher-level control.
arXiv Detail & Related papers (2023-02-24T17:29:31Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
- Language Model-Based Paired Variational Autoencoders for Robotic Language Learning [18.851256771007748]
Similar to human infants, artificial agents can learn language while interacting with their environment.
We present a neural model that bidirectionally binds robot actions and their language descriptions in a simple object manipulation scenario.
Next, we introduce PVAE-BERT, which equips the model with a pretrained large-scale language model.
arXiv Detail & Related papers (2022-01-17T10:05:26Z)
- Align before Fuse: Vision and Language Representation Learning with Momentum Distillation [52.40490994871753]
We introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention.
We propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model.
ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks.
arXiv Detail & Related papers (2021-07-16T00:19:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.