Video as the New Language for Real-World Decision Making
- URL: http://arxiv.org/abs/2402.17139v1
- Date: Tue, 27 Feb 2024 02:05:29 GMT
- Title: Video as the New Language for Real-World Decision Making
- Authors: Sherry Yang, Jacob Walker, Jack Parker-Holder, Yilun Du, Jake Bruce,
Andre Barreto, Pieter Abbeel, Dale Schuurmans
- Abstract summary: Video data captures important information about the physical world that is difficult to express in language.
Video can serve as a unified interface that can absorb internet knowledge and represent diverse tasks.
We identify major impact opportunities in domains such as robotics, self-driving, and science.
- Score: 100.68643056416394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Both text and video data are abundant on the internet and support large-scale
self-supervised learning through next token or frame prediction. However, they
have not been equally leveraged: language models have had significant
real-world impact, whereas video generation has remained largely limited to
media entertainment. Yet video data captures important information about the
physical world that is difficult to express in language. To address this gap,
we discuss an under-appreciated opportunity to extend video generation to solve
tasks in the real world. We observe how, akin to language, video can serve as a
unified interface that can absorb internet knowledge and represent diverse
tasks. Moreover, we demonstrate how, like language models, video generation can
serve as a planner, agent, compute engine, and environment simulator through
techniques such as in-context learning, planning and reinforcement learning. We
identify major impact opportunities in domains such as robotics, self-driving,
and science, supported by recent work that demonstrates how such advanced
capabilities in video generation are plausibly within reach. Lastly, we
identify key challenges in video generation that impede progress. Addressing
these challenges will enable video generation models to demonstrate unique
value alongside language models in a wider array of AI applications.
Related papers
- Towards Holistic Language-video Representation: the language model-enhanced MSR-Video to Text Dataset [4.452729255042396]
A more robust and holistic language-video representation is the key to pushing video understanding forward.
Current plain text descriptions and a visual-only focus in language-video tasks limit performance on real-world natural-language video retrieval.
This paper introduces a method to automatically enhance video-language datasets, making them more modality and context-aware.
arXiv Detail & Related papers (2024-06-19T20:16:17Z)
- Towards Multi-Task Multi-Modal Models: A Video Generative Perspective [5.495245220300184]
This thesis chronicles our endeavor to build multi-task models for generating videos and other modalities under diverse conditions.
We unveil a novel approach to mapping bidirectionally between visual observation and interpretable lexical terms.
Our scalable visual token representation proves beneficial across generation, compression, and understanding tasks.
arXiv Detail & Related papers (2024-05-26T23:56:45Z)
- A Survey on Generative AI and LLM for Video Generation, Understanding, and Streaming [26.082980156232086]
Top-trending AI technologies, namely generative artificial intelligence (Generative AI) and large language models (LLMs), are reshaping the field of video technology.
The paper highlights the innovative use of these technologies in producing highly realistic videos.
In the realm of video streaming, the paper discusses how LLMs contribute to more efficient and user-centric streaming experiences.
arXiv Detail & Related papers (2024-01-30T14:37:10Z)
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
- InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation [90.71796406228265]
InternVid is a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations.
The InternVid dataset contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions totaling 4.1B words.
arXiv Detail & Related papers (2023-07-13T17:58:32Z)
- A Video Is Worth 4096 Tokens: Verbalize Videos To Understand Them In Zero Shot [67.00455874279383]
We propose verbalizing long videos to generate descriptions in natural language, then performing video-understanding tasks on the generated story as opposed to the original video.
Our method, despite being zero-shot, achieves significantly better results than supervised baselines for video understanding.
To alleviate the lack of story-understanding benchmarks, we publicly release the first dataset for persuasion strategy identification, a crucial task in computational social science.
arXiv Detail & Related papers (2023-05-16T19:13:11Z)
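Below is a minimal sketch of this verbalize-then-understand pipeline, assuming access to an off-the-shelf frame captioner and a text-only LLM; verbalize_video, answer_from_story, and the stub models are hypothetical names for illustration, not the authors' code.

```python
# Hypothetical sketch: caption sampled frames, stitch captions into a story,
# and run the downstream question over the story with a text-only model.
from typing import Callable, List

def verbalize_video(frames: List[object], caption_fn: Callable[[object], str]) -> str:
    """Turn a video into a textual story by captioning sampled frames in order."""
    captions = [caption_fn(frame) for frame in frames]
    return " ".join(f"Scene {i + 1}: {c}" for i, c in enumerate(captions))

def answer_from_story(story: str, question: str, llm_fn: Callable[[str], str]) -> str:
    """Answer a video-understanding question from the story instead of from pixels."""
    prompt = f"{story}\n\nQuestion: {question}\nAnswer:"
    return llm_fn(prompt)

# Toy usage with stand-in models (a real pipeline would call an image captioner and an LLM).
frames = ["frame_0", "frame_1", "frame_2"]                 # placeholders for decoded frames
caption_fn = lambda frame: f"a person interacts with an object ({frame})"
llm_fn = lambda prompt: "a placeholder answer"
story = verbalize_video(frames, caption_fn)
print(answer_from_story(story, "What persuasion strategy is used?", llm_fn))
```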
- Video Generation from Text Employing Latent Path Construction for Temporal Modeling [70.06508219998778]
Video generation is one of the most challenging tasks in machine learning and computer vision.
In this paper, we tackle the text-to-video generation problem, which is a conditional form of video generation.
We believe that video generation from natural language sentences will have an important impact on Artificial Intelligence.
arXiv Detail & Related papers (2021-07-29T06:28:20Z)
- VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer [76.3906723777229]
We present VidLanKD, a video-language knowledge distillation method for improving language understanding.
We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset.
In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models.
arXiv Detail & Related papers (2021-07-06T15:41:32Z)
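As a rough sketch of the teacher-to-student transfer step described in this entry, a student language model can be trained to match the teacher's soft token distributions on plain text; the loss form and temperature below are generic distillation defaults and an assumption on my part, not the authors' implementation.

```python
# Hypothetical sketch: generic soft-label knowledge distillation from a
# video-text teacher to a text-only student, using toy logits.
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits: np.ndarray, student_logits: np.ndarray,
                      temperature: float = 2.0) -> float:
    """KL(teacher || student) over the vocabulary, averaged over token positions."""
    p_teacher = softmax(teacher_logits, temperature)  # soft targets from the video-text teacher
    p_student = softmax(student_logits, temperature)  # predictions from the text-only student
    kl = (p_teacher * (np.log(p_teacher + 1e-9) - np.log(p_student + 1e-9))).sum(axis=-1)
    return float(kl.mean())

# Toy usage: logits over a 10-token vocabulary for a 5-token text sequence.
teacher_logits = np.random.randn(5, 10)
student_logits = np.random.randn(5, 10)
print(distillation_loss(teacher_logits, student_logits))
```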