VX2TEXT: End-to-End Learning of Video-Based Text Generation From
Multimodal Inputs
- URL: http://arxiv.org/abs/2101.12059v2
- Date: Fri, 29 Jan 2021 23:21:48 GMT
- Title: VX2TEXT: End-to-End Learning of Video-Based Text Generation From
Multimodal Inputs
- Authors: Xudong Lin, Gedas Bertasius, Jue Wang, Shih-Fu Chang, Devi Parikh,
Lorenzo Torresani
- Abstract summary: We present a framework for text generation from multimodal inputs consisting of video plus text, speech, or audio.
Experiments demonstrate that our approach based on a single architecture outperforms the state-of-the-art on three video-based text-generation tasks.
- Score: 103.99315770490163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present \textsc{Vx2Text}, a framework for text generation from multimodal
inputs consisting of video plus text, speech, or audio. In order to leverage
transformer networks, which have been shown to be effective at modeling
language, each modality is first converted into a set of language embeddings by
a learnable tokenizer. This allows our approach to perform multimodal fusion in
the language space, thus eliminating the need for ad-hoc cross-modal fusion
modules. To address the non-differentiability of tokenization on continuous
inputs (e.g., video or audio), we utilize a relaxation scheme that enables
end-to-end training. Furthermore, unlike prior encoder-only models, our network
includes an autoregressive decoder to generate open-ended text from the
multimodal embeddings fused by the language encoder. This renders our approach
fully generative and makes it directly applicable to different "video+$x$ to
text" problems without the need to design specialized network heads for each
task. The proposed framework is not only conceptually simple but also
remarkably effective: experiments demonstrate that our approach based on a
single architecture outperforms the state-of-the-art on three video-based
text-generation tasks -- captioning, question answering and audio-visual
scene-aware dialog.
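As a rough illustration of the idea, the sketch below shows one way the relaxed tokenization could look in PyTorch: continuous features from a modality backbone are scored against a vocabulary of category embeddings, and a Gumbel-softmax relaxation keeps the otherwise discrete token selection differentiable, so the tokenizer trains end-to-end with the text generator. The abstract only states that a relaxation scheme is used, so the Gumbel-softmax choice, class names, and dimensions here are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelaxedModalityTokenizer(nn.Module):
    """Sketch of a differentiable tokenizer mapping continuous modality
    features (video/audio) to soft "language" embeddings.
    Gumbel-softmax is an assumed relaxation; the paper only says that a
    relaxation scheme enables end-to-end training."""

    def __init__(self, feat_dim, vocab_size, embed_dim, tau=1.0):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, vocab_size)          # logits over a category vocabulary
        self.category_embed = nn.Embedding(vocab_size, embed_dim)
        self.tau = tau

    def forward(self, feats):                                  # feats: (B, T, feat_dim)
        logits = self.scorer(feats)                            # (B, T, vocab_size)
        # hard=True yields discrete one-hot selections in the forward pass
        # while gradients flow through the soft relaxation (straight-through).
        onehot = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        return onehot @ self.category_embed.weight             # (B, T, embed_dim)

# Usage sketch: fuse modality tokens with text embeddings in the language space,
# then feed the joint sequence to a pretrained encoder-decoder (e.g. a T5-style
# model) whose autoregressive decoder generates the caption / answer / response.
video_tok = RelaxedModalityTokenizer(feat_dim=2048, vocab_size=500, embed_dim=768)
video_tokens = video_tok(torch.randn(2, 8, 2048))              # (2, 8, 768)
text_tokens = torch.randn(2, 20, 768)                          # embedded question/dialog text
fused = torch.cat([text_tokens, video_tokens], dim=1)          # input to the language encoder
```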
Related papers
- VIMI: Grounding Video Generation through Multi-modal Instruction [89.90065445082442]
Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining.
We construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts.
We finetune the model from the first stage on three video generation tasks, incorporating multi-modal instructions.
arXiv Detail & Related papers (2024-07-08T18:12:49Z)
- Emu: Generative Pretraining in Multimodality [43.759593451544546]
Emu is a Transformer-based multimodal foundation model that can seamlessly generate images and text in a multimodal context.
Emu can serve as a generalist multimodal interface for both image-to-text and text-to-image tasks.
Emu demonstrates superb performance compared to state-of-the-art large multimodal models.
arXiv Detail & Related papers (2023-07-11T12:45:39Z)
- VioLA: Unified Codec Language Models for Speech Recognition, Synthesis, and Translation [91.39949385661379]
VioLA is a single auto-regressive Transformer decoder-only network that unifies various cross-modal tasks involving speech and text.
We first convert all the speech utterances to discrete tokens using an offline neural encoder.
We further integrate task IDs (TID) and language IDs (LID) into the proposed model to improve its ability to handle different languages and tasks.
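A toy sketch of the TID/LID idea follows; the token layout, ID values, and helper function below are hypothetical illustrations, not VioLA's actual vocabulary. The gist is simply reserving special tokens per task and language and prepending them to an utterance's discrete speech tokens before the decoder-only model sees them:

```python
# Hypothetical special-token IDs; the real vocabulary layout is not specified in the summary.
TASK_IDS = {"asr": 0, "tts": 1, "s2t_translation": 2}
LANG_IDS = {"en": 10, "zh": 11}
CONTENT_OFFSET = 100  # content tokens are shifted past the reserved special-token range

def build_decoder_input(speech_tokens, task, lang):
    """Prepend a task-ID and a language-ID token to an utterance's discrete
    speech tokens so a single decoder-only Transformer can condition on both."""
    return [TASK_IDS[task], LANG_IDS[lang]] + [t + CONTENT_OFFSET for t in speech_tokens]

# Speech already converted offline to discrete tokens, here for English ASR:
print(build_decoder_input([5, 17, 3], task="asr", lang="en"))  # [0, 10, 105, 117, 103]
```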
arXiv Detail & Related papers (2023-05-25T14:39:47Z)
- i-Code V2: An Autoregressive Generation Framework over Vision, Language, and Speech Data [101.52821120195975]
i-Code V2 is the first model capable of generating natural language from any combination of Vision, Language, and Speech data.
The system is pretrained end-to-end on a large collection of dual- and single-modality datasets.
arXiv Detail & Related papers (2023-05-21T01:25:44Z)
- Tagging before Alignment: Integrating Multi-Modal Tags for Video-Text Retrieval [23.418120617544545]
Vision-language alignment learning for video-text retrieval has attracted a lot of attention in recent years.
In this paper, we integrate multi-modal information in an explicit manner by tagging, and use the tags as the anchors for better video-text alignment.
To strengthen the interaction between video and text, we build a joint cross-modal encoder with the triplet input of [vision, tag, text] and perform two additional supervised tasks.
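For intuition, a minimal sketch of such a triplet-input joint encoder is given below; the class name, separator handling, and dimensions are illustrative assumptions rather than the paper's code. It simply concatenates the vision, tag, and text embeddings into one sequence and runs standard self-attention over it:

```python
import torch
import torch.nn as nn

class TripletJointEncoder(nn.Module):
    """Sketch of a joint cross-modal encoder over a [vision, tag, text] triplet."""

    def __init__(self, dim=512, heads=8, layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.sep = nn.Parameter(torch.randn(1, 1, dim))        # learned separator token

    def forward(self, vision_emb, tag_emb, text_emb):          # each: (B, L_i, dim)
        sep = self.sep.expand(vision_emb.size(0), 1, -1)
        joint = torch.cat([vision_emb, sep, tag_emb, sep, text_emb], dim=1)
        return self.encoder(joint)                             # contextualized triplet features

# Example: 4 video frames, 3 tags, 12 text tokens per sample.
enc = TripletJointEncoder()
out = enc(torch.randn(2, 4, 512), torch.randn(2, 3, 512), torch.randn(2, 12, 512))
```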
arXiv Detail & Related papers (2023-01-30T03:53:19Z)
- Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval [70.30052749168013]
Multi-channel video-language retrieval requires models to understand information from different channels.
Contrastive multimodal models have been shown to be highly effective at aligning entities in images/videos and text.
There is no clear way to quickly adapt these two lines of work to multi-channel video-language retrieval with limited data and resources.
arXiv Detail & Related papers (2022-06-05T01:43:52Z)
- All in One: Exploring Unified Video-Language Pre-training [44.22059872694995]
We introduce an end-to-end video-language model, namely the all-in-one Transformer, that embeds raw video and textual signals into joint representations.
The code and pretrained model have been released in https://github.com/showlab/all-in-one.
arXiv Detail & Related papers (2022-03-14T17:06:30Z)
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts [111.23364631136339]
Video-and-language pre-training has shown promising improvements on various downstream tasks.
We propose Align and Prompt: an efficient and effective video-and-language pre-training framework with better cross-modal alignment.
Our code and pre-trained models will be released.
arXiv Detail & Related papers (2021-12-17T15:55:53Z)