VL-GPT: A Generative Pre-trained Transformer for Vision and Language
Understanding and Generation
- URL: http://arxiv.org/abs/2312.09251v1
- Date: Thu, 14 Dec 2023 18:59:43 GMT
- Title: VL-GPT: A Generative Pre-trained Transformer for Vision and Language
Understanding and Generation
- Authors: Jinguo Zhu, Xiaohan Ding, Yixiao Ge, Yuying Ge, Sijie Zhao, Hengshuang
Zhao, Xiaohua Wang, Ying Shan
- Abstract summary: We introduce Vision-Language Generative Pre-trained Transformer (VL-GPT), a transformer model proficient at concurrently perceiving and generating visual and linguistic data.
VL-GPT achieves a unified pre-training approach for both image and text modalities by employing a straightforward auto-regressive objective.
- Score: 79.02357561313785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we introduce Vision-Language Generative Pre-trained Transformer
(VL-GPT), a transformer model proficient at concurrently perceiving and
generating visual and linguistic data. VL-GPT achieves a unified pre-training
approach for both image and text modalities by employing a straightforward
auto-regressive objective, thereby enabling the model to process image and text
as seamlessly as a language model processes text. To accomplish this, we
initially propose a novel image tokenizer-detokenizer framework for visual
data, specifically designed to transform raw images into a sequence of
continuous embeddings and reconstruct them accordingly. In combination with the
existing text tokenizer and detokenizer, this framework allows for the encoding
of interleaved image-text data into a multimodal sequence, which can
subsequently be fed into the transformer model. Consequently, VL-GPT can
perform large-scale pre-training on multimodal corpora utilizing a unified
auto-regressive objective (i.e., next-token prediction). Upon completion of
pre-training, VL-GPT exhibits remarkable zero-shot and few-shot performance
across a diverse range of vision and language understanding and generation
tasks, including image captioning, visual question answering, text-to-image
generation, and more. Additionally, the pre-trained model retains in-context
learning capabilities when provided with multimodal prompts. We further conduct
instruction tuning on our VL-GPT, highlighting its exceptional potential for
multimodal assistance. The source code and model weights shall be released.
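To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the unified auto-regressive objective: text tokens and continuous image embeddings are interleaved into one sequence, a causal transformer processes it, and each position predicts the next element, with cross-entropy for text tokens and a regression loss for the continuous image embeddings. All module names, dimensions, the sequence layout, and the choice of an MSE loss are illustrative assumptions, not details of the released VL-GPT implementation.

```python
# Minimal sketch (PyTorch) of a unified next-token objective over an
# interleaved image-text sequence. Names, shapes, and losses are assumptions
# for illustration only; they are not taken from the VL-GPT code release.
import torch
import torch.nn as nn

D = 512  # shared embedding width (assumed)

class ImageTokenizer(nn.Module):
    """Stand-in for the image tokenizer: raw image -> continuous embedding sequence."""
    def __init__(self, num_tokens=16, dim=D):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Linear(3 * 32 * 32, num_tokens * dim)  # toy projection

    def forward(self, images):                        # images: (B, 3, 32, 32)
        flat = images.flatten(1)
        return self.proj(flat).view(images.size(0), self.num_tokens, -1)

class VLGPTBackbone(nn.Module):
    """Causal transformer over the interleaved multimodal sequence."""
    def __init__(self, vocab=1000, dim=D):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.text_head = nn.Linear(dim, vocab)   # predicts next text token
        self.image_head = nn.Linear(dim, dim)    # regresses next image embedding

    def forward(self, text_ids, image_embeds):
        # One simple interleaving: all text tokens followed by one image.
        seq = torch.cat([self.text_emb(text_ids), image_embeds], dim=1)
        T = seq.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.encoder(seq, mask=causal)

# One toy training step: unified next-token prediction over the multimodal sequence.
tokenizer, model = ImageTokenizer(), VLGPTBackbone()
text_ids = torch.randint(0, 1000, (2, 8))        # caption tokens (batch of 2)
images = torch.randn(2, 3, 32, 32)               # raw images
image_embeds = tokenizer(images)                 # (2, 16, D) continuous embeddings
hidden = model(text_ids, image_embeds)           # (2, 24, D)

n_text = text_ids.size(1)
# Text positions: cross-entropy against the next text token.
text_logits = model.text_head(hidden[:, : n_text - 1])
loss_text = nn.functional.cross_entropy(
    text_logits.reshape(-1, text_logits.size(-1)), text_ids[:, 1:].reshape(-1))
# Image positions: regression toward the next continuous embedding (assumed MSE;
# the target is treated as fixed for this sketch).
image_pred = model.image_head(hidden[:, n_text - 1 : -1])
loss_image = nn.functional.mse_loss(image_pred, image_embeds.detach())
(loss_text + loss_image).backward()
# At inference, predicted image embeddings would be passed to the detokenizer
# to reconstruct pixels; that component is omitted here.
```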
Related papers
- Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining [48.98105914356609]
Lumina-mGPT is a family of multimodal autoregressive models capable of various vision and language tasks.
We introduce Ominiponent Supervised Finetuning, transforming Lumina-mGPT into a foundation model that seamlessly achieves omnipotent task unification.
arXiv Detail & Related papers (2024-08-05T17:46:53Z)
- MAGVLT: Masked Generative Vision-and-Language Transformer [15.796199345773879]
We explore a unified generative vision-and-language model that can produce both images and text sequences.
We propose a generative VL transformer based on non-autoregressive mask prediction, named MAGVLT, and compare it with an autoregressive generative VL transformer (ARGVLT); a generic sketch of the masked-prediction objective appears after this list.
For rigorous training of our MAGVLT with image-text pairs from scratch, we combine the image-to-text, text-to-image, and joint image-and-text mask prediction tasks.
arXiv Detail & Related papers (2023-03-21T21:49:39Z)
- OmniVL: One Foundation Model for Image-Language and Video-Language Tasks [117.57580168859512]
We present OmniVL, a new foundation model to support both image-language and video-language tasks using one universal architecture.
We demonstrate, for the first time, that such a paradigm benefits both image and video tasks, as opposed to the conventional one-directional transfer.
We introduce a novel unified vision-language contrastive (UniVLC) loss to leverage image-text, video-text, image-label (e.g., image classification), and video-label (e.g., video action recognition) data together.
arXiv Detail & Related papers (2022-09-15T17:59:59Z)
- VL-BEiT: Generative Vision-Language Pretraining [107.25298505511184]
We introduce a vision-language foundation model called VL-BEiT, which is a bidirectional multimodal Transformer learned by generative pretraining.
Specifically, we perform masked vision-language modeling on image-text pairs, masked language modeling on texts, and masked image modeling on images.
arXiv Detail & Related papers (2022-06-02T16:14:19Z)
- Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation [79.72299298976525]
We propose to augment a vision-language pre-training model with a textual pre-trained language model (PLM) via vision-language knowledge distillation (VLKD).
Experiments show that the resulting model has strong zero-shot performance on multimodal generation tasks, such as open-ended visual question answering and image captioning.
The original textual language understanding and generation ability of the PLM is maintained after VLKD, which makes our model versatile for both multimodal and unimodal tasks.
arXiv Detail & Related papers (2022-03-12T09:33:37Z)
- Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment [66.77841319057299]
We propose a novel unsupervised Vision-and-Language pre-training curriculum for non-parallel texts and images.
We first construct a weakly aligned image-text corpus via a retrieval-based approach, then apply a set of multi-granular alignment pre-training tasks.
A comprehensive ablation study shows each granularity is helpful to learn a stronger pre-trained model.
arXiv Detail & Related papers (2022-03-01T05:34:01Z)
- E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning [31.622393984150314]
We propose the first end-to-end vision-language pre-trained model for both V+L understanding and generation.
We build a unified Transformer framework to jointly learn visual representation and semantic alignments between image and text.
arXiv Detail & Related papers (2021-06-03T12:50:26Z)
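The MAGVLT entry above contrasts non-autoregressive mask prediction with the autoregressive objective used by VL-GPT. The following is a minimal, generic PyTorch sketch of such a masked-prediction training step over a unified (discretized) image-and-text token sequence; the model, vocabulary size, 15% mask ratio, and mask token id are illustrative assumptions and are not taken from the MAGVLT paper.

```python
# Generic masked token prediction in the spirit of a non-autoregressive
# vision-and-language objective. All names, shapes, and the mask ratio are
# illustrative assumptions, not details from MAGVLT.
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 1000, 256, 0

class MaskedVLTransformer(nn.Module):
    """Bidirectional encoder that predicts the identities of masked positions."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        return self.head(self.encoder(self.emb(ids)))  # no causal mask: bidirectional

# One toy training step over a unified image+text token sequence.
model = MaskedVLTransformer()
tokens = torch.randint(1, VOCAB, (2, 32))              # discretized image + text tokens
mask = torch.rand_like(tokens, dtype=torch.float) < 0.15
inputs = tokens.masked_fill(mask, MASK_ID)             # corrupt a random subset
logits = model(inputs)
# Loss is computed only on the masked positions, unlike next-token prediction.
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```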