Bridging Different Language Models and Generative Vision Models for
Text-to-Image Generation
- URL: http://arxiv.org/abs/2403.07860v1
- Date: Tue, 12 Mar 2024 17:50:11 GMT
- Title: Bridging Different Language Models and Generative Vision Models for
Text-to-Image Generation
- Authors: Shihao Zhao, Shaozhe Hao, Bojia Zi, Huaizhe Xu, Kwan-Yee K. Wong
- Abstract summary: We propose LaVi-Bridge, a pipeline that enables the integration of diverse pre-trained language models and generative vision models for text-to-image generation.
Our pipeline is compatible with various language models and generative vision models, accommodating different structures.
- Score: 12.024554708901514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-to-image generation has made significant advancements with the
introduction of text-to-image diffusion models. These models typically consist
of a language model that interprets user prompts and a vision model that
generates corresponding images. As language and vision models continue to
progress in their respective domains, there is great potential in exploring
the replacement of components in text-to-image diffusion models with more
advanced counterparts. A broader research objective would therefore be to
investigate the integration of any two unrelated language and generative vision
models for text-to-image generation. In this paper, we explore this objective
and propose LaVi-Bridge, a pipeline that enables the integration of diverse
pre-trained language models and generative vision models for text-to-image
generation. By leveraging LoRA and adapters, LaVi-Bridge offers a flexible and
plug-and-play approach without requiring modifications to the original weights
of the language and vision models. Our pipeline is compatible with various
language models and generative vision models, accommodating different
structures. Within this framework, we demonstrate that incorporating superior
modules, such as more advanced language models or generative vision models,
results in notable improvements in capabilities like text alignment or image
quality. Extensive evaluations have been conducted to verify the effectiveness
of LaVi-Bridge. Code is available at
https://github.com/ShihaoZhaoZSH/LaVi-Bridge.
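The abstract describes the bridging mechanism only at a high level: the pre-trained language model and generative vision model stay frozen, while LoRA layers and a small adapter are trained to connect them. Below is a minimal PyTorch sketch of that idea; the class names, dimensions, and wiring are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
# A minimal sketch (not the authors' code) of the bridging idea from the abstract:
# keep the pre-trained language model and vision model frozen, and train only
# (i) LoRA updates injected into attention/linear layers and (ii) a small adapter
# that maps language-model hidden states to the text-conditioning space expected
# by the vision model. All module names and dimensions here are assumptions.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + s * B(A x)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # original weights stay untouched
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # starts as a no-op on top of the base layer
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


class TextFeatureAdapter(nn.Module):
    """Projects language-model hidden states to the vision model's conditioning width."""

    def __init__(self, lm_dim: int, vision_cond_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(lm_dim, vision_cond_dim),
            nn.GELU(),
            nn.Linear(vision_cond_dim, vision_cond_dim),
        )

    def forward(self, lm_hidden: torch.Tensor) -> torch.Tensor:
        # lm_hidden: (batch, seq_len, lm_dim) -> (batch, seq_len, vision_cond_dim)
        return self.net(lm_hidden)


# Example wiring with made-up dimensions (e.g., a 1024-dim language model feeding a
# vision model whose cross-attention expects 768-dim conditioning):
adapter = TextFeatureAdapter(lm_dim=1024, vision_cond_dim=768)
lora_layer = LoRALinear(nn.Linear(768, 768), rank=8)
prompt_features = torch.randn(2, 77, 1024)       # stand-in for frozen LM outputs
conditioning = adapter(prompt_features)           # would feed the vision model's cross-attention
print(conditioning.shape, lora_layer(conditioning).shape)
```

Because only the LoRA matrices and the adapter carry gradients in this sketch, a different language or vision model could be swapped in by retraining just these lightweight components, which is the plug-and-play property the abstract emphasizes.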
Related papers
- Elucidating the design space of language models for image generation [13.96798987912677]
We show that image tokens exhibit greater randomness compared to text tokens, which presents challenges when training with token prediction.
Our analysis also reveals that while all models successfully grasp the importance of local information in image generation, smaller models struggle to capture the global context.
Our work is the first to analyze the optimization behavior of language models in vision generation, and we believe it can inspire more effective designs when applying LMs to other domains.
arXiv Detail & Related papers (2024-10-21T17:57:04Z)
- Multimodal Large Language Model is a Human-Aligned Annotator for Text-to-Image Generation [87.50120181861362]
VisionPrefer is a high-quality and fine-grained preference dataset that captures multiple preference aspects.
We train a reward model, VP-Score, over VisionPrefer to guide the training of text-to-image generative models; its preference prediction accuracy is comparable to that of human annotators.
arXiv Detail & Related papers (2024-04-23T14:53:15Z)
- VLIS: Unimodal Language Models Guide Multimodal Language Generation [23.094728230459125]
We introduce Visual-Language models as Importance Sampling weights (VLIS).
It combines the visual conditioning capability of vision-language models with the language understanding of unimodal text-only language models without further training.
VLIS improves vision-language models on diverse tasks, including commonsense understanding and complex text generation.
arXiv Detail & Related papers (2023-10-15T07:58:52Z)
- Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization [52.935150075484074]
We introduce a well-designed visual tokenizer that translates non-linguistic images into sequences of discrete tokens, akin to a foreign language.
The resulting visual tokens encompass high-level semantics comparable to words and support sequence lengths that vary dynamically with the image.
This unification enables LaVIT to serve as a generalist interface that understands and generates multi-modal content simultaneously.
arXiv Detail & Related papers (2023-09-09T03:01:38Z)
- RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model [93.8067369210696]
Text-to-image generation (TTI) refers to the use of models that process text input and generate high-fidelity images from text descriptions.
Diffusion models are one prominent type of generative model, producing images through the systematic introduction and removal of noise over repeated steps.
In the era of large models, scaling up model size and integrating with large language models have further improved the performance of TTI models.
arXiv Detail & Related papers (2023-09-02T03:27:20Z)
- Generating Images with Multimodal Language Models [78.6660334861137]
We propose a method to fuse frozen text-only large language models with pre-trained image encoder and decoder models.
Our model demonstrates a wide suite of multimodal capabilities: image retrieval, novel image generation, and multimodal dialogue.
arXiv Detail & Related papers (2023-05-26T19:22:03Z)
- On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization [89.94078728495423]
We show that recent advances in each modality, namely CLIP image representations and the scaling of language models, do not consistently improve multimodal self-rationalization on tasks with multimodal inputs.
Our findings call for a backbone modelling approach that can be built on to advance text generation from images and text beyond image captioning.
arXiv Detail & Related papers (2022-05-24T00:52:40Z)
- Visual Conceptual Blending with Large-scale Language and Vision Models [54.251383721475655]
We first generate a single-sentence description of the blend of the two concepts using a language model.
We then generate a visual depiction of the blend using a text-based image generation model.
arXiv Detail & Related papers (2021-06-27T02:48:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.