Generating Images with Multimodal Language Models
- URL: http://arxiv.org/abs/2305.17216v3
- Date: Fri, 13 Oct 2023 15:35:42 GMT
- Title: Generating Images with Multimodal Language Models
- Authors: Jing Yu Koh, Daniel Fried, Ruslan Salakhutdinov
- Abstract summary: We propose a method to fuse frozen text-only large language models with pre-trained image encoder and decoder models.
Our model demonstrates a wide suite of multimodal capabilities: image retrieval, novel image generation, and multimodal dialogue.
- Score: 78.6660334861137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method to fuse frozen text-only large language models (LLMs)
with pre-trained image encoder and decoder models, by mapping between their
embedding spaces. Our model demonstrates a wide suite of multimodal
capabilities: image retrieval, novel image generation, and multimodal dialogue.
Ours is the first approach capable of conditioning on arbitrarily interleaved
image and text inputs to generate coherent image (and text) outputs. To achieve
strong performance on image generation, we propose an efficient mapping network
to ground the LLM to an off-the-shelf text-to-image generation model. This
mapping network translates hidden representations of text into the embedding
space of the visual models, enabling us to leverage the strong text
representations of the LLM for visual outputs. Our approach outperforms
baseline generation models on tasks with longer and more complex language. In
addition to novel image generation, our model is also capable of image
retrieval from a prespecified dataset, and decides whether to retrieve or
generate at inference time. This is done with a learnt decision module which
conditions on the hidden representations of the LLM. Our model exhibits a wider
range of capabilities compared to prior multimodal language models. It can
process image-and-text inputs, and produce retrieved images, generated images,
and generated text -- outperforming non-LLM based generation models across
several text-to-image tasks that measure context dependence.
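To make the two learned components concrete, here is a minimal sketch (not the authors' released code) of what the abstract describes: a mapping network that translates frozen-LLM hidden states into the embedding space of an off-the-shelf text-to-image model, and a learned decision head that chooses between retrieval and generation at inference time. All dimensions, layer choices, and names are illustrative assumptions.

```python
# Minimal sketch of the paper's two trainable pieces, under assumed
# dimensions (4096-dim LLM states, 768-dim visual embeddings, 77 query
# tokens). The LLM and the image models themselves stay frozen.
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Translates LLM hidden states into the visual models' embedding space."""
    def __init__(self, llm_dim=4096, visual_dim=768, num_query_tokens=77):
        super().__init__()
        self.num_query_tokens = num_query_tokens
        self.proj = nn.Sequential(
            nn.Linear(llm_dim, visual_dim),
            nn.GELU(),
            nn.Linear(visual_dim, visual_dim * num_query_tokens),
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, llm_dim), e.g. the LLM state at a special image
        # token. Returns (batch, num_query_tokens, visual_dim), shaped like
        # the conditioning a text-to-image model expects.
        return self.proj(hidden).view(hidden.shape[0], self.num_query_tokens, -1)

class RetrieveOrGenerateHead(nn.Module):
    """Learned decision module: retrieve from a dataset or generate anew."""
    def __init__(self, llm_dim=4096):
        super().__init__()
        self.classifier = nn.Linear(llm_dim, 2)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.classifier(hidden)  # logits over {retrieve, generate}

hidden = torch.randn(2, 4096)                 # stand-in for frozen-LLM states
cond = MappingNetwork()(hidden)               # (2, 77, 768) visual conditioning
choice = RetrieveOrGenerateHead()(hidden).argmax(-1)  # 0=retrieve, 1=generate
```

In this sketch only the mapping network and the decision head are trained; the LLM and the image encoder/decoder remain frozen, which is what keeps the fusion efficient.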
Related papers
- AdaptVision: Dynamic Input Scaling in MLLMs for Versatile Scene Understanding [96.01726275876548]
We present AdaptVision, a multimodal large language model specifically designed to dynamically process input images at varying resolutions.
We devise a dynamic image partitioning module that adjusts the number of visual tokens according to the size and aspect ratio of images.
Our model is capable of processing images with resolutions up to $1008 \times 1008$.
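As a rough illustration of what such dynamic partitioning could look like, the sketch below derives a tile grid (and hence a visual-token budget) from the image's size and aspect ratio. The 336-pixel tile, the 1008-pixel cap, and the per-tile token count are assumptions chosen only to match the resolution quoted above, not values from the paper.

```python
# Hypothetical dynamic image partitioning: pick a tile grid from the
# image's size and aspect ratio, capping each side at max_side pixels
# (1008 = 3 * 336 allows at most a 3x3 grid under these assumptions).
import math

def partition_grid(width, height, tile=336, max_side=1008):
    cols = min(max(1, math.ceil(width / tile)), max_side // tile)
    rows = min(max(1, math.ceil(height / tile)), max_side // tile)
    return rows, cols

def num_visual_tokens(width, height, tokens_per_tile=576):
    rows, cols = partition_grid(width, height)
    return rows * cols * tokens_per_tile

print(partition_grid(1008, 336))    # (1, 3): a wide image gets one row of tiles
print(num_visual_tokens(672, 672))  # 2x2 grid -> 2304 tokens under assumptions
```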
arXiv Detail & Related papers (2024-08-30T03:16:49Z)
- An End-to-End Model for Photo-Sharing Multi-modal Dialogue Generation [43.139415423751615]
Photo-sharing multi-modal dialogue generation requires a dialogue agent not only to generate text responses but also to share photos at the proper moment.
A pipeline model integrates an image caption model, a text generation model, and an image generation model to handle this complex multi-modal task.
We propose the first end-to-end model for photo-sharing multi-modal dialogue generation, which integrates an image perceptron and an image generator with a large language model.
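The pipeline baseline is easy to see in code. Below is a schematic sketch of that baseline only, with placeholder functions standing in for the three separate models; the `<share>` tag convention and every function here are illustrative assumptions, not the paper's interfaces.

```python
# Schematic pipeline for photo-sharing dialogue: three separate models
# (captioner, text generator, image generator) wired together by hand.
from typing import Optional

def caption_image(image) -> str:
    return "a golden retriever on a beach"       # placeholder captioning model

def generate_reply(history: str) -> str:
    # Placeholder text model; the <share> tag is a hypothetical marker for
    # "share a photo now", followed by a description of the photo.
    return "Sure, here you go! <share> a golden retriever on a beach"

def generate_image(prompt: str) -> str:
    return f"<image generated from: {prompt!r}>"  # placeholder image model

def dialogue_turn(history: str, incoming_image=None) -> tuple[str, Optional[str]]:
    if incoming_image is not None:
        # Fold the partner's photo into the text-only history via a caption.
        history += "\n[photo] " + caption_image(incoming_image)
    reply, photo = generate_reply(history), None
    if "<share>" in reply:
        text, desc = reply.split("<share>", 1)
        reply, photo = text.strip(), generate_image(desc.strip())
    return reply, photo

print(dialogue_turn("User: can I see a picture of your dog?"))
```

The end-to-end model proposed in the entry replaces these hand-wired stages with a single LLM coupled directly to an image perceptron and an image generator.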
arXiv Detail & Related papers (2024-08-16T10:33:19Z)
- An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation [21.154973705998945]
Existing methods leverage the text encoder of the CLIP model to represent input prompts.
Large Language Models (LLMs) support multilingual input, accommodate longer context, and provide superior text representations.
We propose a lightweight adapter that enables fast training of the text-to-image model using the textual representations from LLMs.
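A minimal sketch of such a lightweight adapter, assuming a frozen LLM with 4096-dim token states and a diffusion model that expects 768-dim conditioning (where CLIP text features would normally go); the two-layer MLP and all dimensions are assumptions, not the paper's design.

```python
# Lightweight adapter sketch: only these few layers train, while the LLM
# and the text-to-image diffusion model stay frozen.
import torch
import torch.nn as nn

class LLMToT2IAdapter(nn.Module):
    def __init__(self, llm_dim=4096, t2i_dim=768, hidden=1024):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.LayerNorm(llm_dim),
            nn.Linear(llm_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, t2i_dim),
        )

    def forward(self, llm_tokens: torch.Tensor) -> torch.Tensor:
        # llm_tokens: (batch, seq, llm_dim) per-token LLM features.
        # Returns (batch, seq, t2i_dim) conditioning for the diffusion model.
        return self.adapter(llm_tokens)

cond = LLMToT2IAdapter()(torch.randn(1, 77, 4096))  # -> (1, 77, 768)
```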
arXiv Detail & Related papers (2024-05-21T16:35:02Z)
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer [106.79844459065828]
This paper presents MM-Interleaved, an end-to-end generative model for interleaved image-text data.
It introduces a multi-scale and multi-image feature synchronizer module, allowing direct access to fine-grained image features in the previous context.
Experiments demonstrate the versatility of MM-Interleaved in recognizing visual details following multi-modal instructions and generating consistent images following both textual and visual conditions.
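A speculative sketch of the general idea, using plain multi-head cross-attention as a stand-in for the paper's synchronizer module: decoder tokens attend over fine-grained features collected from every image, at several scales, in the preceding context. The shapes, residual wiring, and attention choice are all assumptions.

```python
# Stand-in feature synchronizer: inject fine-grained, multi-scale image
# detail into decoder states via cross-attention plus a residual.
import torch
import torch.nn as nn

class FeatureSynchronizer(nn.Module):
    def __init__(self, dim=1024, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens, image_feats_per_scale):
        # tokens: (batch, seq, dim) decoder states.
        # image_feats_per_scale: list of (batch, n_i, dim) feature maps, one
        # per (image, scale) pair from the previous context.
        memory = torch.cat(image_feats_per_scale, dim=1)
        out, _ = self.attn(self.norm(tokens), memory, memory)
        return tokens + out

sync = FeatureSynchronizer()
tokens = torch.randn(1, 16, 1024)
feats = [torch.randn(1, 256, 1024), torch.randn(1, 64, 1024)]  # two scales
print(sync(tokens, feats).shape)  # torch.Size([1, 16, 1024])
```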
arXiv Detail & Related papers (2024-01-18T18:50:16Z)
- LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models [62.75006608940132]
This work proposes to enhance prompt understanding capabilities in text-to-image diffusion models.
Our method leverages a pretrained large language model for grounded generation in a novel two-stage process.
Our method significantly outperforms the base diffusion model and several strong baselines in accurately generating images.
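The two-stage process can be sketched at the interface level: stage one asks an LLM for an explicit scene layout, and stage two conditions image generation on that layout. Both model calls are stubbed below, and the layout schema is an assumption rather than the paper's exact format.

```python
# Two-stage LLM-grounded generation, stubbed: an LLM turns the prompt into
# per-object boxes, then a layout-conditioned generator renders the scene.
import json

def llm_layout(prompt: str) -> list[dict]:
    # Stage 1 (stub): a real LLM call would return normalized [x, y, w, h]
    # boxes, one per object phrase parsed from the prompt.
    return [
        {"phrase": "a red cube", "box": [0.10, 0.50, 0.30, 0.30]},
        {"phrase": "a blue ball", "box": [0.60, 0.50, 0.25, 0.25]},
    ]

def generate_from_layout(prompt: str, layout: list[dict]) -> str:
    # Stage 2 (stub): a layout-grounded diffusion model would condition on
    # the full prompt plus the per-object boxes.
    return f"<image: {prompt} | layout={json.dumps(layout)}>"

prompt = "a red cube to the left of a blue ball"
print(generate_from_layout(prompt, llm_layout(prompt)))
```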
arXiv Detail & Related papers (2023-05-23T03:59:06Z)
- Grounding Language Models to Images for Multimodal Inputs and Outputs [89.30027812161686]
We propose an efficient method to ground pretrained text-only language models to the visual domain.
We process arbitrarily interleaved image-and-text data, and generate text interleaved with retrieved images.
arXiv Detail & Related papers (2023-01-31T18:33:44Z)
- M-VADER: A Model for Diffusion with Multimodal Context [0.786460153386845]
We show how M-VADER enables the generation of images specified using combinations of image and text.
We introduce an embedding model closely related to a vision-language model.
arXiv Detail & Related papers (2022-12-06T12:45:21Z)
- On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization [89.94078728495423]
We show that recent advances in each modality (CLIP image representations and the scaling of language models) do not consistently improve multimodal self-rationalization on tasks with multimodal inputs.
Our findings call for a backbone modelling approach that can be built on to advance text generation from images and text beyond image captioning.
arXiv Detail & Related papers (2022-05-24T00:52:40Z)