Visual Programming for Text-to-Image Generation and Evaluation
- URL: http://arxiv.org/abs/2305.15328v2
- Date: Fri, 27 Oct 2023 01:44:27 GMT
- Title: Visual Programming for Text-to-Image Generation and Evaluation
- Authors: Jaemin Cho, Abhay Zala, Mohit Bansal
- Abstract summary: We propose two novel interpretable/explainable visual programming frameworks for text-to-image (T2I) generation and evaluation.
First, we introduce VPGen, an interpretable step-by-step T2I generation framework that decomposes T2I generation into three steps: object/count generation, layout generation, and image generation.
Second, we introduce VPEval, an interpretable and explainable evaluation framework for T2I generation based on visual programming.
- Score: 73.12069620086311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As large language models have demonstrated impressive performance in many
domains, recent works have adopted language models (LMs) as controllers of
visual modules for vision-and-language tasks. While existing work focuses on
equipping LMs with visual understanding, we propose two novel
interpretable/explainable visual programming frameworks for text-to-image (T2I)
generation and evaluation. First, we introduce VPGen, an interpretable
step-by-step T2I generation framework that decomposes T2I generation into three
steps: object/count generation, layout generation, and image generation. We
employ an LM to handle the first two steps (object/count generation and layout
generation), by finetuning it on text-layout pairs. Our step-by-step T2I
generation framework provides stronger spatial control than end-to-end models,
the dominant approach for this task. Furthermore, we leverage the world
knowledge of pretrained LMs, overcoming the limitation of previous
layout-guided T2I works that can only handle predefined object classes. We
demonstrate that our VPGen achieves better control over object counts, spatial
relations, and scales than state-of-the-art T2I generation models.
Second, we introduce VPEval, an interpretable and explainable evaluation
framework for T2I generation based on visual programming. Unlike previous T2I
evaluations with a single scoring model that is accurate in some skills but
unreliable in others, VPEval produces evaluation programs that invoke a set of
visual modules that are experts in different skills, and also provides
visual+textual explanations of the evaluation results. Our analysis shows that
VPEval provides a more human-correlated evaluation for skill-specific and
open-ended prompts than widely used single model-based evaluation. We hope that
our work encourages future progress on interpretable/explainable generation and
evaluation for T2I models.
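To make the decomposition in the abstract concrete, the following is a minimal sketch of a VPGen-style pipeline. Everything in it is an assumption for illustration: the layout data structure, the hard-coded example output, and the helper names (generate_objects_and_layout, render_from_layout) are not the authors' implementation.

```python
# Minimal sketch of the three-step decomposition described in the abstract.
# Model names, the layout representation, and all helpers are assumptions
# for illustration; they are not VPGen's actual implementation.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LayoutObject:
    name: str                               # e.g. "dog"
    box: Tuple[float, float, float, float]  # normalized (x1, y1, x2, y2)

def generate_objects_and_layout(prompt: str) -> List[LayoutObject]:
    """Steps 1-2: a language model finetuned on text-layout pairs proposes
    objects, counts, and bounding boxes (hypothetical helper; a real system
    would call the LM and parse its text output)."""
    # Hard-coded example output for a prompt like "two dogs on a beach".
    return [
        LayoutObject("dog", (0.10, 0.55, 0.40, 0.95)),
        LayoutObject("dog", (0.55, 0.55, 0.85, 0.95)),
    ]

def render_from_layout(prompt: str, layout: List[LayoutObject]) -> dict:
    """Step 3: a layout-conditioned image generator would render the final
    image here; we return a placeholder instead of calling a real model."""
    return {"prompt": prompt, "layout": [(o.name, o.box) for o in layout]}

def vpgen_style_pipeline(prompt: str) -> dict:
    layout = generate_objects_and_layout(prompt)  # interpretable intermediate
    return render_from_layout(prompt, layout)
```

Likewise, a VPEval-style evaluation program can be pictured as a small program that calls skill-specific visual modules and aggregates their verdicts. The module registry, the (module, argument) program format, and the averaging rule below are hypothetical; the paper's actual modules and interfaces may differ.

```python
# Hypothetical illustration of an evaluation program that routes checks to
# skill-specific visual modules and returns a score plus textual explanations.

from typing import Callable, Dict, List, Tuple

MODULES: Dict[str, Callable] = {}

def register(name: str):
    """Register a visual expert module (e.g. detection, counting, OCR, VQA)."""
    def wrap(fn):
        MODULES[name] = fn
        return fn
    return wrap

@register("count")
def count_module(image, arg) -> Tuple[bool, str]:
    obj_name, expected = arg      # e.g. ("dog", 2)
    detected = 2                  # placeholder: run an object detector here
    return detected == expected, f"found {detected} {obj_name}(s), expected {expected}"

def run_evaluation_program(image, program: List[Tuple[str, object]]):
    """Execute each (module_name, argument) step; the score is the fraction of
    passed checks, returned alongside the per-check explanations."""
    results = [MODULES[name](image, arg) for name, arg in program]
    score = sum(ok for ok, _ in results) / len(results)
    return score, [msg for _, msg in results]
```

For instance, run_evaluation_program(image, [("count", ("dog", 2))]) returns a score together with a textual explanation for each check, which is the interpretable/explainable aspect the abstract emphasizes.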
Related papers
- VLEU: a Method for Automatic Evaluation for Generalizability of Text-to-Image Models [18.259733507395634]
We introduce a new metric called Visual Language Evaluation Understudy (VLEU).
VLEU quantifies a model's generalizability by computing the Kullback-Leibler divergence between the marginal distribution of the visual text and the conditional distribution of the images generated by the model.
Our experiments demonstrate the effectiveness of VLEU in evaluating the generalization capability of various T2I models (a generic sketch of the KL computation appears after this list).
arXiv Detail & Related papers (2024-09-23T04:50:36Z)
- SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data [73.23388142296535]
SELMA improves the faithfulness of T2I models by fine-tuning models on automatically generated, multi-skill image-text datasets.
We show that SELMA significantly improves the semantic alignment and text faithfulness of state-of-the-art T2I diffusion models on multiple benchmarks.
We also show that fine-tuning with image-text pairs auto-collected via SELMA shows comparable performance to fine-tuning with ground truth data.
arXiv Detail & Related papers (2024-03-11T17:35:33Z)
- L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models [102.00201523306986]
We present L2CEval, a systematic evaluation of the language-to-code generation capabilities of large language models (LLMs).
We analyze the factors that potentially affect their performance, such as model size, pretraining data, instruction tuning, and different prompting methods.
In addition to assessing model performance, we measure confidence calibration for the models and conduct human evaluations of the output programs.
arXiv Detail & Related papers (2023-09-29T17:57:00Z)
- DirecT2V: Large Language Models are Frame-Level Directors for Zero-Shot Text-to-Video Generation [37.25815760042241]
This paper introduces a new framework, dubbed DirecT2V, for zero-shot text-to-video (T2V) generation.
We equip a diffusion model with a novel value mapping method and dual-softmax filtering, which do not require any additional training.
The experimental results validate the effectiveness of our framework in producing visually coherent and storyful videos.
arXiv Detail & Related papers (2023-05-23T17:57:09Z)
- What Makes Data-to-Text Generation Hard for Pretrained Language Models? [17.07349898176898]
Expressing natural language descriptions of structured facts or relations -- data-to-text generation (D2T) -- increases the accessibility of structured knowledge repositories.
Previous work shows that pre-trained language models (PLMs) perform remarkably well on this task after fine-tuning on a significant amount of task-specific training data.
We conduct an empirical study of both fine-tuned and auto-regressive PLMs on the DART multi-domain D2T dataset.
arXiv Detail & Related papers (2022-05-23T17:58:39Z)
- ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models [102.63817106363597]
We build ELEVATER, the first benchmark to compare and evaluate pre-trained language-augmented visual models.
It consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge.
We will release our toolkit and evaluation platforms for the research community.
arXiv Detail & Related papers (2022-04-19T10:23:42Z)
- DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training [37.15272352614968]
We propose DU-VLG, a framework which unifies vision-and-language generation as sequence generation problems.
DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks.
Results show that DU-VLG yields better performance than variants trained with uni-directional generation objectives or the variant without the commitment loss.
arXiv Detail & Related papers (2022-03-17T03:18:22Z)
- Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation [79.72299298976525]
We propose to augment a vision-language pre-training model with a textual pre-trained language model (PLM) via vision-language knowledge distillation (VLKD).
Experiments show that the resulting model has strong zero-shot performance on multimodal generation tasks, such as open-ended visual question answering and image captioning.
The original textual language understanding and generation ability of the PLM is maintained after VLKD, which makes our model versatile for both multimodal and unimodal tasks.
arXiv Detail & Related papers (2022-03-12T09:33:37Z)
- KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation [100.79870384880333]
We propose a knowledge-grounded pre-training (KGPT) to generate knowledge-enriched text.
We adopt three settings, namely fully-supervised, zero-shot, and few-shot, to evaluate its effectiveness.
Under zero-shot setting, our model achieves over 30 ROUGE-L on WebNLG while all other baselines fail.
arXiv Detail & Related papers (2020-10-05T19:59:05Z)
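The VLEU entry above describes the metric as a Kullback-Leibler divergence between a marginal distribution over visual text and the model's conditional distribution. The snippet below is only a generic sketch of that kind of computation, with toy distributions and hypothetical names; it is not the paper's actual procedure.

```python
# Generic sketch of a KL-divergence-based generalizability score in the spirit
# of the VLEU summary above. The distributions are toy placeholders; the actual
# VLEU pipeline (prompt sampling, image generation, scoring model) and how the
# divergence maps to the final score are defined in the paper itself.

import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) for two categorical distributions over the same support."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy example: a marginal distribution over a small set of text prompts and a
# conditional distribution inferred from the model's generated images.
marginal_text_dist = np.array([0.25, 0.25, 0.25, 0.25])
conditional_dist_from_images = np.array([0.40, 0.30, 0.20, 0.10])

print(f"KL divergence: {kl_divergence(marginal_text_dist, conditional_dist_from_images):.4f}")
```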