Learning the Visualness of Text Using Large Vision-Language Models
- URL: http://arxiv.org/abs/2305.10434v2
- Date: Sun, 22 Oct 2023 19:06:01 GMT
- Title: Learning the Visualness of Text Using Large Vision-Language Models
- Authors: Gaurav Verma, Ryan A. Rossi, Christopher Tensmeyer, Jiuxiang Gu, Ani
Nenkova
- Abstract summary: Visual text evokes an image in a person's mind, while non-visual text fails to do so.
A method to automatically detect visualness in text will enable text-to-image retrieval and generation models to augment text with relevant images.
We curate a dataset of 3,620 English sentences and their visualness scores provided by multiple human annotators.
- Score: 42.75864384249245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual text evokes an image in a person's mind, while non-visual text fails
to do so. A method to automatically detect visualness in text will enable
text-to-image retrieval and generation models to augment text with relevant
images. This is particularly challenging with long-form text as text-to-image
generation and retrieval models are often triggered for text that is designed
to be explicitly visual in nature, whereas long-form text could contain many
non-visual sentences. To this end, we curate a dataset of 3,620 English
sentences and their visualness scores provided by multiple human annotators. We
also propose a fine-tuning strategy that adapts large vision-language models
like CLIP by modifying the model's contrastive learning objective to map text
identified as non-visual to a common NULL image while matching visual text to
their corresponding images in the document. We evaluate the proposed approach
on its ability to (i) classify visual and non-visual text accurately, and (ii)
attend over words that are identified as visual in psycholinguistic studies.
Empirical evaluation indicates that our approach performs better than several
heuristics and baseline models for the proposed task. Furthermore, to highlight
the importance of modeling the visualness of text, we conduct qualitative
analyses of text-to-image generation systems like DALL-E. Project webpage:
https://gaurav22verma.github.io/text-visualness/
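
The fine-tuning strategy described in the abstract can be made concrete with a short sketch. The PyTorch snippet below shows one plausible way to adapt a CLIP-style objective so that non-visual text is pulled toward a shared NULL image embedding while visual text is contrasted against its paired document image; the learnable NULL vector, the split into a contrastive term and a cosine-alignment term, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


class NullAwareContrastiveLoss(torch.nn.Module):
    """Minimal sketch of a NULL-image-aware contrastive objective (assumed form)."""

    def __init__(self, embed_dim: int = 512, temperature: float = 0.07):
        super().__init__()
        # Shared, learnable "NULL image" target for non-visual sentences (assumption).
        self.null_embedding = torch.nn.Parameter(torch.randn(embed_dim))
        self.temperature = temperature

    def forward(self, text_emb, image_emb, is_visual):
        # text_emb:  (B, D) CLIP text embeddings
        # image_emb: (B, D) CLIP image embeddings of the paired document images
        # is_visual: (B,) bool mask, True if the sentence is annotated as visual
        text_emb = F.normalize(text_emb, dim=-1)
        image_emb = F.normalize(image_emb, dim=-1)
        null = F.normalize(self.null_embedding, dim=-1)
        vis = is_visual.bool()

        # (1) Symmetric CLIP-style contrastive loss for visual text vs. its image.
        loss_vis = text_emb.new_zeros(())
        if vis.any():
            logits = text_emb[vis] @ image_emb[vis].t() / self.temperature
            labels = torch.arange(logits.size(0), device=logits.device)
            loss_vis = 0.5 * (F.cross_entropy(logits, labels)
                              + F.cross_entropy(logits.t(), labels))

        # (2) Pull non-visual text toward the common NULL embedding (cosine alignment).
        loss_null = text_emb.new_zeros(())
        if (~vis).any():
            loss_null = (1.0 - text_emb[~vis] @ null).mean()

        return loss_vis + loss_null


# Illustrative usage: batch of 4 sentences, 2 annotated as visual.
loss_fn = NullAwareContrastiveLoss(embed_dim=512)
text = torch.randn(4, 512)   # would come from CLIP's text encoder
image = torch.randn(4, 512)  # would come from CLIP's image encoder
loss = loss_fn(text, image, torch.tensor([True, False, True, False]))
```

Separating the NULL-alignment term from the in-batch contrastive term is one simple way to avoid degenerate positives when several non-visual sentences share the same NULL target in a batch; the paper itself folds the NULL image directly into the contrastive objective.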
Related papers
- Enhancing Vision Models for Text-Heavy Content Understanding and Interaction [0.0]
We build a visual chat application integrating CLIP for image encoding and a model from the Massive Text Embedding Benchmark.
The aim of the project is to enhance advanced vision models' capabilities in understanding complex, interconnected visual and textual data.
arXiv Detail & Related papers (2024-05-31T15:17:47Z)
- ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models [92.60282074937305]
We introduce ConTextual, a novel dataset featuring human-crafted instructions that require context-sensitive reasoning for text-rich images.
We conduct experiments to assess the performance of 14 foundation models and establish a human performance baseline.
We observe a significant performance gap of 30.8% between GPT-4V and human performance.
arXiv Detail & Related papers (2024-01-24T09:07:11Z)
- Visually-Augmented Language Modeling [137.36789885105642]
We propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling.
With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling (a hedged sketch of such a fusion step appears after this list).
We evaluate the proposed model on various multimodal commonsense reasoning tasks, which require visual information to excel.
arXiv Detail & Related papers (2022-05-20T13:41:12Z)
- Language Matters: A Weakly Supervised Pre-training Approach for Scene Text Detection and Spotting [69.77701325270047]
This paper presents a weakly supervised pre-training method that can acquire effective scene text representations.
Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features.
Experiments show that our pre-trained model improves the F-score by +2.5% and +4.8% when its weights are transferred to other text detection and spotting networks.
arXiv Detail & Related papers (2022-03-08T08:10:45Z)
- From Two to One: A New Scene Text Recognizer with Visual Language Modeling Network [70.47504933083218]
We propose a Visual Language Modeling Network (VisionLAN), which views the visual and linguistic information as a union.
VisionLAN significantly improves the speed by 39% and adaptively considers the linguistic information to enhance the visual features for accurate recognition.
arXiv Detail & Related papers (2021-08-22T07:56:24Z)
- Probing Contextual Language Models for Common Ground with Visual Representations [76.05769268286038]
We design a probing model that evaluates how effective text-only representations are at distinguishing between matching and non-matching visual representations.
Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories.
Visually grounded language models slightly outperform text-only language models in instance retrieval, but still fall far short of human performance.
arXiv Detail & Related papers (2020-05-01T21:28:28Z)
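
The "visual knowledge fusion layer" mentioned in the Visually-Augmented Language Modeling entry above can be read as a cross-attention step in which text token states attend over embeddings of retrieved images. The sketch below is only one illustrative interpretation under that assumption; the module name, gating mechanism, and dimensions are invented for the example and do not reproduce VaLM's actual architecture.

```python
import torch
import torch.nn as nn


class VisualFusionBlock(nn.Module):
    """Illustrative cross-attention fusion of text states with retrieved image embeddings."""

    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(d_model, 1)  # per-token gate on how much visual context to mix in
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states, image_embs):
        # text_states: (B, T, D) hidden states from the language model
        # image_embs:  (B, K, D) embeddings of K retrieved images per example
        visual_ctx, _ = self.cross_attn(text_states, image_embs, image_embs)
        g = torch.sigmoid(self.gate(text_states))        # (B, T, 1)
        return self.norm(text_states + g * visual_ctx)   # gated residual fusion
```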