Improving Language Understanding from Screenshots
- URL: http://arxiv.org/abs/2402.14073v1
- Date: Wed, 21 Feb 2024 19:01:03 GMT
- Title: Improving Language Understanding from Screenshots
- Authors: Tianyu Gao, Zirui Wang, Adithya Bhaskar, Danqi Chen
- Abstract summary: An emerging family of language models (LMs) can process both text and images within a single visual view.
Existing screenshot LMs lag behind text-only models on language understanding tasks.
We propose a novel Patch-and-Text Prediction objective, which masks and recovers both image patches of screenshots and text within screenshots.
- Score: 56.40401271149811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An emerging family of language models (LMs), capable of processing both text
and images within a single visual view, has the promise to unlock complex tasks
such as chart understanding and UI navigation. We refer to these models as
screenshot language models. Despite their appeal, existing screenshot LMs
substantially lag behind text-only models on language understanding tasks. To
close this gap, we adopt a simplified setting where the model inputs are
plain-text-rendered screenshots, and we focus on improving the text ability of
screenshot LMs. We propose a novel Patch-and-Text Prediction (PTP) objective,
which masks and recovers both image patches of screenshots and text within
screenshots. We also conduct extensive ablation studies on masking rates and
patch sizes, as well as designs for improving training stability. Our
pre-trained model, while solely taking visual inputs, achieves comparable
performance with BERT on 6 out of 8 GLUE tasks (within 2%) and improves up to
8% over prior work. Additionally, we extend PTP to train autoregressive
screenshot LMs and demonstrate its effectiveness--our models can significantly
reduce perplexity by utilizing the screenshot context. Together, we hope our
findings can inspire future research on developing powerful screenshot LMs and
extending their reach to broader applications.
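To make the PTP objective concrete, here is a minimal PyTorch sketch of a dual masking loss in that spirit: a fraction of screenshot patches (and of the text rendered in them) is masked, a pixel-regression head recovers the masked patches, and a token-classification head recovers the masked text. The module sizes, masking rates, and the one-token-per-patch alignment are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch of a PTP-style objective: mask screenshot patches and the
# text rendered in them, then recover both. Module sizes, masking rates, and
# the one-token-per-patch alignment are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PTPSketch(nn.Module):
    def __init__(self, patch_dim=16 * 16 * 3, d_model=256, vocab_size=30522,
                 patch_mask_rate=0.25, text_mask_rate=0.25):
        super().__init__()
        self.patch_mask_rate = patch_mask_rate
        self.text_mask_rate = text_mask_rate
        self.patch_embed = nn.Linear(patch_dim, d_model)      # embed flattened screenshot patches
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned embedding for masked patches
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.pixel_head = nn.Linear(d_model, patch_dim)       # patch prediction (pixel regression)
        self.text_head = nn.Linear(d_model, vocab_size)       # text prediction (token classification)

    def forward(self, patches, text_ids):
        # patches:  (B, P, patch_dim) flattened screenshot patches
        # text_ids: (B, P) id of the text token rendered in each patch (simplifying assumption)
        B, P, _ = patches.shape
        patch_mask = torch.rand(B, P, device=patches.device) < self.patch_mask_rate
        text_mask = torch.rand(B, P, device=patches.device) < self.text_mask_rate

        # Mask the union of both position sets in the input so the model
        # cannot simply read the answer off the unmasked pixels.
        x = self.patch_embed(patches)
        masked = (patch_mask | text_mask).unsqueeze(-1)
        h = self.encoder(torch.where(masked, self.mask_token.expand_as(x), x))

        pixel_loss = F.mse_loss(self.pixel_head(h)[patch_mask], patches[patch_mask])
        text_loss = F.cross_entropy(self.text_head(h)[text_mask], text_ids[text_mask])
        return pixel_loss + text_loss
```

The paper ablates masking rates and patch sizes; the sketch fixes them only for brevity.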
Related papers
- Attention Prompting on Image for Large Vision-Language Models [63.794304207664176]
We propose a new prompting technique named Attention Prompting on Image.
We generate an attention heatmap for the input image, conditioned on the text query, using an auxiliary model such as CLIP.
Experiments on various vision-language benchmarks verify the effectiveness of our technique.
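As a rough illustration of the core idea, the sketch below scores each CLIP image patch against the text query to obtain a coarse heatmap. The checkpoint, the dummy image, and the projection of patch tokens through CLIP's visual projection are assumptions, not the paper's exact recipe.

```python
# Rough sketch: a text-query-dependent heatmap from CLIP patch features.
# The checkpoint, dummy image, and the projection of patch tokens through
# visual_projection are assumptions; the paper's construction may differ.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))          # replace with a real image
query = "a red stop sign"
inputs = processor(text=[query], images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    patches = model.vision_model(pixel_values=inputs["pixel_values"]).last_hidden_state[:, 1:]
    patch_emb = model.visual_projection(model.vision_model.post_layernorm(patches))

# Cosine similarity between the query and every patch -> 7x7 grid for ViT-B/32 at 224px.
heatmap = F.normalize(patch_emb, dim=-1) @ F.normalize(text_emb, dim=-1).T
heatmap = heatmap.squeeze(-1).reshape(7, 7)
print(heatmap)
```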
arXiv Detail & Related papers (2024-09-25T17:59:13Z)
- Enhancing Vision-Language Pre-training with Rich Supervisions [60.269564094889446]
We propose Strongly Supervised pre-training with ScreenShots (S4).
S4 is a novel pre-training paradigm for Vision-Language Models using data from large-scale web screenshot rendering.
We demonstrate that, compared to current screenshot pre-training objectives, our innovative pre-training method significantly enhances the performance of image-to-text models on nine varied and popular downstream tasks.
arXiv Detail & Related papers (2024-03-05T22:14:58Z)
- Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment [81.73717488887938]
Language-Quantized AutoEncoder (LQAE) learns to align text-image data in an unsupervised manner by leveraging pretrained language models.
LQAE learns to represent similar images with similar clusters of text tokens, thereby aligning these two modalities without the use of aligned text-image pairs.
This enables few-shot image classification with large language models (e.g., GPT-3) as well as linear classification of images based on BERT text features.
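The quantization step at the heart of this idea can be sketched in a few lines: image features are snapped to their nearest neighbours in a frozen BERT token-embedding table, so the image becomes a string of (pseudo-)text tokens that a language model can consume. The random patch encoder below is a stand-in; the real LQAE trains an encoder-decoder with a straight-through estimator and a reconstruction loss.

```python
# Minimal sketch of LQAE-style quantization: image patch features are snapped
# to their nearest BERT token embeddings, so an image becomes a sequence of
# (pseudo-)text tokens. The patch encoder here is an untrained stand-in.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
codebook = bert.embeddings.word_embeddings.weight.detach()   # (vocab_size, 768), frozen

patch_encoder = nn.Linear(16 * 16 * 3, 768)                  # stand-in for the trained encoder

patches = torch.randn(1, 49, 16 * 16 * 3)                    # dummy flattened image patches
features = patch_encoder(patches)                            # (1, 49, 768)

# Nearest-neighbour quantization against the frozen token-embedding codebook.
dists = torch.cdist(features.reshape(-1, 768), codebook)     # (49, vocab_size)
token_ids = dists.argmin(dim=-1).reshape(1, 49)

print(tokenizer.decode(token_ids[0]))   # the image rendered as a string of BERT tokens
```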
arXiv Detail & Related papers (2023-02-02T06:38:44Z)
- Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding [58.70423899829642]
We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding.
We show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains.
arXiv Detail & Related papers (2022-10-07T06:42:06Z)
- WAVPROMPT: Towards Few-Shot Spoken Language Understanding with Frozen Language Models [57.557319372969495]
Large-scale auto-regressive language models pretrained on massive text have demonstrated their impressive ability to perform new natural language tasks.
Recent studies further show that such a few-shot learning ability can be extended to the text-image setting by training an encoder to encode the images into embeddings.
We propose a novel speech understanding framework, WavPrompt, where we finetune a wav2vec model to generate a sequence of audio embeddings understood by the language model.
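A minimal sketch of that recipe, assuming a wav2vec 2.0 base encoder, a GPT-2 LM, and a single linear projection (the paper's architecture and training setup differ in detail): audio embeddings are prepended to the prompt embeddings and the frozen LM continues from there.

```python
# Minimal sketch of the WavPrompt idea: a wav2vec 2.0 encoder turns audio into
# embeddings that are prepended to a frozen LM's input. The projection layer
# and prompt text are illustrative; the paper's setup differs in detail.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, GPT2LMHeadModel, GPT2Tokenizer

speech_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
proj = nn.Linear(768, lm.config.n_embd)          # maps audio features into the LM embedding space

waveform = torch.randn(1, 16000)                 # 1 second of dummy 16 kHz audio
with torch.no_grad():
    audio_emb = proj(speech_encoder(waveform).last_hidden_state)   # (1, T, n_embd)
    prompt_ids = tokenizer("Question: what was said? Answer:", return_tensors="pt").input_ids
    prompt_emb = lm.transformer.wte(prompt_ids)                    # (1, L, n_embd)

    # The frozen LM conditions on [audio embeddings ; text prompt] and continues.
    out = lm(inputs_embeds=torch.cat([audio_emb, prompt_emb], dim=1))

next_token = out.logits[:, -1].argmax(-1)
print(tokenizer.decode(next_token))
```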
arXiv Detail & Related papers (2022-03-29T19:08:55Z)
- Visual Grounding Strategies for Text-Only Natural Language Processing [1.2183405753834562]
Multimodal extensions of BERT allow joint modeling of texts and images, leading to state-of-the-art results on multimodal tasks such as Visual Question Answering.
Here, we leverage multimodal modeling for purely textual tasks with the expectation that the multimodal pretraining provides a grounding that can improve text processing accuracy.
The first type of strategy, referred to as transferred grounding, consists of applying multimodal models to text-only tasks using a placeholder in place of the image input (a minimal sketch of this setup appears after this entry).
The second, which we call associative grounding, harnesses image retrieval to match texts with related images during both
arXiv Detail & Related papers (2021-03-25T16:03:00Z)
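Below is a toy sketch of the transferred-grounding setup described in the last entry: a multimodal encoder (VisualBERT here, as an example choice, not necessarily the paper's model) is run on a text-only input with an all-zeros placeholder standing in for the image features.

```python
# Toy sketch of "transferred grounding": run a multimodal encoder on a
# text-only task by feeding a placeholder (here all-zeros) in place of real
# image features. VisualBERT is an example multimodal model; the paper's
# choice of models and placeholder may differ.
import torch
from transformers import BertTokenizer, VisualBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")

inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")

# Placeholder "image": a single all-zero region feature instead of detector output.
visual_embeds = torch.zeros(1, 1, model.config.visual_embedding_dim)
visual_attention_mask = torch.ones(1, 1)

outputs = model(**inputs,
                visual_embeds=visual_embeds,
                visual_attention_mask=visual_attention_mask)
sentence_repr = outputs.last_hidden_state[:, 0]   # [CLS]-style vector for a text-only classifier
print(sentence_repr.shape)
```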