GPT4Image: Can Large Pre-trained Models Help Vision Models on Perception Tasks?
- URL: http://arxiv.org/abs/2306.00693v2
- Date: Wed, 7 Jun 2023 13:59:25 GMT
- Title: GPT4Image: Can Large Pre-trained Models Help Vision Models on Perception Tasks?
- Authors: Ning Ding, Yehui Tang, Zhongqian Fu, Chao Xu, Kai Han, Yunhe Wang
- Abstract summary: We present a new learning paradigm in which the knowledge extracted from large pre-trained models is utilized to help models such as CNNs and ViTs learn enhanced representations.
We feed detailed descriptions into a pre-trained text encoder to extract text embeddings that encode the semantic content of the images.
- Score: 51.22096780511165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent upsurge in pre-trained large models (e.g., GPT-4) has swept across the entire deep learning community. Such powerful large language models (LLMs) demonstrate advanced generative ability and multimodal understanding, quickly achieving new state-of-the-art results on a variety of benchmarks. A pre-trained LLM usually plays the role of a universal AI model that can carry out diverse tasks, including context reasoning, article analysis, and image content comprehension. However, given the prohibitively high memory and computational cost of deploying such a large model, conventional models (such as CNNs and ViTs) remain essential for many visual perception tasks. In this paper, we propose to enhance the representation ability of ordinary vision models for perception tasks (e.g., image classification) by taking advantage of large pre-trained models. We present a new learning paradigm in which the knowledge extracted from large pre-trained models is used to help models such as CNNs and ViTs learn enhanced representations and achieve better performance. First, we curate a high-quality description set by prompting a multimodal LLM to generate descriptive text for all training images. Then, we feed these detailed descriptions into a pre-trained text encoder to extract text embeddings that encode the semantic content of the images. During training, these text embeddings serve as extra supervisory signals and are aligned with the image representations learned by the vision models. The alignment process helps the vision models learn better representations and achieve higher accuracy with the assistance of pre-trained LLMs. Extensive experiments verify that the proposed algorithm consistently improves the performance of vision models with heterogeneous architectures.
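The abstract describes a two-stage pipeline: description embeddings are precomputed offline (multimodal LLM generates a caption per training image, a frozen text encoder maps it to a vector), and during training the vision model's image representation is aligned with that vector alongside the usual classification objective. The sketch below illustrates this idea; the cosine-similarity alignment loss, the projection head, and the loss weight are assumptions for illustration only, since the abstract does not specify the exact formulation.

```python
# Minimal sketch of the training objective suggested by the abstract (not the authors' exact method).
# text_emb is assumed to be precomputed offline from LLM-generated image descriptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignedClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, text_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                   # any CNN or ViT producing (B, feat_dim) features
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.proj = nn.Linear(feat_dim, text_dim)  # hypothetical projection into the text-embedding space

    def forward(self, images):
        feats = self.backbone(images)              # (B, feat_dim)
        return self.classifier(feats), self.proj(feats)

def training_step(model, images, labels, text_emb, lambda_align=0.5):
    """images: (B, 3, H, W); labels: (B,); text_emb: (B, text_dim), precomputed per image."""
    logits, img_emb = model(images)
    ce_loss = F.cross_entropy(logits, labels)
    # Alignment term: pull the image representation toward its description embedding.
    align_loss = 1.0 - F.cosine_similarity(
        F.normalize(img_emb, dim=-1), F.normalize(text_emb, dim=-1), dim=-1
    ).mean()
    return ce_loss + lambda_align * align_loss
```

Under this reading, the text embeddings act only as extra supervision: at inference the backbone and classifier suffice, so the alignment branch would add no deployment cost to the vision model.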
Related papers
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension [131.14381425260706]
We introduce Self-Training on Image Comprehension (STIC), which emphasizes a self-training approach specifically for image comprehension.
First, the model self-constructs a preference dataset for image descriptions using unlabeled images.
To further self-improve reasoning on the extracted visual information, we let the model reuse a small portion of existing instruction-tuning data.
arXiv Detail & Related papers (2024-05-30T05:53:49Z)
- HRVDA: High-Resolution Visual Document Assistant [32.51417315241559]
We propose a High-Resolution Visual Document Assistant (HRVDA) to bridge the gap between MLLMs and visual document understanding.
HRVDA employs a content filtering mechanism and an instruction filtering module to filter out content-agnostic and instruction-agnostic visual tokens.
Our model achieves state-of-the-art performance across multiple document understanding datasets.
arXiv Detail & Related papers (2024-04-10T11:10:50Z)
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models [81.71651422951074]
The Chain-of-Spot (CoS) method is a novel approach that enhances feature extraction by focusing on key regions of interest.
This technique allows LVLMs to access more detailed visual information without altering the original image resolution.
Our empirical findings demonstrate a significant improvement in LVLMs' ability to understand and reason about visual content.
arXiv Detail & Related papers (2024-03-19T17:59:52Z)
- Sequential Modeling Enables Scalable Learning for Large Vision Models [120.91839619284431]
We introduce a novel sequential modeling approach which enables learning a Large Vision Model (LVM) without making use of any linguistic data.
We define a common format, "visual sentences", in which we can represent raw images and videos as well as annotated data sources.
arXiv Detail & Related papers (2023-12-01T18:59:57Z)
- iBoot: Image-bootstrapped Self-Supervised Video Representation Learning [45.845595749486215]
Video self-supervised learning (SSL) suffers from added challenges: video datasets are typically not as large as image datasets.
We propose to utilize a strong image-based model, pre-trained with self- or language supervision, in a video representation learning framework.
The proposed algorithm is shown to learn much more efficiently, in fewer epochs and with smaller batches.
arXiv Detail & Related papers (2022-06-16T17:42:48Z)
- Reinforcement Learning with Action-Free Pre-Training from Videos [95.25074614579646]
We introduce a framework that learns representations useful for understanding the dynamics via generative pre-training on videos.
Our framework significantly improves both final performances and sample-efficiency of vision-based reinforcement learning.
arXiv Detail & Related papers (2022-03-25T19:44:09Z)
- Behind the Scene: Revealing the Secrets of Pre-trained Vision-and-Language Models [65.19308052012858]
Recent Transformer-based large-scale pre-trained models have revolutionized vision-and-language (V+L) research.
We present VALUE, a set of meticulously designed probing tasks to decipher the inner workings of multimodal pre-training.
Key observation: pre-trained models exhibit a propensity for attending to text rather than images during inference.
arXiv Detail & Related papers (2020-05-15T01:06:54Z)