Visual In-Context Learning for Large Vision-Language Models
- URL: http://arxiv.org/abs/2402.11574v1
- Date: Sun, 18 Feb 2024 12:43:38 GMT
- Title: Visual In-Context Learning for Large Vision-Language Models
- Authors: Yucheng Zhou, Xiang Li, Qianning Wang, Jianbing Shen
- Abstract summary: In Large Visual Language Models (LVLMs), the efficacy of In-Context Learning (ICL) remains limited by challenges in cross-modal interactions and representation disparities.
We introduce a novel Visual In-Context Learning (VICL) method comprising Visual Demonstration Retrieval, Intent-Oriented Image Summarization, and Intent-Oriented Demonstration Composition.
Our approach retrieves images via a "Retrieval & Rerank" paradigm, summarises images with task intent and task-specific visual parsing, and composes language-based demonstrations.
- Score: 62.5507897575317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Large Visual Language Models (LVLMs), the efficacy of In-Context Learning
(ICL) remains limited by challenges in cross-modal interactions and
representation disparities. To overcome these challenges, we introduce a novel
Visual In-Context Learning (VICL) method comprising Visual Demonstration
Retrieval, Intent-Oriented Image Summarization, and Intent-Oriented
Demonstration Composition. Our approach retrieves images via a "Retrieval &
Rerank" paradigm, summarises images with task intent and task-specific visual
parsing, and composes language-based demonstrations that reduce the token count
and alleviate the cross-modal interaction problem. Experimental evaluations on
five visual reasoning datasets demonstrate the effectiveness of our method.
Moreover, our extensive experiments leverage information flow analysis to
elucidate the effectiveness of our method and investigate the impact of the
length and position of demonstrations for LVLMs. The use of in-context unlearning
further shows promise in resetting specific model knowledge without retraining.
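The abstract above describes a three-stage flow: retrieve and rerank visual demonstrations, summarise them with respect to the task intent, and compose language-only demonstrations for the LVLM prompt. The following is a minimal sketch of that flow, not the authors' released code; `encode_image`, `summarize_fn`, and `rerank_fn` are hypothetical callables (e.g. a CLIP-style encoder and an LVLM-based captioner) assumed for illustration.

```python
# Hedged sketch of a VICL-style pipeline; helper callables are placeholders.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def retrieve_and_rerank(query_emb, candidates, top_k=20, final_k=4, rerank_fn=None):
    """Visual Demonstration Retrieval: coarse retrieval by embedding similarity,
    optionally followed by a finer reranking pass."""
    shortlist = sorted(candidates, key=lambda c: cosine(query_emb, c["emb"]), reverse=True)[:top_k]
    if rerank_fn is not None:  # e.g. a cross-encoder scoring each (query, candidate) pair
        shortlist = sorted(shortlist, key=rerank_fn, reverse=True)
    return shortlist[:final_k]


def compose_demonstrations(demos, task_intent, summarize_fn):
    """Intent-Oriented Image Summarization + Demonstration Composition:
    each retrieved image becomes a short text summary, so the demonstrations
    are language-only and spend no extra image tokens."""
    blocks = []
    for d in demos:
        summary = summarize_fn(d["image"], task_intent)
        blocks.append(f"Image description: {summary}\nQuestion: {d['question']}\nAnswer: {d['answer']}")
    return "\n\n".join(blocks)


def vicl_prompt(query_image, query_question, task_intent, candidates,
                encode_image, summarize_fn, rerank_fn=None):
    """Assemble the final prompt: text demonstrations followed by the actual query."""
    demos = retrieve_and_rerank(encode_image(query_image), candidates, rerank_fn=rerank_fn)
    demo_text = compose_demonstrations(demos, task_intent, summarize_fn)
    return f"{demo_text}\n\nQuestion: {query_question}\nAnswer:"
```

The resulting prompt string would then be passed to the LVLM together with the query image; only the query contributes image tokens, which is the point of composing language-based demonstrations.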
Related papers
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models [81.71651422951074]
Chain-of-Spot (CoS) is a novel approach that enhances feature extraction by focusing on key regions of interest.
This technique allows LVLMs to access more detailed visual information without altering the original image resolution.
Our empirical findings demonstrate a significant improvement in LVLMs' ability to understand and reason about visual content.
arXiv Detail & Related papers (2024-03-19T17:59:52Z)
- Improving In-Context Learning in Diffusion Models with Visual Context-Modulated Prompts [83.03471704115786]
We introduce improved Prompt Diffusion (iPromptDiff) in this study.
iPromptDiff integrates an end-to-end trained vision encoder that converts visual context into an embedding vector.
We show that a diffusion-based vision foundation model, when equipped with this visual context-modulated text guidance and a standard ControlNet structure, exhibits versatility and robustness across a variety of training tasks.
arXiv Detail & Related papers (2023-12-03T14:15:52Z)
- Understanding and Improving In-Context Learning on Vision-language Models [42.7212469140844]
In-context learning (ICL) on large language models (LLMs) has received great attention, and this technique can be applied to vision-language models (VLMs).
This study investigates the significance of both visual and language information.
We propose a simple yet effective approach, termed Mixed Modality In-Context Example Selection (MMICES); see the sketch after this list.
arXiv Detail & Related papers (2023-11-29T19:08:11Z)
- Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning [22.464424641734652]
Cross-modal alignment is essential for vision-language pre-training models.
We propose a novel Semantic Completion Learning task to facilitate global-to-local alignment.
We also present a flexible vision encoder, which enables our model to perform image-text and video-text multimodal tasks simultaneously.
arXiv Detail & Related papers (2022-11-24T06:39:16Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Learning Task Informed Abstractions [10.920599910769276]
We propose learning Task Informed Abstractions (TIA) that explicitly separates reward-correlated visual features from distractors.
TIA leads to significant performance gains over state-of-the-art methods on many visual control tasks.
arXiv Detail & Related papers (2021-06-29T17:56:11Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire future research on self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- Object Relational Graph with Teacher-Recommended Learning for Video Captioning [92.48299156867664]
We propose a complete video captioning system including both a novel model and an effective training strategy.
Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation.
Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of the successful external language model (ELM) to integrate the abundant linguistic knowledge into the caption model.
arXiv Detail & Related papers (2020-02-26T15:34:52Z)
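As referenced in the MMICES entry above, here is a minimal sketch of mixed-modality in-context example selection: filter the candidate pool by visual similarity, then rank the survivors by language similarity. The field and function names are illustrative assumptions, not the paper's implementation, and the embeddings are assumed to be precomputed.

```python
# Illustrative two-stage, mixed-modality selection of in-context examples.
import numpy as np


def _cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def select_examples(query_img_emb, query_txt_emb, pool, visual_k=32, final_k=4):
    # Stage 1: keep the visual_k candidates whose images are closest to the query image.
    visually_close = sorted(pool, key=lambda ex: _cos(query_img_emb, ex["img_emb"]),
                            reverse=True)[:visual_k]
    # Stage 2: among those, pick the final_k with the most similar text (e.g. the question).
    return sorted(visually_close, key=lambda ex: _cos(query_txt_emb, ex["txt_emb"]),
                  reverse=True)[:final_k]
```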