Exploring Effective Factors for Improving Visual In-Context Learning
- URL: http://arxiv.org/abs/2304.04748v1
- Date: Mon, 10 Apr 2023 17:59:04 GMT
- Title: Exploring Effective Factors for Improving Visual In-Context Learning
- Authors: Yanpeng Sun, Qiang Chen, Jian Wang, Jingdong Wang, Zechao Li
- Abstract summary: In-Context Learning (ICL) aims to understand a new task from a few demonstrations (a.k.a. a prompt) and to predict on new inputs without tuning the model.
This paper shows that prompt selection and prompt fusion are two major factors with a direct impact on the inference performance of visual in-context learning.
We propose prompt-SelF, a simple framework for visual in-context learning.
- Score: 56.14208975380607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In-Context Learning (ICL) aims to understand a new task from a few
demonstrations (a.k.a. a prompt) and to predict on new inputs without tuning the model.
While it has been widely studied in NLP, it is still a relatively new area of
research in computer vision. To reveal the factors influencing the performance
of visual in-context learning, this paper shows that prompt selection and
prompt fusion are two major factors that have a direct impact on the inference
performance of visual in-context learning. Prompt selection is the process of
identifying the most appropriate prompt or example to help the model understand
new tasks. This is important because providing the model with relevant prompts
can help it learn more effectively and efficiently. Prompt fusion involves
combining knowledge from different positions within the large-scale visual
model. By doing this, the model can leverage the diverse knowledge stored in
different parts of the model to improve its performance on new tasks. Based on
these findings, we propose prompt-SelF, a simple framework for visual in-context
learning. Specifically, we first use the pixel-level retrieval method to select
a suitable prompt, then apply different prompt fusion methods to activate the
knowledge stored in the large-scale model, and finally ensemble the predictions
obtained under the different fusion methods into the final prediction. We conduct
extensive experiments on single-object
segmentation and detection tasks to demonstrate the effectiveness of
prompt-SelF. Remarkably, prompt-SelF outperforms OSLSM-based meta-learning in
1-shot segmentation for the first time, which indicates the great potential of
visual in-context learning. The source code and models will be available at
https://github.com/syp2ysy/prompt-SelF.
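The abstract outlines a three-step pipeline: retrieve a prompt by pixel-level similarity, run the large-scale model under several prompt fusion variants, and ensemble the resulting predictions. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation; the feature extractor, the `run_in_context_model` call, and the set of fusion variants are assumed stand-ins for whatever the released code actually uses.

```python
import numpy as np

def select_prompt(query_feat, candidate_feats):
    """Prompt selection (sketch): pick the support example whose dense
    features best match the query's, scored by mean per-pixel cosine
    similarity. Features are assumed to be (H, W, C) arrays."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    best_idx, best_score = -1, -np.inf
    for idx, cand in enumerate(candidate_feats):
        c = cand / (np.linalg.norm(cand, axis=-1, keepdims=True) + 1e-8)
        score = float((q * c).sum(axis=-1).mean())  # mean per-pixel cosine similarity
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

def ensemble_predictions(pred_masks, threshold=0.5):
    """Prompt-fusion ensembling (sketch): average the soft masks predicted
    under different fusion variants and threshold the result."""
    stacked = np.stack(pred_masks, axis=0)  # (n_variants, H, W), values in [0, 1]
    return (stacked.mean(axis=0) > threshold).astype(np.uint8)

# Hypothetical usage, assuming run_in_context_model(prompt, query, variant)
# returns a soft mask for one fusion variant:
# idx = select_prompt(query_feat, support_feats)
# masks = [run_in_context_model(supports[idx], query, v) for v in fusion_variants]
# final_mask = ensemble_predictions(masks)
```

Simple mask averaging is only one way to combine the variants; the paper's actual fusion and ensembling strategy may differ.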
Related papers
- Adapting Vision-Language Models to Open Classes via Test-Time Prompt Tuning [50.26965628047682]
Adapting pre-trained models to open classes is a challenging problem in machine learning.
In this paper, we consider combining the advantages of both and propose a test-time prompt tuning approach.
Our proposed method outperforms all comparison methods on average considering both base and new classes.
arXiv Detail & Related papers (2024-08-29T12:34:01Z)
- Spatio-Temporal Context Prompting for Zero-Shot Action Detection [13.22912547389941]
We propose a method which can effectively leverage the rich knowledge of visual-language models to perform Person-Context Interaction.
To address the challenge of recognizing distinct actions by multiple people at the same timestamp, we design the Interest Token Spotting mechanism.
Our method achieves superior results compared to previous approaches and can be further extended to multi-action videos.
arXiv Detail & Related papers (2024-08-28T17:59:05Z)
- Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning [0.0]
We leverage large language models (LLMs) to generate coherent summaries for news articles from the XSum dataset.
We find that increasing the number of shots in prompts and utilizing simple templates generally improve the quality of summaries.
We also find that fine-tuning the first layer of LLMs produces better outcomes as compared to fine-tuning other layers or utilizing LoRA.
arXiv Detail & Related papers (2024-05-04T16:48:05Z)
- Harnessing Diffusion Models for Visual Perception with Meta Prompts [68.78938846041767]
We propose a simple yet effective scheme to harness a diffusion model for visual perception tasks.
We introduce learnable embeddings (meta prompts) to the pre-trained diffusion models to extract proper features for perception.
Our approach sets new performance records in depth estimation on NYU Depth V2 and KITTI, and in semantic segmentation on CityScapes.
arXiv Detail & Related papers (2023-12-22T14:40:55Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Learning without Forgetting for Vision-Language Models [65.49600786387106]
Class-Incremental Learning (CIL) or continual learning is a desired capability in the real world.
Recent advances in Vision-Language Models (VLM) have shown promising capabilities in learning generalizable representations.
We propose PROjectiOn Fusion (PROOF) that enables VLMs to learn without forgetting.
arXiv Detail & Related papers (2023-05-30T17:59:32Z)
- Distribution Alignment: A Unified Framework for Long-tail Visual Recognition [52.36728157779307]
We propose a unified distribution alignment strategy for long-tail visual recognition.
We then introduce a generalized re-weight method in the two-stage learning to balance the class prior.
Our approach achieves the state-of-the-art results across all four recognition tasks with a simple and unified framework.
arXiv Detail & Related papers (2021-03-30T14:09:53Z)