Finding Visual Task Vectors
- URL: http://arxiv.org/abs/2404.05729v2
- Date: Mon, 07 Oct 2024 17:10:52 GMT
- Title: Finding Visual Task Vectors
- Authors: Alberto Hojel, Yutong Bai, Trevor Darrell, Amir Globerson, Amir Bar
- Abstract summary: Visual Prompting is a technique for teaching models to perform a visual task via in-context examples, without any additional training.
We analyze the activations of MAE-VQGAN, a recent Visual Prompting model, and find task vectors, activations that encode task-specific information.
- Score: 74.67336516908776
- Abstract: Visual Prompting is a technique for teaching models to perform a visual task via in-context examples, without any additional training. In this work, we analyze the activations of MAE-VQGAN, a recent Visual Prompting model, and find task vectors, activations that encode task-specific information. Equipped with this insight, we demonstrate that it is possible to identify the task vectors and use them to guide the network towards performing different tasks without providing any input-output examples. To find task vectors, we compute the average intermediate activations per task and use the REINFORCE algorithm to search for the subset of task vectors. The resulting task vectors guide the model towards performing a task better than the original model without the need for input-output examples.
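The abstract's recipe — average the intermediate activations per task, then use REINFORCE to search for the subset of positions to patch — can be sketched on synthetic data. Everything below is a toy stand-in, not the paper's MAE-VQGAN pipeline: the activations are random Gaussians and the reward is a fixed hypothetical target subset rather than downstream task performance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Visual Prompting model's internals: one candidate task
# vector per (layer, head), each a d-dimensional mean activation.
n_layers, n_heads, d = 4, 4, 8

def mean_task_activations(task_seed, n_prompts=16):
    """Average intermediate activations over example prompts for one task."""
    r = np.random.default_rng(task_seed)
    acts = r.normal(size=(n_prompts, n_layers, n_heads, d))
    return acts.mean(axis=0)  # (n_layers, n_heads, d) candidate task vectors

task_vectors = mean_task_activations(task_seed=42)

# Hypothetical reward: in the paper this would be task performance after
# patching the selected activations into the model; here a fixed target
# subset stands in so the search has something to find.
target = np.zeros(n_layers * n_heads)
target[::3] = 1.0

def reward(mask):
    return -np.abs(mask.ravel() - target).sum()

# REINFORCE over independent Bernoulli gates: which positions to patch.
logits = np.zeros((n_layers, n_heads))
lr, baseline = 0.5, 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-logits))
    mask = (rng.random(p.shape) < p).astype(float)  # sample a patching mask
    r = reward(mask)
    baseline = 0.9 * baseline + 0.1 * r             # variance-reducing baseline
    logits += lr * (r - baseline) * (mask - p)      # grad of log p(mask) is (mask - p)

best_mask = (logits > 0).astype(float)
patched = task_vectors * best_mask[..., None]       # keep only selected task vectors
```

At inference time the selected activations would be patched into the model in place of its own, steering it toward the task without input-output examples.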
Related papers
- Learning Task Representations from In-Context Learning [73.72066284711462]
Large language models (LLMs) have demonstrated remarkable proficiency in in-context learning.
We introduce an automated formulation for encoding task information in ICL prompts as a function of attention heads.
We show that our method's effectiveness stems from aligning the distribution of the last hidden state with that of an optimally performing in-context-learned model.
arXiv Detail & Related papers (2025-02-08T00:16:44Z)
- Task Vectors in In-Context Learning: Emergence, Formation, and Benefit [17.72043522825441]
We investigate the formation of task vectors in a controlled setting using models trained from scratch on synthetic datasets.
Our findings confirm that task vectors naturally emerge under certain conditions, but the tasks may be relatively weakly and/or non-locally encoded within the model.
To promote strong task vectors encoded at a prescribed location within the model, we propose an auxiliary training mechanism based on a task vector prompting loss.
arXiv Detail & Related papers (2025-01-16T01:54:23Z)
- Multi-Task Model Merging via Adaptive Weight Disentanglement [69.7292615212444]
We introduce an Adaptive Weight Disentanglement method for model merging.
We successfully extract redundant vectors, and after their subtraction, the task vectors retain robust performance.
arXiv Detail & Related papers (2024-11-27T20:08:55Z)
- Task Vectors are Cross-Modal [58.19152818504624]
We investigate the internal representations of vision-and-language models (VLMs).
We consider tasks specified through examples or instructions, using either text or image inputs.
We find that conceptually similar tasks are mapped to similar task vector representations, regardless of how they are specified.
arXiv Detail & Related papers (2024-10-29T17:59:45Z)
- Task Prompt Vectors: Effective Initialization through Multi-Task Soft-Prompt Transfer [0.6053347262128919]
We introduce Task Prompt Vectors, created by taking the element-wise difference between the weights of tuned soft-prompts and their random initialization.
We show that task prompt vectors can be used in low-resource settings to effectively initialize prompt tuning on similar tasks.
This allows prompt arithmetics with the pre-trained vectors from different tasks.
arXiv Detail & Related papers (2024-08-02T09:00:03Z)
- Editing Models with Task Arithmetic [69.97273155842966]
Changing how pre-trained models behave is a common practice when developing machine learning systems.
We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task.
We show that these task vectors can be modified and combined together through arithmetic operations such as negation and addition.
arXiv Detail & Related papers (2022-12-08T05:50:53Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study of source task sampling by leveraging techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
- Analysis and Prediction of NLP Models Via Task Embeddings [25.311690222754454]
We propose MetaEval, a collection of 101 NLP tasks.
We fit a single transformer to all MetaEval tasks jointly while conditioning it on learned embeddings.
The resulting task embeddings enable a novel analysis of the space of tasks.
arXiv Detail & Related papers (2021-12-10T16:23:24Z)
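For contrast with activation-space task vectors, the weight-space construction summarized under "Editing Models with Task Arithmetic" (task vector = fine-tuned weights minus pre-trained weights, combined by addition and negation) reduces to a few lines. The weights below are synthetic stand-ins, not real fine-tuned checkpoints:

```python
import numpy as np

rng = np.random.default_rng(1)
pretrained = {"w": rng.normal(size=(3, 3))}
finetuned_a = {"w": pretrained["w"] + 0.1}  # pretend fine-tune on task A
finetuned_b = {"w": pretrained["w"] - 0.2}  # pretend fine-tune on task B

def task_vector(finetuned, pre):
    """Task vector: fine-tuned weights minus pre-trained weights."""
    return {k: finetuned[k] - pre[k] for k in pre}

tau_a = task_vector(finetuned_a, pretrained)
tau_b = task_vector(finetuned_b, pretrained)

def apply(pre, tau, scale=1.0):
    """Add a (scaled) task vector to the pre-trained weights."""
    return {k: pre[k] + scale * tau[k] for k in pre}

# Arithmetic: add task A, negate task B, applied to the pre-trained model.
edited = apply(apply(pretrained, tau_a, 1.0), tau_b, -1.0)
```

Negation (scale of -1.0) is the forgetting operation from that line of work; addition composes tasks.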
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.