Unified Demonstration Retriever for In-Context Learning
- URL: http://arxiv.org/abs/2305.04320v2
- Date: Tue, 16 May 2023 05:59:25 GMT
- Title: Unified Demonstration Retriever for In-Context Learning
- Authors: Xiaonan Li, Kai Lv, Hang Yan, Tianyang Lin, Wei Zhu, Yuan Ni, Guotong
Xie, Xiaoling Wang and Xipeng Qiu
- Abstract summary: Unified Demonstration Retriever (UDR) is a single model to retrieve demonstrations for a wide range of tasks.
We propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates.
Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines.
- Score: 56.06473069923567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In-context learning is a new learning paradigm where a language model
conditions on a few input-output pairs (demonstrations) and a test input, and
directly outputs the prediction. It has been shown to be highly dependent on the
provided demonstrations, which has spurred research on demonstration
retrieval: given a test input, relevant examples are retrieved from the
training set to serve as informative demonstrations for in-context learning.
While previous works focus on training task-specific retrievers for several
tasks separately, these methods are often hard to transfer and scale across various
tasks, and separately trained retrievers incur substantial parameter storage and
deployment costs. In this paper, we propose the Unified Demonstration Retriever
(UDR), a single model that retrieves demonstrations for a wide range of
tasks. To train UDR, we cast various tasks' training signals into a unified
list-wise ranking formulation using the language model's feedback. Then we propose a
multi-task list-wise ranking training framework, with an iterative mining
strategy to find high-quality candidates, which can help UDR fully incorporate
various tasks' signals. Experiments on 30+ tasks across 13 task families and
multiple data domains show that UDR significantly outperforms baselines.
Further analyses show the effectiveness of each proposed component and UDR's
strong performance across various scenarios, including different LMs (1.3B - 175B),
unseen datasets, and varying numbers of demonstrations.
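As a rough illustration of the core training idea, the sketch below casts per-candidate LM feedback into a list-wise ranking loss for a demonstration retriever. This is a minimal, assumption-laden sketch (toy tensors, a simple KL-based list-wise objective, illustrative names), not the paper's actual implementation:

```python
# Minimal sketch: turn LM feedback over a candidate list into a list-wise
# ranking signal for the retriever. Shapes, names, and the KL-based loss are
# illustrative assumptions, not UDR's actual code.
import torch
import torch.nn.functional as F

def listwise_ranking_loss(retriever_scores: torch.Tensor,
                          lm_feedback: torch.Tensor) -> torch.Tensor:
    """Align the retriever's score distribution over each example's candidate
    demonstrations with the distribution implied by the LM's feedback
    (e.g. target log-likelihood given each candidate).
    Both tensors have shape [batch, num_candidates]."""
    return F.kl_div(
        F.log_softmax(retriever_scores, dim=-1),
        F.softmax(lm_feedback, dim=-1),
        reduction="batchmean",
    )

# Toy usage: 2 training inputs, each with 4 candidate demonstrations that
# would come from a mining step (here just random numbers).
retriever_scores = torch.randn(2, 4, requires_grad=True)  # e.g. bi-encoder dot products
lm_feedback = torch.randn(2, 4)                           # frozen LM's per-candidate feedback
loss = listwise_ranking_loss(retriever_scores, lm_feedback)
loss.backward()  # gradients would update the retriever's encoder
```

In the full framework described in the abstract, the candidate list for each training example would be re-mined with the current retriever between iterations, so the ranking signal keeps covering progressively higher-quality candidates.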
Related papers
- Less is More: High-value Data Selection for Visual Instruction Tuning [127.38740043393527]
We propose a high-value data selection approach, TIVE, to eliminate redundancy within the visual instruction data and reduce the training cost.
Our approach using only about 15% data can achieve comparable average performance to the full-data fine-tuned model across eight benchmarks.
arXiv Detail & Related papers (2024-03-14T16:47:25Z)
- In-context Learning with Retrieved Demonstrations for Language Models: A Survey [23.24271704145876]
Few-shot in-context learners (ICL) are adept at adapting to new tasks with just a few demonstrations in the input context.
Instead of using a fixed set of demonstrations, one recent development is to retrieve demonstrations tailored to each input query.
We discuss and compare different design choices for retrieval models, retrieval training procedures, and inference algorithms.
arXiv Detail & Related papers (2024-01-21T23:34:42Z)
- Meta-training with Demonstration Retrieval for Efficient Few-shot Learning [11.723856248352007]
Large language models show impressive results on few-shot NLP tasks, but they are memory- and computation-intensive.
We propose meta-training with demonstration retrieval.
arXiv Detail & Related papers (2023-06-30T20:16:22Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Dr.ICL: Demonstration-Retrieved In-context Learning [29.142262267850704]
In-context learning (ICL), in which a large language model performs a task from a few demonstrations, has emerged as a strong paradigm for using LLMs.
Recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance.
This work expands the applicability of retrieval-based ICL approaches by demonstrating that even simple word-overlap similarity measures such as BM25 outperform randomly selected demonstrations (a minimal retrieval sketch appears after this list).
arXiv Detail & Related papers (2023-05-23T14:55:25Z)
- ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction [56.790794611002106]
Large language models (LLMs) have demonstrated remarkable results in various natural language processing (NLP) tasks with in-context learning.
We propose a simple but effective in-context learning framework called ICL-D3IE.
Specifically, we extract the most difficult and distinct segments from hard training documents as hard demonstrations.
arXiv Detail & Related papers (2023-03-09T06:24:50Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as a strong foundation backbone for a wide range of tasks but also be used as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Learning To Retrieve Prompts for In-Context Learning [33.176481861880724]
We propose an efficient method for retrieving prompts for in-context learning using annotated data and an LM.
We evaluate our approach on three sequence-to-sequence tasks where language utterances are mapped to meaning representations.
arXiv Detail & Related papers (2021-12-16T05:17:56Z)
- DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning [83.48587570246231]
Visual similarity plays an important role in many computer vision applications.
Deep metric learning (DML) is a powerful framework for learning such similarities.
We propose and study multiple complementary learning tasks, targeting conceptually different data relationships.
We learn a single model to aggregate their training signals, resulting in strong generalization and state-of-the-art performance.
arXiv Detail & Related papers (2020-04-28T12:26:50Z)
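As a concrete companion to the Dr.ICL entry above, here is a minimal sketch of BM25-based demonstration retrieval for in-context learning; the demonstration pool, query, and prompt format are toy assumptions rather than details from any of the listed papers:

```python
# Toy sketch of word-overlap (BM25) demonstration retrieval for ICL.
# Requires: pip install rank-bm25
from rank_bm25 import BM25Okapi

# A tiny demonstration pool of (input, label) pairs.
pool = [
    ("the movie was a delight", "positive"),
    ("terrible pacing and flat acting", "negative"),
    ("an instant classic", "positive"),
]
bm25 = BM25Okapi([inp.split() for inp, _ in pool])

# Retrieve the top-2 demonstrations most similar to the test input.
query = "flat characters and terrible writing"
scores = bm25.get_scores(query.split())
top = sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)[:2]

# Assemble the few-shot prompt: retrieved demonstrations, then the test input.
prompt = "\n".join(f"Input: {pool[i][0]}\nLabel: {pool[i][1]}" for i in top)
prompt += f"\nInput: {query}\nLabel:"
print(prompt)
```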