Batch-ICL: Effective, Efficient, and Order-Agnostic In-Context Learning
- URL: http://arxiv.org/abs/2401.06469v3
- Date: Wed, 5 Jun 2024 09:13:17 GMT
- Title: Batch-ICL: Effective, Efficient, and Order-Agnostic In-Context Learning
- Authors: Kaiyi Zhang, Ang Lv, Yuhan Chen, Hansen Ha, Tao Xu, Rui Yan
- Abstract summary: Batch-ICL is an effective, efficient, and order-agnostic inference algorithm for in-context learning.
We show that Batch-ICL consistently outperforms most permutations of ICL examples.
We also develop a novel variant of Batch-ICL featuring multiple "epochs" of meta-optimization.
- Score: 27.729189318779603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, by treating in-context learning (ICL) as a meta-optimization process, we explain why LLMs are sensitive to the order of ICL examples. This understanding leads us to the development of Batch-ICL, an effective, efficient, and order-agnostic inference algorithm for ICL. Differing from the standard N-shot learning approach, Batch-ICL employs $N$ separate 1-shot forward computations and aggregates the resulting meta-gradients. These aggregated meta-gradients are then applied to the forward computation of a zero-shot query to generate the final prediction. This batch processing approach renders the LLM agnostic to the order of ICL examples. Through extensive experiments and analysis, we demonstrate that Batch-ICL consistently outperforms most permutations of ICL examples. In some cases, it even exceeds the performance of the best order for standard ICL, all while reducing the computational resources required. Furthermore, we develop a novel variant of Batch-ICL featuring multiple "epochs" of meta-optimization. This variant implicitly explores permutations of ICL examples, further enhancing ICL performance.
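The aggregation step can be sketched as follows. This is a minimal illustration, not the authors' released implementation: it assumes Hugging Face transformers with GPT-2 as a stand-in model, patches a single residual-stream hidden state (rather than the paper's attention-level meta-gradients), uses a hypothetical layer index, and aggregates by simple averaging over the query's final token.

```python
# Minimal sketch of the Batch-ICL idea: N 1-shot forward passes, aggregate their
# contributions, then patch the aggregate into a zero-shot forward pass on the query.
# Assumptions (not from the paper's code): GPT-2, layer 6, residual-stream states.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6  # hypothetical aggregation layer

examples = [("great movie, loved it ->", " positive"),
            ("terrible plot, boring ->", " negative")]
query = "what a wonderful film ->"

def last_token_state(prompt: str, layer: int) -> torch.Tensor:
    """Forward pass; return the block-`layer` hidden state at the last token."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    return out.hidden_states[layer][0, -1]

# 1) Zero-shot forward on the query alone.
h_zero = last_token_state(query, LAYER)

# 2) N separate 1-shot forwards; treat each deviation from the zero-shot state
#    as that example's "meta-gradient".
deltas = [last_token_state(f"{x}{y}\n{query}", LAYER) - h_zero for x, y in examples]

# 3) Aggregate the meta-gradients; averaging makes the result order-agnostic.
h_agg = h_zero + torch.stack(deltas).mean(dim=0)

# 4) Re-run the zero-shot query, patching the aggregated state in at LAYER.
def patch_hook(module, inputs, output):
    hidden = output[0].clone()
    hidden[0, -1] = h_agg  # overwrite the final-token state with the aggregate
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER - 1].register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(tokenizer(query, return_tensors="pt").input_ids).logits
handle.remove()

print(tokenizer.decode([logits[0, -1].argmax().item()]))
```

Averaging the per-example contributions is what removes the dependence on example order; the paper's multi-"epoch" variant repeats this meta-optimization on the aggregated state to implicitly explore permutations of the ICL examples.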
Related papers
- Self-Calibrated Listwise Reranking with Large Language Models [137.6557607279876]
Large language models (LLMs) have been employed in reranking tasks through a sequence-to-sequence approach.
This reranking paradigm requires a sliding window strategy to iteratively handle larger candidate sets (a generic sketch of this strategy appears after the list below).
We propose a novel self-calibrated listwise reranking method, which aims to leverage LLMs to produce global relevance scores for ranking.
arXiv Detail & Related papers (2024-11-07T10:31:31Z)
- LLM-based Optimization of Compound AI Systems: A Survey [64.39860384538338]
In a compound AI system, components such as an LLM call, a retriever, a code interpreter, or tools are interconnected.
Recent advancements enable end-to-end optimization of these components' parameters using an LLM.
This paper presents a survey of the principles and emerging trends in LLM-based optimization of compound AI systems.
arXiv Detail & Related papers (2024-10-21T18:06:25Z)
- Instruction Tuning Vs. In-Context Learning: Revisiting Large Language Models in Few-Shot Computational Social Science [0.1499944454332829]
We evaluate the classification performance of large language models (LLMs) using in-context learning (ICL) and instruction tuning (IT).
ICL offers a rapid alternative for task adaptation by learning from examples without explicit gradient updates.
Our research highlights the significant advantages of ICL in handling CSS tasks in few-shot settings.
arXiv Detail & Related papers (2024-09-23T02:43:08Z)
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [38.29072578390376]
We show that, while effective, ICL alignment with URIAL still underperforms compared to instruction fine-tuning on the established benchmark MT-Bench.
We provide the first, to our knowledge, systematic comparison of ICL and instruction fine-tuning (IFT) for instruction following in the low data regime.
arXiv Detail & Related papers (2024-05-30T09:28:56Z)
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of the exemplars included in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- ParaICL: Towards Robust Parallel In-Context Learning [74.38022919598443]
Large language models (LLMs) have become the norm in natural language processing.
Few-shot in-context learning (ICL) relies on the choice of few-shot demonstration examples.
We propose a novel method named parallel in-context learning (ParaICL).
arXiv Detail & Related papers (2024-03-31T05:56:15Z)
- RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the feature-wise contrastive learning (FCL) objective is designed to eliminate redundant solutions on user-item positive pairs.
The batch-wise contrastive learning (BCL) objective is utilized to generate contrastive embeddings on output vectors, enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z)
- Positional Information Matters for Invariant In-Context Learning: A Case Study of Simple Function Classes [39.08988313527199]
In-context learning (ICL) refers to the ability of a model to condition on a few in-context demonstrations to generate the answer for a new query input.
Despite the impressive ICL ability of LLMs, ICL is sensitive to the input demonstrations and limited to short context lengths.
arXiv Detail & Related papers (2023-11-30T02:26:55Z)
- OpenICL: An Open-Source Framework for In-context Learning [48.75452105457122]
We introduce OpenICL, an open-source toolkit for In-context Learning (ICL) and large language model evaluation.
OpenICL is research-friendly, with a highly flexible architecture that lets users easily combine different components to suit their needs.
The effectiveness of OpenICL has been validated on a wide range of NLP tasks, including classification, QA, machine translation, and semantic parsing.
arXiv Detail & Related papers (2023-03-06T06:20:25Z)
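The sliding-window strategy mentioned in the Self-Calibrated Listwise Reranking entry above can be illustrated generically. This is a hedged sketch, not that paper's method: the `rerank_window` callback, window size, and stride are hypothetical placeholders for whatever listwise LLM scoring a real system would use.

```python
# Generic sliding-window listwise reranking: rerank fixed-size windows from the
# bottom of the candidate list to the top so strong candidates bubble upward.
from typing import Callable, List

def sliding_window_rerank(
    candidates: List[str],
    rerank_window: Callable[[List[str]], List[str]],  # hypothetical listwise reranker
    window: int = 4,
    stride: int = 2,
) -> List[str]:
    ranked = list(candidates)
    start = max(len(ranked) - window, 0)
    while True:
        # Reorder one window in place, then slide the window toward the top of the list.
        ranked[start:start + window] = rerank_window(ranked[start:start + window])
        if start == 0:
            break
        start = max(start - stride, 0)
    return ranked

# Toy usage: "relevance" is just the number embedded in each candidate string.
docs = [f"doc-{i} relevance={s}" for i, s in enumerate([3, 9, 1, 7, 5, 8])]
by_score = lambda w: sorted(w, key=lambda d: -int(d.rsplit("=", 1)[1]))
print(sliding_window_rerank(docs, rerank_window=by_score))
```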