Exploring the Relationship between In-Context Learning and Instruction Tuning
- URL: http://arxiv.org/abs/2311.10367v1
- Date: Fri, 17 Nov 2023 07:40:46 GMT
- Title: Exploring the Relationship between In-Context Learning and Instruction Tuning
- Authors: Hanyu Duan, Yixuan Tang, Yi Yang, Ahmed Abbasi, Kar Yan Tam
- Abstract summary: In-Context Learning (ICL) and Instruction Tuning (IT) are two primary paradigms for adapting Large Language Models (LLMs) to downstream applications.
In ICL, a set of demonstrations is provided at inference time, but the LLM's parameters are not updated.
In IT, a set of demonstrations is used to tune the LLM's parameters at training time, but no demonstrations are used at inference time.
- Score: 18.186126518966017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In-Context Learning (ICL) and Instruction Tuning (IT) are two primary
paradigms for adapting Large Language Models (LLMs) to downstream applications,
yet they differ significantly. In ICL, a set of demonstrations is provided at
inference time, but the LLM's parameters are not updated. In IT, a set of
demonstrations is used to tune the LLM's parameters at training time, but no
demonstrations are used at inference time. Although a growing body of
literature has explored ICL and IT, studies on these topics have largely been
conducted in isolation, leading to a disconnect between the two paradigms. In
this work, we explore the relationship between ICL and IT by examining how the
hidden states of LLMs change under the two paradigms. Through carefully designed
experiments conducted with LLaMA-2 (7B and 13B), we find that ICL is implicit
IT; in other words, ICL changes an LLM's hidden states as if the demonstrations
had been used to instruction-tune the model. Furthermore, the convergence
between ICL and IT is largely contingent on several factors related to the
provided demonstrations. Overall, this work offers a unique perspective on the
connection between ICL and IT and sheds light on the behavior of LLMs.
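To make the kind of measurement involved concrete, below is a minimal sketch of how one might probe such a hidden-state comparison with Hugging Face transformers. It assumes access to a LLaMA-2 checkpoint (any causal LM would do for illustration) and uses a toy sentiment prompt of my own; it only contrasts the query's final-token representations with and without demonstrations in context, whereas the paper's full protocol additionally compares against an instruction-tuned counterpart.

```python
# A minimal sketch (not the paper's exact protocol) of comparing hidden states
# with and without in-context demonstrations. Model name, prompts, and the
# similarity metric are illustrative assumptions; the paper uses LLaMA-2 (7B/13B)
# and additionally contrasts against an instruction-tuned counterpart.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM works for this sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

demos = (
    "Review: The plot was dull. Sentiment: negative\n"
    "Review: A delightful surprise. Sentiment: positive\n"
)
query = "Review: I loved every minute. Sentiment:"

@torch.no_grad()
def last_token_states(text):
    """Return a (num_layers + 1, hidden_size) tensor of the final token's hidden states."""
    ids = tok(text, return_tensors="pt").to(model.device)
    out = model(**ids, output_hidden_states=True)
    return torch.stack([h[0, -1] for h in out.hidden_states])

h_icl = last_token_states(demos + query)   # demonstrations in context (ICL)
h_zero = last_token_states(query)          # zero-shot, no demonstrations

# Per-layer cosine similarity between the ICL and zero-shot representations of
# the query's final token; the paper studies how such representations shift.
sims = F.cosine_similarity(h_icl.float(), h_zero.float(), dim=-1)
for layer, s in enumerate(sims.tolist()):
    print(f"layer {layer:2d}: cos = {s:.3f}")
```

A layer-wise similarity profile of this kind is one simple way to quantify how much in-context demonstrations shift the model's internal representation of the same query.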
Related papers
- What Do Language Models Learn in Context? The Structured Task Hypothesis [89.65045443150889]
Large language models (LLMs) learn a novel task from in-context examples presented in a demonstration, a capability termed in-context learning (ICL).
One popular hypothesis explains ICL as task selection: the model uses the demonstration to select a task already learned during pre-training.
Another popular hypothesis is that ICL is a form of meta-learning, i.e., the models learn a learning algorithm at pre-training time and apply it to the demonstration.
arXiv Detail & Related papers (2024-06-06T16:15:34Z) - The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition [74.04775677110179]
In-context Learning (ICL) has emerged as a powerful paradigm for performing natural language tasks with Large Language Models (LLM)
We show that LLMs have strong yet inconsistent priors in emotion recognition that ossify their predictions.
Our results suggest that caution is needed when using ICL with larger LLMs for affect-centered tasks outside their pre-training domain.
arXiv Detail & Related papers (2024-03-25T19:07:32Z) - Rectifying Demonstration Shortcut in In-Context Learning [15.08431909212102]
Large language models (LLMs) are able to solve various tasks with only a few demonstrations utilizing their in-context learning (ICL) abilities.
LLMs often rely on their pre-trained semantic priors of demonstrations rather than on the input-label relationships when making ICL predictions.
arXiv Detail & Related papers (2024-03-14T15:30:14Z) - Comparable Demonstrations are Important in In-Context Learning: A Novel
Perspective on Demonstration Selection [22.29452683679149]
In-Context Learning (ICL) is an important paradigm for adapting Large Language Models (LLMs) to downstream tasks through a few demonstrations.
This study explores the ICL mechanisms from a novel perspective, providing a deeper insight into the demonstration selection strategy for ICL.
arXiv Detail & Related papers (2023-12-12T18:05:46Z) - Understanding and Improving In-Context Learning on Vision-language
Models [42.7212469140844]
In-context learning (ICL) on large language models (LLMs) has received great attention, and this technique can be applied to vision-language models (VLMs).
This study investigates the significance of both visual and language information.
We propose a simple yet effective approach, termed Mixed Modality In-Context Example Selection (MMICES).
arXiv Detail & Related papers (2023-11-29T19:08:11Z)
- Hint-enhanced In-Context Learning wakes Large Language Models up for knowledge-intensive tasks [54.153914606302486]
The in-context learning (ICL) ability of large language models (LLMs) has emerged with their increasing scale.
We propose a new paradigm called Hint-enhanced In-Context Learning (HICL) to explore the power of ICL in open-domain question answering.
arXiv Detail & Related papers (2023-11-03T14:39:20Z)
- Do pretrained Transformers Learn In-Context by Gradient Descent? [21.23795112800977]
In this paper, we investigate the emergence of In-Context Learning (ICL) in language models pre-trained on natural data (LLaMa-7B).
We find that ICL and Gradient Descent (GD) modify the output distribution of language models differently (a rough sketch of such a comparison follows this entry).
These results indicate that the equivalence between ICL and GD remains an open hypothesis that calls for further study.
arXiv Detail & Related papers (2023-10-12T17:32:09Z)
- Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning [77.7070536959126]
In-context learning (ICL) emerges as a promising capability of large language models (LLMs).
In this paper, we investigate the working mechanism of ICL through an information flow lens.
We introduce an anchor re-weighting method to improve ICL performance, a demonstration compression technique to expedite inference, and an analysis framework for diagnosing ICL errors in GPT2-XL.
arXiv Detail & Related papers (2023-05-23T15:26:20Z) - Iterative Forward Tuning Boosts In-Context Learning in Language Models [88.25013390669845]
In this study, we introduce a novel two-stage framework to boost in-context learning in large language models (LLMs)
Specifically, our framework delineates the ICL process into two distinct stages: Deep-Thinking and test stages.
The Deep-Thinking stage incorporates a unique attention mechanism, i.e., iterative enhanced attention, which enables multiple rounds of information accumulation.
arXiv Detail & Related papers (2023-05-22T13:18:17Z) - What In-Context Learning "Learns" In-Context: Disentangling Task
Recognition and Task Learning [24.395288160951118]
Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations.
We characterize two ways through which ICL leverages demonstrations: task recognition (TR) and task learning (TL).
We show that models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations.
arXiv Detail & Related papers (2023-05-16T18:05:19Z) - ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for
Document Information Extraction [56.790794611002106]
Large language models (LLMs) have demonstrated remarkable results in various natural language processing (NLP) tasks with in-context learning.
We propose a simple but effective in-context learning framework called ICL-D3IE.
Specifically, we extract the most difficult and distinct segments from hard training documents as hard demonstrations.
arXiv Detail & Related papers (2023-03-09T06:24:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.