What Makes Good In-context Demonstrations for Code Intelligence Tasks with LLMs?
- URL: http://arxiv.org/abs/2304.07575v2
- Date: Tue, 8 Aug 2023 13:46:02 GMT
- Title: What Makes Good In-context Demonstrations for Code Intelligence Tasks with LLMs?
- Authors: Shuzheng Gao, Xin-Cheng Wen, Cuiyun Gao, Wenxuan Wang, Hongyu Zhang,
Michael R. Lyu
- Abstract summary: Large language models have shown the ability of in-context learning (ICL).
ICL employs task instructions and a few examples as demonstrations, and then inputs the demonstrations to the language models for making predictions.
It is important to systematically investigate how to construct a good demonstration for code-related tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained models of source code have gained widespread popularity in many
code intelligence tasks. Recently, with the scaling of the model and corpus
size, large language models have shown the ability of in-context learning
(ICL). ICL employs task instructions and a few examples as demonstrations, and
then inputs the demonstrations to the language models for making predictions.
This new learning paradigm is training-free and has shown impressive
performance in various natural language processing and code intelligence tasks.
However, the performance of ICL heavily relies on the quality of
demonstrations, e.g., the selected examples. It is important to systematically
investigate how to construct a good demonstration for code-related tasks. In
this paper, we empirically explore the impact of three key factors on the
performance of ICL in code intelligence tasks: the selection, order, and number
of demonstration examples. We conduct extensive experiments on three code
intelligence tasks including code summarization, bug fixing, and program
synthesis. Our experimental results demonstrate that all the above three
factors dramatically impact the performance of ICL in code intelligence tasks.
Additionally, we summarize our findings and provide takeaway suggestions on how
to construct effective demonstrations, taking into account these three
perspectives. We also show that a carefully-designed demonstration based on our
findings can lead to substantial improvements over widely-used demonstration
construction methods, e.g., improving BLEU-4 on code summarization by at least
9.90%, and exact match (EM) on bug fixing and program synthesis by at least
175.96% and 50.81%, respectively.
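The three factors the paper studies (selection, order, and number of demonstrations) all enter at prompt-construction time. As a minimal illustration (not the paper's actual implementation; the prompt template and field names here are assumptions), a few-shot ICL prompt for code summarization can be assembled like this:

```python
def build_icl_prompt(instruction, demonstrations, query, k=3):
    """Assemble a few-shot ICL prompt from a task instruction, the first k
    demonstration (code, summary) pairs, and the query code to summarize.
    Which examples appear, in what order, and how many (k) are exactly the
    three factors the paper finds dramatically affect ICL performance."""
    parts = [instruction]
    for source, target in demonstrations[:k]:
        parts.append(f"Code:\n{source}\nSummary: {target}")
    parts.append(f"Code:\n{query}\nSummary:")  # model completes after "Summary:"
    return "\n\n".join(parts)

demos = [
    ("def add(a, b):\n    return a + b", "Return the sum of two numbers."),
    ("def is_even(n):\n    return n % 2 == 0", "Check whether a number is even."),
]
prompt = build_icl_prompt("Summarize the given code.", demos,
                          "def neg(x):\n    return -x")
print(prompt)
```

Reordering `demos` or changing `k` yields a different prompt string, which is why these seemingly cosmetic choices can change model output.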
Related papers
- Instructive Code Retriever: Learn from Large Language Model's Feedback for Code Intelligence Tasks [10.867880635762395]
We introduce a novel approach named Instructive Code Retriever (ICR).
ICR is designed to retrieve examples that enhance model inference across various code intelligence tasks and datasets.
We evaluate our model's effectiveness on various tasks, i.e., code summarization, program synthesis, and bug fixing.
arXiv Detail & Related papers (2024-10-15T05:44:00Z)
- MILE: A Mutation Testing Framework of In-Context Learning Systems [5.419884861365132]
We propose a mutation testing framework designed to characterize the quality and effectiveness of test data for ICL systems.
First, we propose several mutation operators specialized for ICL demonstrations, as well as corresponding mutation scores for ICL test sets.
With comprehensive experiments, we showcase the effectiveness of our framework in evaluating the reliability and quality of ICL test suites.
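MILE's specific operators are not detailed in this summary, but the idea of mutating ICL demonstrations to probe test-suite quality can be sketched with two hypothetical operators (these names and designs are illustrative assumptions, not MILE's actual operators):

```python
import random

def mutate_shuffle(demonstrations, seed=0):
    """Order-mutation operator: permute the demonstrations while keeping
    their content intact. ICL is known to be order-sensitive, so a robust
    test suite should detect behavior changes under this mutation."""
    rng = random.Random(seed)
    mutated = list(demonstrations)
    rng.shuffle(mutated)
    return mutated

def mutate_drop(demonstrations, index=0):
    """Deletion operator: remove one demonstration to probe how much
    each individual example contributes to the prediction."""
    return demonstrations[:index] + demonstrations[index + 1:]
```

A mutation score would then count how many such mutants the ICL test set distinguishes from the original system.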
arXiv Detail & Related papers (2024-09-07T13:51:42Z)
- DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning [75.68193159293425]
In-context learning (ICL) allows transformer-based language models to learn a specific task with a few "task demonstrations" without updating their parameters.
We propose an influence function-based attribution technique, DETAIL, that addresses the specific characteristics of ICL.
We experimentally prove the wide applicability of DETAIL by showing our attribution scores obtained on white-box models are transferable to black-box models in improving model performance.
arXiv Detail & Related papers (2024-05-22T15:52:52Z)
- Does In-Context Learning Really Learn? Rethinking How Large Language Models Respond and Solve Tasks via In-Context Learning [41.606494950216764]
In-context Learning (ICL) has emerged as a powerful capability alongside the development of scaled-up large language models (LLMs).
This paper decomposes the overall performance of ICL into three dimensions, label space, format, and discrimination.
We show that ICL exhibits significant efficacy in regulating the label space and format, which helps LLMs respond to desired label words.
arXiv Detail & Related papers (2024-04-11T08:20:10Z)
- Code Representation Learning At Scale [75.04686476303436]
We fuel code representation learning with a vast amount of code data via a two-stage pretraining scheme.
We first train the encoders via a mix that leverages both randomness in masking language modeling and the structure aspect of programming language.
We then enhance the representations via contrastive learning with hard negative and hard positive constructed in an unsupervised manner.
arXiv Detail & Related papers (2024-02-02T22:19:15Z)
- In-context Learning with Retrieved Demonstrations for Language Models: A Survey [23.24271704145876]
Few-shot in-context learning (ICL) allows models to adapt to new tasks with just a few demonstrations in the input context.
Instead of using a fixed set of demonstrations, one recent development is to retrieve demonstrations tailored to each input query.
We discuss and compare different design choices for retrieval models, retrieval training procedures, and inference algorithms.
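The core idea of per-query retrieval is to score every candidate in a demonstration pool against the incoming query and keep the top k. Learned retrievers do this with trained embeddings; as a minimal stand-in (this scoring function is an illustrative assumption, not a method from the survey), token-overlap similarity already captures the shape of the algorithm:

```python
def retrieve_demonstrations(query, pool, k=2):
    """Retrieve the k pool examples whose inputs share the most tokens with
    the query, using Jaccard similarity over whitespace tokens as a simple
    proxy for a learned retrieval model."""
    q_tokens = set(query.split())

    def score(example):
        tokens = set(example[0].split())  # example = (input_code, output)
        return len(q_tokens & tokens) / len(q_tokens | tokens)

    return sorted(pool, key=score, reverse=True)[:k]

pool = [
    ("def add(a, b): return a + b", "sum"),
    ("for i in range(10): print(i)", "loop"),
    ("def mul(a, b): return a * b", "product"),
]
results = retrieve_demonstrations("def sub(a, b): return a - b", pool)
```

Here both function-definition examples outscore the loop example, so the retrieved demonstrations are tailored to the query rather than fixed in advance.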
arXiv Detail & Related papers (2024-01-21T23:34:42Z)
- Identifying and Analyzing Task-Encoding Tokens in Large Language Models [55.03191279766383]
In this paper, we identify and analyze task-encoding tokens on whose representations the task performance depends.
We show that template and stopword tokens are the most prone to be task-encoding.
Our work sheds light on how large language models (LLMs) learn to perform a task from demonstrations, deepens our understanding of the varied roles different types of tokens play in LLMs, and provides insights for avoiding instability from improperly utilizing task-encoding tokens.
arXiv Detail & Related papers (2024-01-20T20:55:21Z)
- Scaling In-Context Demonstrations with Structured Attention [75.41845145597875]
We propose a better architectural design for in-context learning.
Structured Attention for In-Context Learning (SAICL) replaces full attention with a structured attention mechanism.
We show that SAICL achieves comparable or better performance than full attention while obtaining up to 3.4x inference speed-up.
arXiv Detail & Related papers (2023-07-05T23:26:01Z)
- ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction [56.790794611002106]
Large language models (LLMs) have demonstrated remarkable results in various natural language processing (NLP) tasks with in-context learning.
We propose a simple but effective in-context learning framework called ICL-D3IE.
Specifically, we extract the most difficult and distinct segments from hard training documents as hard demonstrations.
arXiv Detail & Related papers (2023-03-09T06:24:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.