Mind the instructions: a holistic evaluation of consistency and
interactions in prompt-based learning
- URL: http://arxiv.org/abs/2310.13486v1
- Date: Fri, 20 Oct 2023 13:25:24 GMT
- Title: Mind the instructions: a holistic evaluation of consistency and
interactions in prompt-based learning
- Authors: Lucas Weber, Elia Bruni and Dieuwke Hupkes
- Abstract summary: We present a detailed analysis of which design choices cause instabilities and inconsistencies in task predictions.
We show how spurious correlations between input distributions and labels pose only a minor problem for prompted models.
We statistically analyse the results to show which factors are the most influential, interactive or stable.
- Score: 14.569770617709073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding the best way of adapting pre-trained language models to a task is a
big challenge in current NLP. Just like the previous generation of task-tuned
models (TT), models that are adapted to tasks via in-context learning (ICL) are
robust in some setups but not in others. Here, we present a detailed analysis
of which design choices cause instabilities and inconsistencies in LLM
predictions. First, we show how spurious correlations between input
distributions and labels -- a known issue in TT models -- pose only a minor
problem for prompted models. Then, we engage in a systematic, holistic
evaluation of different factors that have been found to influence predictions
in a prompting setup. We test all possible combinations of a range of factors
on both vanilla and instruction-tuned (IT) LLMs of different scales and
statistically analyse the results to show which factors are the most
influential, interactive or stable. Our results show which factors can be used
without precautions and which should be avoided or handled with care in most
settings.
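A minimal sketch of the kind of full-factorial evaluation the abstract describes. The factor names, the evaluate() placeholder, and the ANOVA-style model below are illustrative assumptions, not the authors' exact protocol:

```python
# Sketch: enumerate all combinations of prompt-design factors, score each
# cell, then ask which factors (and interactions) explain the variance.
from itertools import product
import random

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical prompt-design factors; the paper tests its own set.
FACTORS = {
    "instruction": ["none", "human", "generated"],
    "n_demos": [0, 2, 8],
    "label_words": ["original", "flipped"],
    "separator": ["newline", "space"],
}

def evaluate(config: dict) -> float:
    """Placeholder: build a prompt from `config`, run the LLM on a fixed
    evaluation set, and return accuracy. Random stand-in so this runs."""
    return random.random()

rows = []
for combo in product(*FACTORS.values()):
    config = dict(zip(FACTORS, combo))
    rows.append({**config, "accuracy": evaluate(config)})
df = pd.DataFrame(rows)

# Linear model with main effects plus one pairwise interaction term;
# the fitted coefficients and F-tests indicate which factors are
# influential, which interact, and which are stable.
model = smf.ols(
    "accuracy ~ C(instruction) * C(n_demos) + C(label_words) + C(separator)",
    data=df,
).fit()
print(model.summary())
```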
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors [74.04775677110179]
In-context Learning (ICL) has become the primary method for performing natural language tasks with Large Language Models (LLMs).
In this work, we examine whether this is the result of the aggregation used in corresponding datasets, where trying to combine low-agreement, disparate annotations might lead to annotation artifacts that create detrimental noise in the prompt.
Our results indicate that aggregation is a confounding factor in the modeling of subjective tasks, and advocate focusing on modeling individuals instead.
arXiv Detail & Related papers (2024-10-17T17:16:00Z)
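To make the aggregation concern above concrete, here is a toy illustration (entirely hypothetical data and labels) of how majority-vote aggregation collapses annotator disagreement into a single, overconfident label:

```python
from collections import Counter

# Toy annotations for one subjective example: three annotators disagree.
annotations = ["offensive", "not_offensive", "offensive"]

# Majority-vote aggregation keeps only the winning label...
majority_label, _ = Counter(annotations).most_common(1)[0]
print(majority_label)  # "offensive"

# ...while the empirical label distribution shows genuine disagreement,
# which per-annotator modeling would preserve.
dist = {lab: c / len(annotations) for lab, c in Counter(annotations).items()}
print(dist)  # {'offensive': 0.666..., 'not_offensive': 0.333...}
```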
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion: adaptively setting the label smoothing value during training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
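A minimal PyTorch sketch of the adaptive label-smoothing idea described in the UAL summary above; the linear mapping from per-sample uncertainty to a smoothing value is an assumption, not the paper's exact formula:

```python
import torch
import torch.nn.functional as F

def ual_loss(logits: torch.Tensor, targets: torch.Tensor,
             uncertainty: torch.Tensor) -> torch.Tensor:
    """Cross-entropy with per-sample label smoothing.

    `uncertainty` is assumed to be in [0, 1] per sample (e.g. from an
    ensemble or annotator disagreement); higher uncertainty means more
    smoothing. The 0.2 scale below is an illustrative assumption.
    """
    n_classes = logits.size(-1)
    eps = 0.2 * uncertainty.unsqueeze(-1)          # per-sample smoothing value
    one_hot = F.one_hot(targets, n_classes).float()
    soft = one_hot * (1 - eps) + eps / n_classes   # smoothed target distribution
    logp = F.log_softmax(logits, dim=-1)
    return -(soft * logp).sum(dim=-1).mean()

# Usage: logits (B, C), targets (B,), per-sample uncertainty (B,)
loss = ual_loss(torch.randn(4, 5), torch.tensor([0, 2, 1, 4]),
                torch.tensor([0.1, 0.9, 0.5, 0.3]))
```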
- A Unified Causal View of Instruction Tuning [76.1000380429553]
We develop a meta Structural Causal Model (meta-SCM) to integrate different NLP tasks under a single causal structure of the data.
The key idea is to learn task-required causal factors and use only those to make predictions for a given task.
arXiv Detail & Related papers (2024-02-09T07:12:56Z)
- The ICL Consistency Test [14.569770617709073]
Large language models (LLMs) are adapted to tasks via prompt-based methods like in-context learning (ICL).
This lack of consistency in prompt-based learning hints at a lack of robust generalisation.
Here, we introduce the ICL consistency test, a contribution to the GenBench collaborative benchmark task (CBT).
arXiv Detail & Related papers (2023-12-08T10:22:43Z)
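One hedged way to operationalize the consistency notion above: measure how well the model's predictions on the same data points agree across different prompt setups. Cohen's kappa and the toy data here are illustrative choices, not necessarily the benchmark's exact metric:

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# predictions[setup] = labels the model predicts for the SAME data points
# under that prompt setup (toy data, purely illustrative).
predictions = {
    "setup_A": [1, 0, 1, 1, 0],
    "setup_B": [1, 0, 0, 1, 0],
    "setup_C": [0, 1, 0, 1, 0],
}

# A model that generalizes robustly should agree with itself across
# setups; low pairwise kappa signals prompt-induced inconsistency.
for (a, pa), (b, pb) in combinations(predictions.items(), 2):
    print(f"{a} vs {b}: kappa = {cohen_kappa_score(pa, pb):.2f}")
```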
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
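A minimal sketch of the demonstration-selection recipe the summary above points to: prefer candidate examples that are similar to the test input and on which the model is already low-uncertainty. The embeddings, uncertainty scores, and equal weighting of the two terms are hypothetical stand-ins:

```python
import numpy as np

def select_demonstrations(test_emb, cand_embs, cand_uncertainty, k=4):
    """Rank candidate in-context examples by cosine similarity to the test
    example minus their model uncertainty, and keep the top k. The equal
    weighting of the two terms is an illustrative assumption."""
    sims = cand_embs @ test_emb / (
        np.linalg.norm(cand_embs, axis=1) * np.linalg.norm(test_emb)
    )
    scores = sims - cand_uncertainty        # high similarity, low uncertainty
    return np.argsort(-scores)[:k]          # indices of chosen demonstrations

# Toy usage: 10 candidates with 8-dim embeddings and [0, 1] uncertainty.
rng = np.random.default_rng(0)
idx = select_demonstrations(rng.normal(size=8), rng.normal(size=(10, 8)),
                            rng.uniform(0, 1, size=10))
print(idx)
```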
- Think Twice: Measuring the Efficiency of Eliminating Prediction Shortcuts of Question Answering Models [3.9052860539161918]
We propose a simple method for measuring the scale of a model's reliance on any identified spurious feature.
We assess the robustness towards a large set of known and newly found prediction biases for various pre-trained models and debiasing methods in Question Answering (QA).
We find that while existing debiasing methods can mitigate reliance on a chosen spurious feature, the OOD performance gains of these methods cannot be explained by mitigated reliance on biased features.
arXiv Detail & Related papers (2023-05-11T14:35:00Z)
- Uncertainty Estimation for Language Reward Models [5.33024001730262]
Language models can learn a range of capabilities from unsupervised training on text corpora.
It is often easier for humans to choose between options than to provide labeled data, and prior work has achieved state-of-the-art performance by training a reward model from such preference comparisons.
We seek to address these problems via uncertainty estimation, which can improve sample efficiency and robustness using active learning and risk-averse reinforcement learning.
arXiv Detail & Related papers (2022-03-14T20:13:21Z)
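A hedged sketch of one standard way to obtain the uncertainty estimates the summary above refers to: score preferences with an ensemble of reward models and penalize disagreement, so downstream RL is risk-averse. The ensemble-and-penalty formulation is a common pattern, not necessarily the paper's exact method:

```python
import torch

def risk_averse_reward(ensemble, inputs, risk_coef=1.0):
    """Score `inputs` with each reward model in `ensemble`; return the mean
    reward minus `risk_coef` times the ensemble's standard deviation.
    High disagreement (uncertainty) lowers the effective reward."""
    rewards = torch.stack([rm(inputs) for rm in ensemble])  # (M, batch)
    return rewards.mean(dim=0) - risk_coef * rewards.std(dim=0)

# Toy usage with linear "reward models" over 16-dim features.
models = [torch.nn.Linear(16, 1) for _ in range(5)]
ensemble = [lambda t, m=m: m(t).squeeze(-1) for m in models]
scores = risk_averse_reward(ensemble, torch.randn(3, 16))
print(scores)
```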
- Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates [26.527311287924995]
We show that, in a controlled setup, influence tuning can help deconfound the model from spurious patterns in the data.
arXiv Detail & Related papers (2021-10-07T06:59:46Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample in order to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
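A minimal sketch of the transductive prototype update described in the last entry: refine each class prototype with confidence-weighted query embeddings. The soft-assignment confidence below is an illustrative stand-in for the meta-learned confidence:

```python
import torch

def refine_prototypes(prototypes, query_embs, temperature=10.0):
    """One transductive refinement step. `prototypes` is (C, D) from the
    labeled support set; `query_embs` is (Q, D) and unlabeled. A softmax
    over negative distances stands in for the meta-learned confidence."""
    dists = torch.cdist(query_embs, prototypes)          # (Q, C)
    conf = torch.softmax(-temperature * dists, dim=1)    # per-query confidence
    # Confidence-weighted sum of queries, blended with original prototypes
    # (each original prototype keeps weight 1 in the weighted mean).
    weighted = conf.t() @ query_embs                     # (C, D)
    weights = conf.sum(dim=0, keepdim=True).t()          # (C, 1)
    return (prototypes + weighted) / (1 + weights)

protos = refine_prototypes(torch.randn(5, 64), torch.randn(20, 64))
print(protos.shape)  # torch.Size([5, 64])
```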