Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
- URL: http://arxiv.org/abs/2310.11689v2
- Date: Sat, 11 Nov 2023 19:29:42 GMT
- Title: Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
- Authors: Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O Arik, Tomas
Pfister, Somesh Jha
- Abstract summary: We propose a novel framework for adaptation with self-evaluation to improve the selective prediction performance of large language models (LLMs).
We evaluate our method on a variety of question-answering (QA) datasets and show that it outperforms state-of-the-art selective prediction methods.
- Score: 56.526095828316386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have recently shown great advances in a variety
of tasks, including natural language understanding and generation. However,
their use in high-stakes decision-making scenarios is still limited due to the
potential for errors. Selective prediction is a technique that can be used to
improve the reliability of LLMs by allowing them to abstain from making
predictions when they are unsure of the answer. In this work, we propose a
novel framework for adaptation with self-evaluation to improve the selective
prediction performance of LLMs. Our framework is based on the idea of using
parameter-efficient tuning to adapt the LLM to the specific task at hand while
improving its ability to perform self-evaluation. We evaluate our method on a
variety of question-answering (QA) datasets and show that it outperforms
state-of-the-art selective prediction methods. For example, on the CoQA
benchmark, our method improves the AUACC from 91.23% to 92.63% and improves the
AUROC from 74.61% to 80.25%.
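To make the abstention mechanism concrete, here is a minimal sketch. It illustrates the general selective-prediction recipe, not the paper's implementation; `ask_llm` and `self_eval_score` are hypothetical stand-ins for the adapted LLM and its self-evaluation scorer.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SelectiveAnswer:
    answer: Optional[str]  # None means the model abstained
    confidence: float

def selective_predict(
    question: str,
    ask_llm: Callable[[str], str],                 # hypothetical: the adapted LLM
    self_eval_score: Callable[[str, str], float],  # hypothetical: P(answer is correct)
    threshold: float = 0.8,
) -> SelectiveAnswer:
    """Answer only when the self-evaluation score clears the threshold."""
    answer = ask_llm(question)
    confidence = self_eval_score(question, answer)
    if confidence >= threshold:
        return SelectiveAnswer(answer, confidence)
    return SelectiveAnswer(None, confidence)  # abstain

# Toy usage with stub functions standing in for a real model.
result = selective_predict(
    "Who wrote Hamlet?",
    ask_llm=lambda q: "William Shakespeare",
    self_eval_score=lambda q, a: 0.95,
)
print(result)  # SelectiveAnswer(answer='William Shakespeare', confidence=0.95)
```

Sweeping `threshold` from 0 to 1 traces an accuracy-coverage curve; its area is the AUACC reported above, while the AUROC measures how well the confidence score separates correct from incorrect answers.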
Related papers
- Grade Score: Quantifying LLM Performance in Option Selection [0.0]
"Grade Score" is a novel metric designed to evaluate the consistency and fairness of Large Language Models (LLMs)
The Grade Score combines Entropy, which measures order bias, and Mode Frequency, which assesses choice stability.
The study explores techniques such as prompt engineering and option sampling strategies to optimize the Grade Score.
arXiv Detail & Related papers (2024-06-17T19:29:39Z)
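A minimal sketch of the two Grade Score ingredients named in the entry above, with selections collected by re-asking the same multiple-choice question under shuffled option orders. The paper's exact weighting is not given in this summary; a plain average is assumed here.

```python
import math
from collections import Counter

def grade_score(selected_positions: list[int], selected_options: list[str],
                num_positions: int) -> float:
    """Illustrative combination of the two components: entropy over answer
    positions (order bias) and mode frequency (choice stability)."""
    n = len(selected_positions)
    # Entropy of the position the model picks across shuffled orderings.
    # Near-uniform positions mean little order bias, so normalized entropy ~1.
    pos_counts = Counter(selected_positions)
    entropy = -sum((c / n) * math.log2(c / n) for c in pos_counts.values())
    norm_entropy = entropy / math.log2(num_positions)
    # Mode frequency: how often the model picks its most common option.
    mode_freq = Counter(selected_options).most_common(1)[0][1] / n
    return (norm_entropy + mode_freq) / 2  # assumed: simple average

# 10 shuffled trials of a 4-option question: positions vary, choice is stable.
print(grade_score([0, 1, 2, 3, 0, 1, 2, 3, 1, 2], ["B"] * 10, num_positions=4))
```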
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
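In the same spirit, a rough sketch of explanation-based confidence for the entry above: sample several (explanation, answer) pairs and treat an answer as more certain when many samples agree via few distinct explanations. `sample_explanation_and_answer` is a hypothetical sampler, and the stability heuristic is illustrative; the paper's actual scoring over explanation distributions is more involved.

```python
import random
from collections import Counter, defaultdict
from typing import Callable

def explanation_based_confidence(
    question: str,
    sample_explanation_and_answer: Callable[[str], tuple[str, str]],
    num_samples: int = 10,
) -> tuple[str, float]:
    """Score the majority answer by how much probability mass flows to it
    through stable (recurring) explanations. A real implementation would
    cluster semantically equivalent explanations; exact-match grouping is
    used here as a stand-in."""
    support: Counter = Counter()
    explanations = defaultdict(set)
    for _ in range(num_samples):
        explanation, answer = sample_explanation_and_answer(question)
        support[answer] += 1
        explanations[answer].add(explanation)
    answer, votes = support.most_common(1)[0]
    # Stable explanations: many votes routed through few distinct explanations.
    stability = votes / (len(explanations[answer]) * num_samples)
    return answer, stability

random.seed(0)
sampler = lambda q: random.choice(
    [("4, since 2+2", "4"), ("count two twice: 4", "4"), ("2+2 is 5", "5")]
)
print(explanation_based_confidence("What is 2+2?", sampler))
```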
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning [55.96599486604344]
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process.
We use Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals.
The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data.
arXiv Detail & Related papers (2024-05-01T11:10:24Z)
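The DPO update named in the entry above has a compact closed form. Below is the standard per-pair DPO loss, here imagined applied to a step-level pair ranked by MCTS look-ahead; the log-probabilities would come from the current policy and a frozen reference model.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Direct Preference Optimization loss for one preference pair.
    The 'chosen'/'rejected' completions would be step-level branches
    ranked by MCTS look-ahead, per the summary above."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# Policy already prefers the MCTS-preferred step, so loss is below log(2).
print(dpo_loss(-1.0, -3.0, -2.0, -2.5, beta=0.5))
```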
- Self-Evaluation Improves Selective Generation in Large Language Models [54.003992911447696]
We reformulate open-ended generation tasks into token-level prediction tasks.
We instruct an LLM to self-evaluate its answers.
We benchmark a range of scoring methods based on self-evaluation.
arXiv Detail & Related papers (2023-12-14T19:09:22Z)
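A sketch of the token-level reformulation described in the entry above: sampled candidate answers are laid out as lettered choices so that scoring an answer reduces to reading off the probability of a single token. `choice_probability` is a hypothetical hook into the LLM's next-token distribution.

```python
from typing import Callable

def self_eval_prompt(question: str, candidates: list[str]) -> str:
    """Recast open-ended generation as token-level prediction: present
    sampled candidate answers as lettered choices."""
    letters = "ABCDEFGH"
    lines = [f"Question: {question}", "Choices:"]
    lines += [f"({letters[i]}) {c}" for i, c in enumerate(candidates)]
    lines.append("Which choice is correct? Answer with a single letter:")
    return "\n".join(lines)

def self_eval_scores(
    question: str,
    candidates: list[str],
    choice_probability: Callable[[str, str], float],  # hypothetical: P(letter | prompt)
) -> list[float]:
    """Score each candidate by the probability the model assigns to its letter."""
    prompt = self_eval_prompt(question, candidates)
    return [choice_probability(prompt, "ABCDEFGH"[i]) for i in range(len(candidates))]
```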
- Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection [35.924633625147365]
Large Language Models (LLMs) can adapt to new tasks via in-context learning (ICL).
In this work, we investigate an active learning approach for ICL, where there is a limited budget for annotating examples.
We propose a model-adaptive optimization-free algorithm, termed AdaICL, which identifies examples that the model is uncertain about.
arXiv Detail & Related papers (2023-10-30T22:03:55Z)
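A simplified stand-in for the selection step in the entry above: plain uncertainty sampling over the unlabeled pool. AdaICL itself is model-adaptive and adds more than this, but the budgeted pick-the-most-uncertain loop captures the core idea; `predict_probs` is a hypothetical wrapper around the model's label distribution.

```python
import math
from typing import Callable

def select_for_annotation(
    pool: list[str],
    predict_probs: Callable[[str], list[float]],  # hypothetical: label distribution
    budget: int,
) -> list[str]:
    """Spend the annotation budget on the examples the model is least sure
    about, ranked by predictive entropy."""
    def entropy(example: str) -> float:
        return -sum(p * math.log(p) for p in predict_probs(example) if p > 0)
    return sorted(pool, key=entropy, reverse=True)[:budget]

# Toy pool: the second example has the most uncertain (uniform) prediction.
probs = {"q1": [0.9, 0.1], "q2": [0.5, 0.5], "q3": [0.7, 0.3]}
print(select_for_annotation(list(probs), lambda x: probs[x], budget=1))  # ['q2']
```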
- Improving Selective Visual Question Answering by Learning from Your Peers [74.20167944693424]
Visual Question Answering (VQA) models often struggle to abstain from answering when they are wrong.
We propose the Learning from Your Peers (LYP) approach for training multimodal selection functions that make abstention decisions.
Our approach uses predictions from models trained on distinct subsets of the training data as targets for optimizing a Selective VQA model.
arXiv Detail & Related papers (2023-06-14T21:22:01Z)
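The peer-target construction from the entry above can be sketched as a k-fold loop: each example's abstention target comes from a peer model trained without that example's fold. `train_model` is a hypothetical training routine, and the real LYP setup is multimodal and richer than this.

```python
from typing import Callable, Sequence

def peer_targets(
    dataset: Sequence[tuple[str, str]],  # (input, label) pairs
    train_model: Callable[[list], Callable[[str], str]],  # hypothetical trainer
    num_folds: int = 4,
) -> list[int]:
    """Simplified LYP-style targets: each example is scored by a peer model
    trained WITHOUT the fold containing it, so the target reflects whether
    the example is answerable rather than memorized."""
    targets = [0] * len(dataset)
    for fold in range(num_folds):
        held_out = [i for i in range(len(dataset)) if i % num_folds == fold]
        train = [dataset[i] for i in range(len(dataset)) if i % num_folds != fold]
        peer = train_model(train)
        for i in held_out:
            x, y = dataset[i]
            targets[i] = int(peer(x) == y)  # 1 = peer got it right, answerable
    return targets
```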
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Towards Improving Selective Prediction Ability of NLP Systems [24.774450633678125]
We propose a method that improves models' probability estimates by calibrating them using prediction confidence and the difficulty scores of instances.
We instantiate our method with Natural Language Inference (NLI) and Duplicate Detection (DD) tasks and evaluate it in both In-Domain (IID) and Out-of-Domain (OOD) settings.
arXiv Detail & Related papers (2020-08-21T08:46:36Z)
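As a toy instantiation of the calibration idea in the entry above (the paper learns the calibrator; a fixed multiplicative discount is assumed here):

```python
def calibrated_score(confidence: float, difficulty: float, alpha: float = 1.0) -> float:
    """Illustrative calibration: discount the model's raw confidence by an
    instance difficulty score in [0, 1]. The paper fits this mapping from
    data; a fixed multiplicative discount is assumed here."""
    assert 0.0 <= confidence <= 1.0 and 0.0 <= difficulty <= 1.0
    return max(0.0, confidence * (1.0 - alpha * difficulty))

# Same raw confidence, but the harder instance gets a lower calibrated score.
print(calibrated_score(0.9, difficulty=0.2))  # 0.72
print(calibrated_score(0.9, difficulty=0.8))  # 0.18
```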