A Survey of Confidence Estimation and Calibration in Large Language Models
- URL: http://arxiv.org/abs/2311.08298v2
- Date: Mon, 25 Mar 2024 06:01:49 GMT
- Title: A Survey of Confidence Estimation and Calibration in Large Language Models
- Authors: Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, Iryna Gurevych
- Abstract summary: Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks in various domains.
Despite their impressive performance, they can be unreliable due to factual errors in their generations.
Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations.
- Score: 86.692994151323
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks in various domains. Despite their impressive performance, they can be unreliable due to factual errors in their generations. Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations. There has been a lot of recent research aiming to address this, but there has been no comprehensive overview to organize it and outline the main lessons learned. The present survey aims to bridge this gap. In particular, we outline the challenges and we summarize recent technical advancements for LLM confidence estimation and calibration. We further discuss their applications and suggest promising directions for future work.
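As a concrete illustration of what assessing calibration involves, the sketch below computes Expected Calibration Error (ECE), a standard metric in this literature that compares a model's stated confidence with its empirical accuracy. This is a minimal example assuming equal-width confidence bins; the function name and binning choices are illustrative and not taken from the survey.

```python
# Minimal sketch (not from the paper) of Expected Calibration Error (ECE):
# how far a model's stated confidence is from its empirical accuracy.
# Equal-width bins and the bin count are illustrative assumptions.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities in [0, 1]; correct: 0/1 outcomes."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Assign each prediction to one equal-width confidence bin.
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc = correct[mask].mean()            # empirical accuracy in the bin
        conf = confidences[mask].mean()       # average stated confidence in the bin
        ece += mask.mean() * abs(acc - conf)  # weight the gap by bin frequency
    return ece

# Toy example: an overconfident model yields a large ECE.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.7], [1, 0, 1, 0]))
```

A well-calibrated model yields an ECE near zero; systematically overconfident generations, as in the toy example above, push it upward.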
Related papers
- A Survey on LLM-as-a-Judge [10.257160590560824]
Large Language Models (LLMs) have achieved remarkable success across diverse domains.
LLMs present a compelling alternative to traditional expert-driven evaluations.
This paper addresses the core question: How can reliable LLM-as-a-Judge systems be built?
arXiv Detail & Related papers (2024-11-23T16:03:35Z)
- Factuality of Large Language Models: A Survey [29.557596701431827]
We critically analyze existing work with the aim to identify the major challenges and their associated causes.
We analyze the obstacles to automated factuality evaluation for open-ended text generation.
arXiv Detail & Related papers (2024-02-04T09:36:31Z)
- Examining Forgetting in Continual Pre-training of Aligned Large Language Models [66.62800021628276]
We investigate the phenomenon of forgetting that occurs during continual pre-training on an existing fine-tuned LLM.
Experiment results highlight the non-trivial challenge of addressing catastrophic forgetting during continual pre-training.
arXiv Detail & Related papers (2024-01-06T05:34:09Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Are Large Language Models Good Fact Checkers: A Preliminary Study [26.023148371263012]
Large Language Models (LLMs) have drawn significant attention due to their outstanding reasoning capabilities and extensive knowledge repository.
This study aims to comprehensively evaluate various LLMs in tackling specific fact-checking subtasks.
arXiv Detail & Related papers (2023-11-29T05:04:52Z)
- Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z)
- Large Language Model Alignment: A Survey [42.03229317132863]
The potential of large language models (LLMs) is undeniably vast; however, they may yield texts that are imprecise, misleading, or even detrimental.
This survey endeavors to furnish an extensive exploration of alignment methodologies designed for LLMs.
We also probe into salient issues, including the models' interpretability and their potential vulnerabilities to adversarial attacks.
arXiv Detail & Related papers (2023-09-26T15:49:23Z)
- Are Large Language Models Really Robust to Word-Level Perturbations? [68.60618778027694]
We propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools.
Longer conversations better reveal a language model's comprehensive grasp of language and its proficiency in understanding questions.
Our results demonstrate that LLMs frequently exhibit vulnerability to word-level perturbations that are commonplace in daily language usage.
arXiv Detail & Related papers (2023-09-20T09:23:46Z)
- Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
A promising approach to rectify these flaws is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
arXiv Detail & Related papers (2023-08-06T18:38:52Z)
- Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models [16.524794442035265]
We study the risk assessment of Large Language Models (LLMs) from the lens of uncertainty.
Our findings validate the effectiveness of uncertainty estimation for revealing LLMs' uncertain/non-factual predictions.
Insights from our study shed light on future design and development for reliable LLMs.
arXiv Detail & Related papers (2023-07-16T08:28:04Z)
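Many uncertainty-measurement baselines in this line of work start from the model's own token probabilities. The sketch below shows one such simple signal, a length-normalized log-likelihood of a generated answer; it is an assumed, generic baseline for illustration, not the specific method of the paper above.

```python
# Minimal sketch (an assumption, not the cited paper's method) of a simple
# sequence-level confidence score: the length-normalized log-likelihood of a
# generated answer, a common baseline in LLM uncertainty estimation.
import math

def sequence_confidence(token_logprobs):
    """token_logprobs: per-token log-probabilities of the generated answer."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    # Geometric mean of token probabilities, in (0, 1]; higher means more confident.
    return math.exp(avg_logprob)

# Example: tokens assigned high probability yield a higher confidence score.
print(sequence_confidence([-0.05, -0.10, -0.02]))  # confident generation
print(sequence_confidence([-1.20, -0.90, -2.30]))  # uncertain generation
```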