I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token
- URL: http://arxiv.org/abs/2412.06676v1
- Date: Mon, 09 Dec 2024 17:13:20 GMT
- Title: I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token
- Authors: Roi Cohen, Konstantin Dobler, Eden Biran, Gerard de Melo
- Abstract summary: Large Language Models are prone to hallucinations, causing them to emit unwanted and factually incorrect text.
We propose a novel calibration method that can be used to combat hallucinations.
We find that models trained with our method are able to express uncertainty in places where they would previously make mistakes.
- Score: 23.02504739114444
- License:
- Abstract: Large Language Models are known to capture real-world knowledge, allowing them to excel in many downstream tasks. Despite recent advances, these models are still prone to what are commonly known as hallucinations, causing them to emit unwanted and factually incorrect text. In this work, we propose a novel calibration method that can be used to combat hallucinations. We add a special [IDK] ("I don't know") token to the model's vocabulary and introduce an objective function that shifts probability mass to the [IDK] token for incorrect predictions. This approach allows the model to express uncertainty in its output explicitly. We evaluate our proposed method across multiple model architectures and factual downstream tasks. We find that models trained with our method are able to express uncertainty in places where they would previously make mistakes while suffering only a small loss of encoded knowledge. We further perform extensive ablation studies of multiple variations of our approach and provide a detailed analysis of the precision-recall tradeoff of our method.
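To make the objective concrete, the following is a minimal sketch, assuming a standard PyTorch setup: a soft-target cross-entropy that moves part of the probability mass from the gold token to the [IDK] token wherever the model's current prediction is incorrect. The fixed `idk_weight` and the argmax-mismatch test for "incorrect prediction" are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' implementation) of an [IDK]-augmented loss.
# Assumptions: a fixed idk_weight and an argmax-mismatch test for "incorrect
# prediction" stand in for the paper's actual objective and weighting scheme.
import torch
import torch.nn.functional as F

def idk_loss(logits, targets, idk_token_id, idk_weight=0.5):
    """logits: (batch, seq, vocab) over a vocabulary extended with [IDK];
    targets: (batch, seq) gold token ids, assumed never equal to [IDK]."""
    vocab_size = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)

    # Start from the usual one-hot target distribution.
    target_dist = F.one_hot(targets, vocab_size).float()

    # Where the model's current argmax prediction is wrong, move idk_weight of
    # the probability mass from the gold token to the [IDK] token.
    wrong = (logits.argmax(dim=-1) != targets).unsqueeze(-1).float()
    shift = wrong * idk_weight
    target_dist.scatter_(-1, targets.unsqueeze(-1), 1.0 - shift)
    target_dist.scatter_(-1, torch.full_like(targets, idk_token_id).unsqueeze(-1), shift)

    # Cross-entropy between the soft targets and the model's distribution.
    return -(target_dist * log_probs).sum(dim=-1).mean()

# Illustrative usage with random tensors; a real model's embedding and output
# layers would first be resized to include the new [IDK] token.
vocab_size, idk_id = 32001, 32000
logits = torch.randn(2, 8, vocab_size, requires_grad=True)
targets = torch.randint(0, 32000, (2, 8))
idk_loss(logits, targets, idk_token_id=idk_id).backward()
```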
Related papers
- Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification [116.77055746066375]
Large language models (LLMs) are notorious for hallucinating, i.e., producing erroneous claims in their output.
We propose a novel fact-checking and hallucination detection pipeline based on token-level uncertainty quantification (a minimal sketch of per-token uncertainty scoring appears after this list).
arXiv Detail & Related papers (2024-03-07T17:44:17Z)
- Distinguishing the Knowable from the Unknowable with Language Models [15.471748481627143]
In the absence of ground-truth probabilities, we explore a setting where, in order to disentangle a given model's uncertainty, a significantly larger model stands in as a proxy for the ground truth.
We show that small linear probes trained on the embeddings of frozen, pretrained models accurately predict when larger models will be more confident at the token level.
We propose a fully unsupervised method that achieves non-trivial accuracy on the same task.
arXiv Detail & Related papers (2024-02-05T22:22:49Z)
- Fine-tuning Language Models for Factuality [96.5203774943198]
The capabilities of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines.
Yet language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations'.
In this work, we fine-tune language models to be more factual, without human labeling.
arXiv Detail & Related papers (2023-11-14T18:59:15Z)
- Uncertainty-Aware Unlikelihood Learning Improves Generative Aspect Sentiment Quad Prediction [52.05304897163256]
We propose a template-agnostic method to control the token-level generation.
Specifically, we introduce Monte Carlo dropout to understand the built-in uncertainty of pre-trained language models.
We further propose marginalized unlikelihood learning to suppress the uncertainty-aware mistake tokens.
arXiv Detail & Related papers (2023-06-01T07:49:06Z)
- Discovering Latent Knowledge in Language Models Without Supervision [72.95136739040676]
Existing techniques for training language models can be misaligned with the truth.
We propose directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way.
We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models.
arXiv Detail & Related papers (2022-12-07T18:17:56Z)
- Meaningfully Explaining a Model's Mistakes [16.521189362225996]
We propose a systematic approach, conceptual explanation scores (CES).
CES explains why a classifier makes a mistake on particular test samples in terms of human-understandable concepts.
We also train new models with intentional and known spurious correlations, which CES successfully identifies from a single misclassified test sample.
arXiv Detail & Related papers (2021-06-24T01:49:55Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering [80.82194311274694]
We examine the question "how can we know when language models know, with confidence, the answer to a particular query?"
We examine three strong generative models -- T5, BART, and GPT-2 -- and study whether their probabilities on QA tasks are well calibrated.
We then examine methods to calibrate such models to make their confidence scores correlate better with the likelihood of correctness.
arXiv Detail & Related papers (2020-12-02T03:53:13Z)
- Learning Interpretable Models Using Uncertainty Oracles [12.879371384378164]
A desirable property of interpretable models is small size, so that they are easily understandable by humans.
This leads to the following challenges: (a) small sizes imply diminished accuracy, and (b) bespoke levers provided by model families to restrict size might be insufficient to reach the desired size-accuracy trade-off.
arXiv Detail & Related papers (2019-06-17T05:53:52Z)
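As noted in the fact-checking entry above, the following is a minimal sketch of per-token uncertainty scoring, assuming raw language-model logits are available; the surprisal and entropy signals and the flagging threshold are illustrative assumptions, not the cited paper's pipeline.

```python
# Minimal sketch of token-level uncertainty scoring from language-model logits
# (not the cited paper's pipeline). The 2.5-nat threshold is an illustrative
# assumption for flagging tokens whose claims might warrant fact-checking.
import torch
import torch.nn.functional as F

def token_uncertainty(logits, token_ids):
    """logits: (seq, vocab) at each generated position; token_ids: (seq,) generated tokens."""
    log_probs = F.log_softmax(logits, dim=-1)
    # Surprisal of the token that was actually generated: -log p(token).
    surprisal = -log_probs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)
    # Predictive entropy of the full next-token distribution.
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    return surprisal, entropy

# Illustrative usage with random logits in place of a real model's outputs.
logits = torch.randn(8, 32000)
token_ids = logits.argmax(dim=-1)
surprisal, entropy = token_uncertainty(logits, token_ids)
flagged = entropy > 2.5
print(flagged.tolist())
```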
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.