Groundedness in Retrieval-augmented Long-form Generation: An Empirical Study
- URL: http://arxiv.org/abs/2404.07060v1
- Date: Wed, 10 Apr 2024 14:50:10 GMT
- Title: Groundedness in Retrieval-augmented Long-form Generation: An Empirical Study
- Authors: Alessandro Stolfo,
- Abstract summary: We evaluate whether every generated sentence is grounded in retrieved documents or the model's pre-training data.
Across 3 datasets and 4 model families, our findings reveal that a significant fraction of generated sentences are consistently ungrounded.
Our results show that while larger models tend to ground their outputs more effectively, a significant portion of correct answers remains compromised by hallucinations.
- Score: 61.74571814707054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an empirical study of groundedness in long-form question answering (LFQA) by retrieval-augmented large language models (LLMs). In particular, we evaluate whether every generated sentence is grounded in the retrieved documents or the model's pre-training data. Across 3 datasets and 4 model families, our findings reveal that a significant fraction of generated sentences are consistently ungrounded, even when those sentences contain correct ground-truth answers. Additionally, we examine the impacts of factors such as model size, decoding strategy, and instruction tuning on groundedness. Our results show that while larger models tend to ground their outputs more effectively, a significant portion of correct answers remains compromised by hallucinations. This study provides novel insights into the groundedness challenges in LFQA and underscores the necessity for more robust mechanisms in LLMs to mitigate the generation of ungrounded content.
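The sentence-level evaluation described in the abstract can be approximated with an off-the-shelf NLI model: a generated sentence counts as grounded if at least one retrieved document entails it. The sketch below is only an illustration, not the paper's actual pipeline; the model name, threshold, and entailment criterion are assumptions.

```python
# Minimal sketch of per-sentence groundedness checking with an
# off-the-shelf NLI model (assumed choice; not the paper's pipeline).
from transformers import pipeline

nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def sentence_is_grounded(sentence: str, retrieved_docs: list[str],
                         threshold: float = 0.5) -> bool:
    """Treat a sentence as grounded if at least one retrieved document
    entails it with probability above the (assumed) threshold."""
    for doc in retrieved_docs:
        scores = nli({"text": doc, "text_pair": sentence}, top_k=None)
        entail = next(s["score"] for s in scores if s["label"] == "ENTAILMENT")
        if entail >= threshold:
            return True
    return False

answer = ["The Eiffel Tower is 330 metres tall.",
          "It was painted bright blue in 2020."]
docs = ["The Eiffel Tower in Paris stands 330 metres high."]
ungrounded = [s for s in answer if not sentence_is_grounded(s, docs)]
print(f"{len(ungrounded)}/{len(answer)} sentences ungrounded")
```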
Related papers
- Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training [39.21885486667879]
Large Language Models (LLMs) exhibit substantial capabilities yet encounter challenges, including hallucination, outdated knowledge, and untraceable reasoning processes.
Retrieval-augmented generation (RAG) has emerged as a promising solution, integrating knowledge from external databases to mitigate these challenges.
We propose a novel RAG approach known as Retrieval-augmented Adaptive Adversarial Training (RAAT).
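The summary above does not spell out the training recipe. As a loose illustration of training against retrieval noise, one can mix distractor passages into the retrieved context when building training examples; the function below is a sketch and does not reproduce RAAT's adaptive, adversarial noise selection.

```python
import random

def build_noisy_context(relevant_docs: list[str], noise_pool: list[str],
                        n_noise: int = 2, seed: int = 0) -> list[str]:
    """Mix distractor passages into the retrieved context so the model
    learns to answer despite retrieval noise. Distractors are sampled
    uniformly here; RAAT selects them adaptively and adversarially."""
    rng = random.Random(seed)
    distractors = rng.sample(noise_pool, k=min(n_noise, len(noise_pool)))
    context = relevant_docs + distractors
    rng.shuffle(context)
    return context
```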
arXiv Detail & Related papers (2024-05-31T16:24:53Z)
- Evaluating Consistency and Reasoning Capabilities of Large Language Models [0.0]
Large Language Models (LLMs) are extensively used today across various sectors, including academia, research, business, and finance.
Despite their widespread adoption, these models often produce incorrect and misleading information, exhibiting a tendency to hallucinate.
This paper aims to evaluate and compare the consistency and reasoning capabilities of both public and proprietary LLMs.
arXiv Detail & Related papers (2024-04-25T10:03:14Z)
- Regressive Side Effects of Training Language Models to Mimic Student Misconceptions [25.90420385230675]
We highlight the problem that as Large Language Models are trained to more accurately mimic student misconceptions, there is a compromise in the factual integrity and reasoning ability of the models.
To combat these side effects, we introduced a "hallucination token" technique. This token, appended at the beginning of each student response during training, instructs the model to switch between mimicking student misconceptions and providing factually accurate responses.
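A minimal sketch of how such a control token could be prepended when constructing training examples; the token string and example format below are assumptions, not the paper's exact setup.

```python
HALLUCINATION_TOKEN = "<hallucinate>"  # assumed name for the control token

def make_training_example(prompt: str, response: str,
                          is_misconception: bool) -> str:
    """Prepend the control token to responses that deliberately mimic a
    student misconception, so the model learns to switch between
    misconception-mimicking and factually accurate modes on demand."""
    prefix = HALLUCINATION_TOKEN + " " if is_misconception else ""
    return f"{prompt}\n{prefix}{response}"
```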
arXiv Detail & Related papers (2024-04-23T15:57:55Z)
- Reframing Offline Reinforcement Learning as a Regression Problem [0.0]
The study proposes the reformulation of offline reinforcement learning as a regression problem that can be solved with decision trees.
We observe that with gradient-boosted trees, the agent training and inference are very fast.
Despite the simplification inherent in this reformulated problem, our agent demonstrates performance that is at least on par with established methods.
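A rough sketch of the reformulation on toy data, assuming a return-conditioned regression from states to actions with scikit-learn's gradient-boosted trees; the paper's exact inputs, targets, and tree implementation may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy offline dataset: states, continuous actions, and returns-to-go.
rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 4))
returns_to_go = rng.uniform(0.0, 1.0, size=(1000, 1))
actions = rng.normal(size=1000)

# Regress actions on (state, return-to-go): conditioning on the return
# lets us request high-return behavior at inference time.
X = np.hstack([states, returns_to_go])
model = GradientBoostingRegressor().fit(X, actions)

# Inference: current state plus a high target return.
state = rng.normal(size=(1, 4))
action = model.predict(np.hstack([state, [[0.95]]]))[0]
```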
arXiv Detail & Related papers (2024-01-21T23:50:46Z)
- A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia [57.31074448586854]
Large language models (LLMs) have an impressive ability to draw on novel information supplied in their context.
Yet the mechanisms underlying this contextual grounding remain unknown.
We present a novel method to study grounding abilities using Fakepedia.
arXiv Detail & Related papers (2023-12-04T17:35:42Z)
- How Well Do Large Language Models Truly Ground? [39.39062385290276]
A common method is to generate responses grounded in external contexts given as input; models that do this are known as knowledge-augmented models.
Previous research often narrowly defines "grounding" as just having the correct answer, which does not ensure the reliability of the entire response.
We propose a stricter definition of grounding: a model is truly grounded if it (1) fully utilizes the necessary knowledge from the provided context, and (2) stays within the limits of that knowledge.
arXiv Detail & Related papers (2023-11-15T16:11:27Z)
- Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards the answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
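In its simplest form, sociodemographic prompting prefixes the task prompt with a persona description; a minimal sketch with illustrative template wording follows.

```python
def sociodemographic_prompt(profile: dict, task: str) -> str:
    """Steer a prompt-based model by describing a sociodemographic
    persona before the actual task (template wording is illustrative)."""
    persona = (f"You are a {profile['age']}-year-old {profile['gender']} "
               f"{profile['occupation']} from {profile['country']}.")
    return f"{persona}\n{task}"

print(sociodemographic_prompt(
    {"age": 34, "gender": "woman", "occupation": "nurse",
     "country": "Brazil"},
    "Is the following tweet offensive? Answer yes or no.\nTweet: ..."))
```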
arXiv Detail & Related papers (2023-09-13T15:42:06Z)
- Benchmarking Large Language Models in Retrieval-Augmented Generation [53.504471079548]
We systematically investigate the impact of Retrieval-Augmented Generation on large language models.
We analyze the performance of different large language models in 4 fundamental abilities required for RAG.
We establish the Retrieval-Augmented Generation Benchmark (RGB), a new corpus for RAG evaluation in both English and Chinese.
arXiv Detail & Related papers (2023-09-04T08:28:44Z)
- "According to ...": Prompting Language Models Improves Quoting from Pre-Training Data [52.03853726206584]
Large Language Models (LLMs) may hallucinate and generate fake information, despite pre-training on factual data.
We propose according-to prompting: directing LLMs to ground responses against previously observed text.
To quantify this grounding, we propose a novel evaluation metric (QUIP-Score) that measures the extent to which model-produced answers are directly found in underlying text corpora.
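A simplified sketch of a QUIP-style score as character n-gram precision against a corpus; the published metric tests n-gram membership against the full pre-training corpus via an efficient data sketch, and the n-gram length used here is an arbitrary choice.

```python
def char_ngrams(text: str, n: int = 25) -> set[str]:
    """Character n-grams of the text (whitespace-normalized)."""
    text = " ".join(text.split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def quip_like_score(answer: str, corpus: list[str], n: int = 25) -> float:
    """Fraction of the answer's character n-grams found verbatim in the
    corpus. A small in-memory list of strings stands in for the full
    pre-training corpus used by the real QUIP-Score."""
    grams = char_ngrams(answer, n)
    corpus_text = " ".join(" ".join(doc.split()) for doc in corpus)
    hits = sum(1 for g in grams if g in corpus_text)
    return hits / len(grams) if grams else 0.0
```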
arXiv Detail & Related papers (2023-05-22T17:25:24Z)
- Evaluating Factuality in Generation with Dependency-level Entailment [57.5316011554622]
We propose a new formulation of entailment that decomposes it at the level of dependency arcs.
We show that our dependency arc entailment model trained on this data can identify factual inconsistencies in paraphrasing and summarization better than sentence-level methods.
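A crude sketch of the arc-level decomposition using spaCy; the actual method trains a classifier to judge whether each arc is entailed, whereas the naive set difference below is only a lexical proxy.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed

def dependency_arcs(text: str) -> set[tuple[str, str, str]]:
    """Decompose text into (head lemma, relation, child lemma) arcs."""
    doc = nlp(text)
    return {(tok.head.lemma_, tok.dep_, tok.lemma_)
            for tok in doc if tok.dep_ != "ROOT"}

def unsupported_arcs(generated: str, source: str) -> set[tuple[str, str, str]]:
    """Arcs in the generated text with no counterpart in the source: a
    naive lexical proxy for the trained arc-entailment classifier."""
    return dependency_arcs(generated) - dependency_arcs(source)
```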
arXiv Detail & Related papers (2020-10-12T06:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.