Augmenting LLMs with Knowledge: A survey on hallucination prevention
- URL: http://arxiv.org/abs/2309.16459v1
- Date: Thu, 28 Sep 2023 14:09:58 GMT
- Title: Augmenting LLMs with Knowledge: A survey on hallucination prevention
- Authors: Konstantinos Andriopoulos, Johan Pouwelse
- Abstract summary: This survey delves into the realm of language models (LMs) augmented with the ability to tap into external knowledge sources.
While adhering to the standard objective of predicting missing tokens, these augmented LMs leverage diverse, possibly non-parametric external modules.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large pre-trained language models have demonstrated their proficiency in
storing factual knowledge within their parameters and achieving remarkable
results when fine-tuned for downstream natural language processing tasks.
Nonetheless, their capacity to access and manipulate knowledge with precision
remains constrained, resulting in performance disparities on
knowledge-intensive tasks when compared to task-specific architectures.
Additionally, the challenges of providing provenance for model decisions and
maintaining up-to-date world knowledge persist as open research frontiers. To
address these limitations, the integration of pre-trained models with
differentiable access mechanisms to explicit non-parametric memory emerges as a
promising solution. This survey delves into the realm of language models (LMs)
augmented with the ability to tap into external knowledge sources, including
external knowledge bases and search engines. While adhering to the standard
objective of predicting missing tokens, these augmented LMs leverage diverse,
possibly non-parametric external modules to augment their contextual processing
capabilities, departing from the conventional language modeling paradigm.
Through an exploration of current advancements in augmenting large language
models with knowledge, this work concludes that this emerging research
direction holds the potential to address prevalent issues in traditional LMs,
such as hallucinations, un-grounded responses, and scalability challenges.
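To make the augmentation described above concrete, the sketch below places text retrieved from a non-parametric store into the model's context before generation. It is a minimal sketch only: the knowledge store, the keyword-overlap retriever, and the generate() placeholder are hypothetical stand-ins, not components defined in the survey.

```python
# Minimal sketch of augmenting an LM with a non-parametric external memory:
# retrieved text is placed in the context window instead of being stored in
# the model's parameters. The knowledge store, the keyword-overlap retriever,
# and generate() are hypothetical stand-ins, not components from the survey.

KNOWLEDGE_STORE = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is 8,849 metres tall.",
    "Python was first released in 1991.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank stored facts by naive word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_STORE,
        key=lambda fact: len(q_words & set(fact.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for any language model; it simply echoes the prompt."""
    return "[model output conditioned on]\n" + prompt

def answer(question: str) -> str:
    # Grounding step: external knowledge enters through the prompt, so the
    # answer can carry provenance and reflect up-to-date information.
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer("When was the Eiffel Tower completed?"))
```

A real system would swap the toy retriever for a sparse or dense index and generate() for an actual language model; the structure of injecting retrieved evidence into the prompt stays the same.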
Related papers
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Evaluating the External and Parametric Knowledge Fusion of Large Language Models [72.40026897037814]
We develop a systematic pipeline for data construction and knowledge infusion to simulate knowledge fusion scenarios.
Our investigation reveals that enhancing parametric knowledge within LLMs can significantly bolster their capability for knowledge integration.
Our findings aim to steer future explorations on harmonizing external and parametric knowledge within LLMs.
arXiv Detail & Related papers (2024-05-29T11:48:27Z)
- Quantitative knowledge retrieval from large language models [4.155711233354597]
Large language models (LLMs) have been extensively studied for their abilities to generate convincing natural language sequences.
This paper explores the feasibility of LLMs as a mechanism for quantitative knowledge retrieval to aid data analysis tasks.
arXiv Detail & Related papers (2024-02-12T16:32:37Z)
- A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)
- Data Distribution Bottlenecks in Grounding Language Models to Knowledge Bases [9.610231090476857]
Language models (LMs) have already demonstrated remarkable abilities in understanding and generating both natural and formal language.
This paper is an experimental investigation aimed at uncovering the challenges that LMs encounter when tasked with knowledge base question answering (KBQA).
Our experiments reveal that even when employed with our proposed data augmentation techniques, advanced small and large language models exhibit poor performance in various dimensions.
arXiv Detail & Related papers (2023-09-15T12:06:45Z)
- Exploiting Language Models as a Source of Knowledge for Cognitive Agents [4.557963624437782]
Large language models (LLMs) provide capabilities far beyond sentence completion, including question answering, summarization, and natural-language inference.
While many of these capabilities have potential application to cognitive systems, our research exploits language models as a source of task knowledge for cognitive agents, that is, agents realized via a cognitive architecture.
arXiv Detail & Related papers (2023-09-05T15:18:04Z)
- ExpeL: LLM Agents Are Experiential Learners [60.54312035818746]
We introduce the Experiential Learning (ExpeL) agent to allow learning from agent experiences without requiring parametric updates.
Our agent autonomously gathers experiences and extracts knowledge using natural language from a collection of training tasks.
At inference, the agent recalls its extracted insights and past experiences to make informed decisions.
arXiv Detail & Related papers (2023-08-20T03:03:34Z)
- Thrust: Adaptively Propels Large Language Models with External Knowledge [58.72867916604562]
Large-scale pre-trained language models (PTLMs) are shown to encode rich knowledge in their model parameters.
The inherent knowledge in PTLMs can be opaque or static, making external knowledge necessary.
We propose the instance-level adaptive propulsion of external knowledge (IAPEK), where retrieval is conducted only when necessary; a minimal sketch of this selective-retrieval idea appears after this list.
arXiv Detail & Related papers (2023-07-19T20:16:46Z)
- Knowledge Rumination for Pre-trained Language Models [77.55888291165462]
We propose a new paradigm dubbed Knowledge Rumination to help the pre-trained language model utilize related latent knowledge without retrieving it from the external corpus.
We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3.
arXiv Detail & Related papers (2023-05-15T15:47:09Z)
- LM-CORE: Language Models with Contextually Relevant External Knowledge [13.451001884972033]
We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements.
We present LM-CORE, a general framework that allows decoupling of the language model training from the external knowledge source.
Experimental results show that LM-CORE, having access to external knowledge, significantly and robustly outperforms state-of-the-art knowledge-enhanced language models on knowledge probing tasks.
arXiv Detail & Related papers (2022-08-12T18:59:37Z)
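Several entries above, in particular Thrust/IAPEK and LM-CORE, revolve around consulting an external knowledge source selectively rather than on every query. The sketch below illustrates that idea under loose assumptions: the coverage score, the set of "known" tokens, and the tiny external source are hypothetical stand-ins, not the metrics or systems proposed in those papers.

```python
# Hedged sketch of selective ("only when necessary") use of external knowledge,
# in the spirit of the Thrust/IAPEK and LM-CORE entries above. The coverage
# score, the token set, and the tiny external source are toy assumptions,
# not the metrics or systems proposed in those papers.

PARAMETRIC_HINTS = {"python", "released", "1991"}   # pretend the model "knows" these
EXTERNAL_SOURCE = {
    "eiffel": "The Eiffel Tower was completed in 1889.",
    "everest": "Mount Everest is 8,849 metres tall.",
}

def coverage(question: str) -> float:
    """Toy necessity signal: fraction of question words the assumed
    parametric knowledge already covers (1.0 means fully covered)."""
    words = set(question.lower().replace("?", "").split())
    return len(words & PARAMETRIC_HINTS) / max(len(words), 1)

def maybe_retrieve(question: str, threshold: float = 0.3) -> str | None:
    """Query the external source only when coverage falls below the threshold."""
    if coverage(question) >= threshold:
        return None  # parametric knowledge deemed sufficient; skip retrieval
    hits = [fact for key, fact in EXTERNAL_SOURCE.items() if key in question.lower()]
    return " ".join(hits) or None

for q in ["When was Python released?", "How tall is Everest?"]:
    extra = maybe_retrieve(q)
    print(q, "->", extra if extra else "no retrieval needed")
```

In practice the necessity estimate would come from model confidence or a learned instance-level score rather than word overlap, but the control flow of skipping retrieval when the model already covers the query stays the same.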
This list is automatically generated from the titles and abstracts of the papers on this site.