"Merge Conflicts!" Exploring the Impacts of External Distractors to
Parametric Knowledge Graphs
- URL: http://arxiv.org/abs/2309.08594v1
- Date: Fri, 15 Sep 2023 17:47:59 GMT
- Title: "Merge Conflicts!" Exploring the Impacts of External Distractors to
Parametric Knowledge Graphs
- Authors: Cheng Qian, Xinran Zhao, Sherry Tongshuang Wu
- Abstract summary: Large language models (LLMs) acquire extensive knowledge during pre-training, known as their parametric knowledge.
LLMs inevitably require external knowledge during their interactions with users.
This raises a crucial question: How will LLMs respond when external knowledge interferes with their parametric knowledge?
- Score: 15.660128743249611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) acquire extensive knowledge during pre-training,
known as their parametric knowledge. However, in order to remain up-to-date and
align with human instructions, LLMs inevitably require external knowledge
during their interactions with users. This raises a crucial question: How will
LLMs respond when external knowledge interferes with their parametric
knowledge? To investigate this question, we propose a framework that
systematically elicits LLM parametric knowledge and introduces external
knowledge. Specifically, we uncover the impacts by constructing a parametric
knowledge graph to reveal the different knowledge structures of LLMs, and
introduce external knowledge through distractors of varying degrees, methods,
positions, and formats. Our experiments on both black-box and open-source
models demonstrate that LLMs tend to produce responses that deviate from their
parametric knowledge, particularly when they encounter direct conflicts or
confounding changes of information within detailed contexts. We also find that
while LLMs are sensitive to the veracity of external knowledge, they can still
be distracted by unrelated information. These findings highlight the risk of
hallucination when integrating external knowledge, even indirectly, during
interactions with current LLMs. All the data and results are publicly
available.
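
As a concrete illustration of the framework described above, here is a minimal sketch (not the authors' released code) of the core loop: elicit one edge of the model's parametric knowledge graph as a (subject, relation, object) triple, inject a directly conflicting distractor into the context, and check whether the answer deviates. `query_llm` is a hypothetical stand-in for any chat-completion call.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError


def elicit_triple(subject: str, relation: str) -> tuple[str, str, str]:
    """Ask the model for the object of (subject, relation) -- one edge
    of its parametric knowledge graph."""
    obj = query_llm(f"Answer with a single phrase. {subject}'s {relation} is:")
    return (subject, relation, obj.strip())


def build_distractor(triple: tuple[str, str, str], fake_object: str) -> str:
    """Construct external context that directly conflicts with the triple."""
    subject, relation, _ = triple
    return f"According to a recent source, {subject}'s {relation} is {fake_object}."


def deviates_under_distractor(triple: tuple[str, str, str], fake_object: str) -> bool:
    """True if the conflicting context pulls the answer away from the
    model's own parametric answer."""
    subject, relation, parametric_obj = triple
    question = f"What is {subject}'s {relation}? Answer with a single phrase."
    distracted = query_llm(build_distractor(triple, fake_object) + "\n" + question)
    return parametric_obj.lower() not in distracted.lower()
```

The paper additionally varies the distractors' degree, method, position, and format; the sketch above covers only the simplest direct-conflict case.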
Related papers
- Understanding the Interplay between Parametric and Contextual Knowledge for Large Language Models [85.13298925375692]
Large language models (LLMs) encode vast amounts of parametric knowledge (PK) during pre-training.
They can be further enhanced by incorporating contextual knowledge (CK).
Can LLMs effectively integrate their internal PK with external CK to solve complex problems?
arXiv Detail & Related papers (2024-10-10T23:09:08Z)
- Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering [33.89176174108559]
We propose a new internal and external knowledge interactive refinement paradigm dubbed IEKR.
By simply adding a prompt like 'Tell me something about' to the LLM, the paradigm elicits related explicit knowledge and inserts it, together with the query, into the retriever for external retrieval; a minimal sketch of this elicitation step appears after this list.
arXiv Detail & Related papers (2024-08-23T10:52:57Z)
- Evaluating the External and Parametric Knowledge Fusion of Large Language Models [72.40026897037814]
We develop a systematic pipeline for data construction and knowledge infusion to simulate knowledge fusion scenarios.
Our investigation reveals that enhancing parametric knowledge within LLMs can significantly bolster their capability for knowledge integration.
Our findings aim to steer future explorations on harmonizing external and parametric knowledge within LLMs.
arXiv Detail & Related papers (2024-05-29T11:48:27Z)
- LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements [59.71218039095155]
The task of reading comprehension (RC) provides a primary means to assess language models' natural language understanding (NLU) capabilities.
If the context aligns with the models' internal knowledge, it is hard to discern whether the models' answers stem from context comprehension or from internal information.
To address this issue, we suggest using RC on imaginary data based on fictitious facts and entities.
arXiv Detail & Related papers (2024-04-09T13:08:56Z)
- Untangle the KNOT: Interweaving Conflicting Knowledge and Reasoning Skills in Large Language Models [51.72963030032491]
Knowledge documents supplied to large language models (LLMs) may conflict with their memory due to outdated or incorrect knowledge.
We construct a new dataset, dubbed KNOT, for knowledge conflict resolution examination in the form of question answering.
arXiv Detail & Related papers (2024-04-04T16:40:11Z)
- RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge [69.79676144482792]
This study aims to evaluate the ability of LLMs to identify reliable information within external knowledge.
Our benchmark consists of two tasks, Question Answering and Text Generation, and for each task, we provide models with a context containing counterfactual information.
arXiv Detail & Related papers (2023-11-14T13:24:19Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
arXiv Detail & Related papers (2023-07-20T16:46:10Z)
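
As referenced in the IEKR entry above, here is a minimal sketch of that prompt-based elicitation step: review the model's own explicit knowledge with a simple prompt, then feed it alongside the query into a retriever. This is an illustrative assumption, not the paper's actual API; `query_llm` and `retrieve` are hypothetical placeholders.

```python
def query_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a chat-completion call


def retrieve(text: str, k: int = 5) -> list[str]:
    raise NotImplementedError  # placeholder for a dense or BM25 retriever


def iekr_style_retrieval(query: str) -> list[str]:
    # 1. Elicit internal (parametric) knowledge with a simple review prompt.
    internal = query_llm(f"Tell me something about: {query}")
    # 2. Insert the elicited knowledge alongside the query for external retrieval.
    return retrieve(f"{query}\n{internal}")
```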