Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Reliable Response Generation in the Wild
- URL: http://arxiv.org/abs/2504.12982v1
- Date: Thu, 17 Apr 2025 14:40:31 GMT
- Title: Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Reliable Response Generation in the Wild
- Authors: Jiatai Wang, Zhiwei Xu, Di Jin, Xuewen Yang, Tao Li
- Abstract summary: Large language models (LLMs) have advanced information retrieval systems. LLMs often face knowledge conflicts between internal memory and retrieved external information. We propose Swin-VIB, a novel framework that integrates a pipeline of variational information bottleneck models into adaptive augmentation of retrieved information.
- Score: 11.058848731627233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of large language models (LLMs) has significantly advanced information retrieval systems, particularly in response generation (RG). Unfortunately, LLMs often face knowledge conflicts between internal memory and retrieved external information, arising from misinformation, biases, or outdated knowledge. These conflicts undermine response reliability and introduce uncertainty into decision-making. In this work, we analyze how LLMs navigate knowledge conflicts from an information-theoretic perspective and reveal that when conflicting and supplementary information differ significantly, LLMs confidently resolve their preferences. However, when the distinction is ambiguous, LLMs experience heightened uncertainty. Based on this insight, we propose Swin-VIB, a novel framework that integrates a pipeline of variational information bottleneck models to adaptively augment retrieved information and guide LLM preference in response generation. Extensive experiments on single-choice, open-ended question-answering (QA), and retrieval-augmented generation (RAG) tasks validate our theoretical findings and demonstrate the efficacy of Swin-VIB. Notably, our method improves single-choice task accuracy by at least 7.54% over competitive baselines.
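To make the core mechanism concrete, here is a minimal sketch of the variational information bottleneck (VIB) objective that a pipeline like Swin-VIB's would build on. The module name, dimensions, and β weight below follow the standard VIB formulation and are illustrative assumptions, not the paper's implementation:

```python
# Illustrative single-bottleneck VIB layer: compress a representation x into
# a stochastic code z that keeps only label-relevant information.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBAdapter(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, in_dim: int, z_dim: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, 2 * z_dim)  # predicts mean and log-variance of q(z|x)
        self.decoder = nn.Linear(z_dim, n_classes)

    def forward(self, x: torch.Tensor):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vib_loss(logits, labels, mu, logvar, beta: float = 1e-3):
    # Variational IB bound: fit the label (the I(Z;Y) term) while KL-regularizing
    # q(z|x) toward a standard normal prior (bounding the I(Z;X) term).
    ce = F.cross_entropy(logits, labels)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
    return ce + beta * kl
```

The β knob trades compression against prediction; the abstract's observation that LLM uncertainty rises when conflicting and supplementary evidence are hard to tell apart is the kind of signal a tuned bottleneck of this sort could expose.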
Related papers
- How does Misinformation Affect Large Language Model Behaviors and Preferences? [37.06385727015972]
Large Language Models (LLMs) have shown remarkable capabilities in knowledge-intensive tasks.
We present MisBench, the current largest and most comprehensive benchmark for evaluating LLMs' behavior and knowledge preference toward misinformation.
Empirical results reveal that while LLMs demonstrate comparable abilities in discerning misinformation, they remain susceptible to knowledge conflicts and stylistic variations.
arXiv Detail & Related papers (2025-05-27T17:57:44Z)
- How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation [24.355564722047244]
Large Language Models (LLMs) are widely deployed in diverse scenarios.
The extent to which they could tacitly spread misinformation emerges as a critical safety concern.
We curated ECHOMIST, the first benchmark for implicit misinformation.
arXiv Detail & Related papers (2025-03-12T17:59:18Z)
- PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning [92.07119924043461]
Knowledge-Augmented Generation (KAG) has shown great promise in updating the internal memory of Large Language Models (LLMs).
Current approaches to mitigating knowledge conflicts mainly focus on improving external knowledge utilization.
We propose a ParametrIc Pruning-based Knowledge-Augmented Generation (PIP-KAG) approach, which prunes the internal knowledge of LLMs.
arXiv Detail & Related papers (2025-02-21T15:50:41Z)
- Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies [66.30619782227173]
Large language models (LLMs) can produce erroneous responses that sound fluent and convincing.
We identify several features of LLM responses that shape users' reliance.
We find that explanations increase reliance on both correct and incorrect responses.
We observe less reliance on incorrect responses when sources are provided or when explanations exhibit inconsistencies.
arXiv Detail & Related papers (2025-02-12T16:35:41Z)
- Exploring Knowledge Boundaries in Large Language Models for Retrieval Judgment [56.87031484108484]
Large Language Models (LLMs) are increasingly recognized for their practical applications.
Retrieval-Augmented Generation (RAG) tackles this challenge and has shown a significant impact on LLMs.
By minimizing retrieval requests that yield neutral or harmful results, we can effectively reduce both time and computational costs.
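A hedged sketch of what such selective retrieval could look like is below; the confidence estimator, threshold, and function names are illustrative assumptions, not the paper's method:

```python
# Confidence-gated retrieval: only issue a retrieval request when the model
# seems unsure, saving latency and compute on easy questions.
from typing import Callable

def answer_with_gated_retrieval(
    question: str,
    answer_fn: Callable[[str], tuple[str, float]],  # hypothetical: returns (answer, confidence in [0, 1])
    retrieve_fn: Callable[[str], str],              # hypothetical: returns concatenated passages
    threshold: float = 0.8,
) -> str:
    answer, confidence = answer_fn(question)
    if confidence >= threshold:
        return answer  # parametric knowledge suffices; skip retrieval entirely
    context = retrieve_fn(question)
    augmented_answer, _ = answer_fn(f"Context: {context}\n\nQuestion: {question}")
    return augmented_answer
```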
arXiv Detail & Related papers (2024-11-09T15:12:28Z)
- Analysing the Residual Stream of Language Models Under Knowledge Conflicts [23.96385393039587]
Large language models (LLMs) can store a significant amount of factual knowledge in their parameters.
However, their parametric knowledge may conflict with the information provided in the context.
This can lead to undesirable model behaviour, such as reliance on outdated or incorrect information.
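As a rough illustration of this kind of analysis, one can compare per-layer residual-stream states for the same query with and without a contradicting context; the model, prompts, and similarity metric here are assumptions, not the paper's setup:

```python
# Compare last-token hidden states across layers for a plain vs. a
# conflicting prompt, using the hidden states exposed by transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def layer_states(prompt: str):
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # out.hidden_states: tuple of (n_layers + 1) tensors, one per residual-stream point
    return [h[0, -1] for h in out.hidden_states]

plain = layer_states("The capital of France is")
conflict = layer_states("Note: the capital of France is Berlin.\nThe capital of France is")
for i, (a, b) in enumerate(zip(plain, conflict)):
    print(f"layer {i}: cosine similarity {torch.cosine_similarity(a, b, dim=0).item():.3f}")
```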
arXiv Detail & Related papers (2024-10-21T15:12:51Z)
- Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models [20.605487145370752]
Through controlled analysis under realistic conditions, we find that imperfect retrieval augmentation might be inevitable and quite harmful.
We propose Astute RAG, a novel RAG approach that adaptively elicits essential information from LLMs' internal knowledge.
Further analysis reveals that Astute RAG effectively resolves knowledge conflicts, improving the reliability and trustworthiness of RAG systems.
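A hedged sketch of the general pattern this describes, eliciting a passage from the model's own knowledge and then answering over internal and retrieved passages together; the prompts and interface are illustrative assumptions, not Astute RAG's actual procedure:

```python
# Combine an internally generated passage with retrieved ones so the model
# can weigh them against each other instead of trusting either side blindly.
from typing import Callable

def consolidate_and_answer(question: str,
                           retrieved: list[str],
                           llm: Callable[[str], str]) -> str:  # llm: hypothetical text-in/text-out wrapper
    internal = llm(f"From your own knowledge, write a short passage answering: {question}")
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate([internal] + retrieved))
    return llm(
        "Some of the passages below may conflict or be unreliable. Prefer claims "
        f"that are consistent across passages.\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )
```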
arXiv Detail & Related papers (2024-10-09T17:59:58Z)
- Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models [55.332004960574004]
Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well-established.
This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt.
We propose a prompt-response concept model that explains how LLMs generate responses and helps understand the relationship between prompts and response uncertainty.
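One simple, widely used proxy for such response uncertainty is disagreement across sampled generations; the sampler interface and the metric below are illustrative assumptions, not the paper's concept model:

```python
# Estimate uncertainty for a prompt by sampling several responses and
# measuring how often the most common answer recurs.
from collections import Counter
from typing import Callable

def response_uncertainty(prompt: str,
                         sample_fn: Callable[[str], str],  # hypothetical: one stochastic generation
                         n_samples: int = 10) -> float:
    answers = [sample_fn(prompt) for _ in range(n_samples)]
    top_count = Counter(answers).most_common(1)[0][1]
    return 1.0 - top_count / n_samples  # 0.0 = fully consistent, near 1.0 = highly uncertain
```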
arXiv Detail & Related papers (2024-07-20T11:19:58Z)
- Rejection Improves Reliability: Training LLMs to Refuse Unknown Questions Using RL from Knowledge Feedback [14.120154004011084]
Large Language Models (LLMs) often generate erroneous outputs, known as hallucinations.
We present a novel alignment framework called Reinforcement Learning from Knowledge Feedback (RLKF).
arXiv Detail & Related papers (2024-03-27T08:39:56Z)
- LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation [58.524237916836164]
We propose LEMMA: LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation.
Our method improves accuracy over the top baseline LVLM by 7% and 13% on the Twitter and Fakeddit datasets, respectively.
arXiv Detail & Related papers (2024-02-19T08:32:27Z)
- Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs [0.5461938536945721]
Large language models (LLMs) encapsulate a vast amount of factual information within their pre-trained weights.
This knowledge is inherently limited, relying heavily on the characteristics of the training data.
We compare two common approaches: unsupervised fine-tuning and retrieval-augmented generation.
arXiv Detail & Related papers (2023-12-10T16:52:00Z)
- RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge [69.79676144482792]
This study aims to evaluate the ability of LLMs to distinguish reliable information from external knowledge.
Our benchmark consists of two tasks, Question Answering and Text Generation, and for each task, we provide models with a context containing counterfactual information.
arXiv Detail & Related papers (2023-11-14T13:24:19Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
arXiv Detail & Related papers (2023-07-20T16:46:10Z)