When Seeing Overrides Knowing: Disentangling Knowledge Conflicts in Vision-Language Models
- URL: http://arxiv.org/abs/2507.13868v1
- Date: Fri, 18 Jul 2025 12:42:30 GMT
- Title: When Seeing Overrides Knowing: Disentangling Knowledge Conflicts in Vision-Language Models
- Authors: Francesco Ortu, Zhijing Jin, Diego Doimo, Alberto Cazzaniga
- Abstract summary: We analyze the mechanisms that vision-language models (VLMs) use to resolve cross-modal conflicts. Using logit inspection, we localize a small set of attention heads that control the conflict. We show that attention from these heads pinpoints the localized image regions driving visual overrides, outperforming gradient-based attribution in precision.
- Score: 13.390492503308792
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Vision-language models (VLMs) increasingly leverage diverse knowledge sources to address complex tasks, often encountering conflicts between their internal parametric knowledge and external information. Knowledge conflicts can result in hallucinations and unreliable responses, but the mechanisms governing such interactions remain unknown. To address this gap, we analyze the mechanisms that VLMs use to resolve cross-modal conflicts by introducing a dataset of multimodal counterfactual queries that deliberately contradict internal commonsense knowledge. Using logit inspection, we localize a small set of attention heads that control the conflict. Moreover, by modifying these heads, we can steer the model towards either its internal knowledge or the visual inputs. Finally, we show that attention from these heads pinpoints the localized image regions driving visual overrides, outperforming gradient-based attribution in precision.
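As a rough illustration of the logit-inspection step described in the abstract — projecting each attention head's contribution to the residual stream through the unembedding and checking whether it favours the visual (counterfactual) answer or the parametric one — here is a minimal sketch in PyTorch. The tensor shapes, variable names, and the idea of caching per-head contributions at the answer position are illustrative assumptions, not the authors' implementation.

```python
import torch

def head_logit_scores(head_outputs, W_U, visual_token_id, parametric_token_id):
    """Score each attention head by how much its residual-stream contribution
    pushes the answer logits toward the visual (counterfactual) token versus
    the model's internal (parametric) token.

    head_outputs: [n_layers, n_heads, d_model] contributions at the answer position
    W_U:          [d_model, vocab_size] unembedding matrix
    returns:      [n_layers, n_heads] logit differences (positive = visual override)
    """
    logits = head_outputs @ W_U                      # [n_layers, n_heads, vocab_size]
    return logits[..., visual_token_id] - logits[..., parametric_token_id]

# Toy usage with random tensors standing in for activations cached from a real VLM.
n_layers, n_heads, d_model, vocab_size = 4, 8, 64, 100
head_outputs = torch.randn(n_layers, n_heads, d_model)
W_U = torch.randn(d_model, vocab_size)

scores = head_logit_scores(head_outputs, W_U, visual_token_id=3, parametric_token_id=7)
top = torch.topk(scores.flatten(), k=3)
print("strongest visual-override heads (layer, head):",
      [divmod(int(i), n_heads) for i in top.indices])
```

Heads with the largest positive scores are candidates for the visual-override heads; the steering intervention mentioned in the abstract would correspond to scaling or replacing these heads' contributions at inference time.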
Related papers
- MAGIC: A Multi-Hop and Graph-Based Benchmark for Inter-Context Conflicts in Retrieval-Augmented Generation [4.177310099979434]
Knowledge conflict often arises in RAG systems, where retrieved documents may be inconsistent with one another or contradict the model's parametric knowledge. We propose a knowledge graph (KG)-based framework that generates varied and subtle conflicts between two similar yet distinct contexts. Experimental results on our benchmark, MAGIC, provide intriguing insights into the inner workings of LLMs regarding knowledge conflict.
arXiv Detail & Related papers (2025-07-29T07:19:49Z)
- Robust Multimodal Large Language Models Against Modality Conflict [94.12341487880465]
Multimodal large language models (MLLMs) are prone to hallucinations in real-world scenarios. We study the inherent conflicts in inputs from different modalities that place MLLMs in a dilemma and directly lead to hallucinations. Three methods are proposed to alleviate the hallucinations caused by modality conflict.
arXiv Detail & Related papers (2025-07-09T11:18:38Z)
- Benchmarking Multimodal Knowledge Conflict for Large Multimodal Models [23.37800506729006]
We propose MMKC-Bench, a benchmark to evaluate factual knowledge conflicts in both context-memory and inter-context scenarios. MMKC-Bench includes 1,573 knowledge instances and 3,381 images across 23 broad types, collected through automated pipelines with human verification. Our findings show that while current LMMs are capable of recognizing knowledge conflicts, they tend to favor internal parametric knowledge over external evidence.
arXiv Detail & Related papers (2025-05-26T04:39:30Z)
- Conflicts in Texts: Data, Implications and Challenges [58.03478157713084]
Conflicts could reflect the complexity of situations, changes that need to be explained and dealt with, difficulties in data annotation, and mistakes in generated outputs. This survey categorizes these conflicts into three key areas: (1) natural texts on the web, where factual inconsistencies, subjective biases, and multiple perspectives introduce contradictions; (2) human-annotated data, where annotator disagreements, mistakes, and societal biases impact model training; and (3) model interactions, where hallucinations and knowledge conflicts emerge during deployment. We highlight key challenges and future directions for developing conflict-aware NLP systems that can reason over and reconcile conflicting information more effectively.
arXiv Detail & Related papers (2025-04-28T04:24:01Z)
- SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models [6.52323086990482]
Vision-language models (VLMs) demonstrate sophisticated multimodal reasoning yet are prone to hallucination when confronted with knowledge conflicts. This research introduces SegSub, a framework for applying targeted image perturbations to investigate VLM resilience against knowledge conflicts.
arXiv Detail & Related papers (2025-02-19T00:26:38Z)
- Analysing the Residual Stream of Language Models Under Knowledge Conflicts [23.96385393039587]
Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context. This can lead to undesirable model behaviour, such as reliance on outdated or incorrect information.
arXiv Detail & Related papers (2024-10-21T15:12:51Z)
- Insight Over Sight: Exploring the Vision-Knowledge Conflicts in Multimodal LLMs [55.74117540987519]
This paper explores the problem of commonsense-level vision-knowledge conflicts in Multimodal Large Language Models (MLLMs). We introduce an automated framework, augmented with human-in-the-loop quality control, to generate inputs designed to simulate and evaluate these conflicts in MLLMs. Using this framework, we have crafted a diagnostic benchmark consisting of 374 original images and 1,122 high-quality question-answer pairs.
arXiv Detail & Related papers (2024-10-10T17:31:17Z)
- ECon: On the Detection and Resolution of Evidence Conflicts [56.89209046429291]
The rise of large language models (LLMs) has significantly influenced the quality of information in decision-making systems.
This study introduces a method for generating diverse, validated evidence conflicts to simulate real-world misinformation scenarios.
arXiv Detail & Related papers (2024-10-05T07:41:17Z)
- LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements [59.71218039095155]
The task of reading comprehension (RC) provides a primary means to assess language models' natural language understanding (NLU) capabilities. If the context aligns with the models' internal knowledge, it is hard to discern whether the models' answers stem from context comprehension or from internal information. To address this issue, we suggest using RC on imaginary data based on fictitious facts and entities.
arXiv Detail & Related papers (2024-04-09T13:08:56Z)
- Cutting Off the Head Ends the Conflict: A Mechanism for Interpreting and Mitigating Knowledge Conflicts in Language Models [18.2500350157507]
Internal memory and external context inevitably clash, leading to knowledge conflicts within language models (LMs).
We propose a novel method called PatH PatcHing (PH3), which can efficiently mitigate knowledge conflicts by pruning conflicting attention heads without updating model parameters (a generic head-masking sketch appears after this list).
arXiv Detail & Related papers (2024-02-28T08:34:41Z)
- Resolving Knowledge Conflicts in Large Language Models [46.903549751371415]
Large language models (LLMs) often encounter knowledge conflicts.
We ask what the desiderata are for LLMs when a knowledge conflict arises and whether existing LLMs fulfill them.
We introduce an evaluation framework for simulating contextual knowledge conflicts.
arXiv Detail & Related papers (2023-10-02T06:57:45Z)
- Visual Relationship Detection with Visual-Linguistic Knowledge from Multimodal Representations [103.00383924074585]
Visual relationship detection aims to reason over relationships among salient objects in images.
We propose a novel approach named Visual-Linguistic Representations from Transformers (RVL-BERT).
RVL-BERT performs spatial reasoning with both visual and language commonsense knowledge learned via self-supervised pre-training.
arXiv Detail & Related papers (2020-09-10T16:15:09Z)
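Below is a minimal, self-contained sketch of the head-pruning idea referenced in the PH3 entry above: selected attention heads are simply masked out of the attention output at inference time, with no weight updates. The module, mask buffer, and dimensions are illustrative assumptions, not the PH3 implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskableSelfAttention(nn.Module):
    def __init__(self, d_model=64, n_heads=8):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # 1.0 keeps a head, 0.0 prunes it; no parameters are updated.
        self.register_buffer("head_mask", torch.ones(n_heads))

    def forward(self, x):                                   # x: [batch, seq, d_model]
        b, s, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to [batch, heads, seq, d_head]
        q, k, v = (t.view(b, s, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        z = attn @ v                                         # [batch, heads, seq, d_head]
        z = z * self.head_mask.view(1, -1, 1, 1)             # zero out pruned heads
        z = z.transpose(1, 2).reshape(b, s, -1)
        return self.out(z)

attn = MaskableSelfAttention()
x = torch.randn(2, 5, 64)
attn.head_mask[3] = 0.0   # prune head 3, e.g. one identified as conflict-driving
print(attn(x).shape)      # torch.Size([2, 5, 64])
```

In a full model the same mask would be applied per layer, keeping all weights frozen while removing the contribution of the heads identified as driving the conflict.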