Reading Between the Lines: Abstaining from VLM-Generated OCR Errors via Latent Representation Probes
- URL: http://arxiv.org/abs/2511.19806v1
- Date: Tue, 25 Nov 2025 00:24:42 GMT
- Title: Reading Between the Lines: Abstaining from VLM-Generated OCR Errors via Latent Representation Probes
- Authors: Jihan Yao, Achin Kulshrestha, Nathalie Rauschmayr, Reed Roberts, Banghua Zhu, Yulia Tsvetkov, Federico Tombari
- Abstract summary: We propose Latent Representation Probing (LRP) to train lightweight probes on hidden states or attention patterns. LRP improves abstention accuracy by 7.6% over best baselines. This establishes a principled framework for building deployment-ready AI systems.
- Score: 79.36545159724703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As VLMs are deployed in safety-critical applications, their ability to abstain from answering when uncertain becomes crucial for reliability, especially in Scene Text Visual Question Answering (STVQA) tasks. For example, OCR errors like misreading "50 mph" as "60 mph" could cause severe traffic accidents. This leads us to ask: Can VLMs know when they can't see? Existing abstention methods suggest pessimistic answers: they either rely on miscalibrated output probabilities or require semantic agreement unsuitable for OCR tasks. However, this failure may indicate we are looking in the wrong place: uncertainty signals could be hidden in VLMs' internal representations. Building on this insight, we propose Latent Representation Probing (LRP): training lightweight probes on hidden states or attention patterns. We explore three probe designs: concatenating representations across all layers, aggregating attention over visual tokens, and ensembling single-layer probes by majority vote. Experiments on four benchmarks across image and video modalities show LRP improves abstention accuracy by 7.6% over best baselines. Our analysis reveals: probes generalize across various uncertainty sources and datasets, and optimal signals emerge from intermediate rather than final layers. This establishes a principled framework for building deployment-ready AI systems by detecting confidence signals from internal states rather than unreliable outputs.
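The probing idea in the abstract can be illustrated with a minimal sketch: a logistic-regression probe trained on pre-extracted hidden states, predicting whether the model's answer is trustworthy and abstaining otherwise. Everything below is an illustrative assumption (synthetic features standing in for VLM hidden states, made-up dimensions and function names), not the paper's actual implementation.

```python
import numpy as np

def train_probe(hidden_states, labels, lr=0.1, epochs=500):
    """Train a lightweight logistic-regression probe on (pre-extracted)
    hidden states: label 1 = the VLM's answer was correct, 0 = abstain."""
    n, d = hidden_states.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(hidden_states @ w + b)))  # sigmoid
        w -= lr * hidden_states.T @ (p - labels) / n         # gradient step
        b -= lr * np.mean(p - labels)
    return w, b

def should_abstain(hidden_state, w, b, threshold=0.5):
    """Abstain when the probe's predicted correctness falls below threshold."""
    p_correct = 1.0 / (1.0 + np.exp(-(hidden_state @ w + b)))
    return p_correct < threshold

# Synthetic stand-ins for hidden states (e.g., features concatenated
# across layers, as in the paper's first probe design).
rng = np.random.default_rng(0)
X_ok  = rng.normal(+1.0, 1.0, size=(200, 16))  # states when OCR was right
X_bad = rng.normal(-1.0, 1.0, size=(200, 16))  # states when OCR failed
X = np.vstack([X_ok, X_bad])
y = np.concatenate([np.ones(200), np.zeros(200)])

w, b = train_probe(X, y)
```

The paper's other two designs would change only the feature construction or the decision rule: attention aggregation over visual tokens replaces the hidden-state features, and the ensemble variant trains one such probe per layer and takes a majority vote over their abstain decisions.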
Related papers
- VLM-UQBench: A Benchmark for Modality-Specific and Cross-Modality Uncertainties in Vision Language Models [12.180198973471645]
We introduce VLM-UQBench, a benchmark for modality-specific and cross-modal data uncertainty in vision-language models (VLMs). It consists of 600 real-world samples drawn from the VizWiz dataset, curated into clean, image-, text-, and cross-modal uncertainty subsets, and a scalable perturbation pipeline with 8 visual, 5 textual, and 3 cross-modal perturbations.
arXiv Detail & Related papers (2026-02-09T21:37:09Z) - Same Answer, Different Representations: Hidden instability in VLMs [65.36933543377346]
We introduce a representation-aware and frequency-aware evaluation framework that measures internal embedding drift, spectral sensitivity, and structural smoothness. We apply this framework to modern Vision Language Models (VLMs) across the SEEDBench, MMMU, and POPE datasets.
arXiv Detail & Related papers (2026-02-06T12:24:26Z) - Knowing When to Answer: Adaptive Confidence Refinement for Reliable Audio-Visual Question Answering [15.39457034915546]
We present a formal problem formulation for Reliable Audio-Visual Question Answering (R-AVQA), where we prefer abstention over answering incorrectly. We propose Adaptive Confidence Refinement (ACR), a lightweight method to further enhance the performance of R-AVQA.
arXiv Detail & Related papers (2026-02-04T08:35:33Z) - DRIFT: Detecting Representational Inconsistencies for Factual Truthfulness [5.785021425715989]
LLMs often produce fluent but incorrect answers, yet detecting such hallucinations typically requires multiple sampling passes or post-hoc verification. We propose a lightweight probe to read these signals directly from hidden states. We develop an LLM router that answers confident queries immediately while delegating uncertain ones to stronger models.
arXiv Detail & Related papers (2026-01-20T18:16:10Z) - CARE What Fails: Contrastive Anchored-REflection for Verifiable Multimodal [84.71254539482369]
Group-relative reinforcement learning with verifiable rewards (RLVR) often wastes the most informative data it already has: the failures. We present CARE, a failure-centric post-training framework for multimodal reasoning that turns errors into supervision. CARE improves accuracy and training smoothness while explicitly increasing the share of learning signal that comes from failures.
arXiv Detail & Related papers (2025-12-22T16:34:21Z) - Prune-Then-Plan: Step-Level Calibration for Stable Frontier Exploration in Embodied Question Answering [52.69447404069251]
Large vision-language models (VLMs) have improved embodied question answering (EQA) agents by providing strong semantic priors for open-vocabulary reasoning. We propose Prune-Then-Plan, a framework that stabilizes exploration through step-level calibration.
arXiv Detail & Related papers (2025-11-24T22:50:50Z) - Seeing but Not Believing: Probing the Disconnect Between Visual Attention and Answer Correctness in VLMs [72.8370367403852]
Vision-Language Models (VLMs) achieve strong results on multimodal tasks such as visual question answering, yet they can still fail even when the correct visual evidence is present. We show that shallow layers focus primarily on text, while deeper layers sparsely but reliably attend to localized evidence regions. We introduce an inference-time intervention that highlights deep-layer evidence regions through selective attention-based masking.
arXiv Detail & Related papers (2025-10-20T17:31:09Z) - VOGUE: Guiding Exploration with Visual Uncertainty Improves Multimodal Reasoning [62.09195763860549]
Reinforcement learning with verifiable rewards (RLVR) improves reasoning in large language models (LLMs) but struggles with exploration. We introduce VOGUE (Visual Uncertainty Guided Exploration), a novel method that shifts exploration from the output (text) to the input (visual) space. Our work shows that grounding exploration in the inherent uncertainty of visual inputs is an effective strategy for improving multimodal reasoning.
arXiv Detail & Related papers (2025-10-01T20:32:08Z) - Can VLMs Recall Factual Associations From Visual References? [30.821053378797007]
We identify a systematic deficiency in the multimodal grounding of Vision Language Models (VLMs). Forcing VLMs to rely on image representations of an entity halves their ability to recall factual knowledge. We show that such linking failures are correlated with the expression of distinct patterns in model internal states.
arXiv Detail & Related papers (2025-08-22T16:47:37Z) - Consensus Entropy: Harnessing Multi-VLM Agreement for Self-Verifying and Self-Improving OCR [30.240680920617447]
We introduce Consensus Entropy (CE), a training-free post-inference method that quantifies OCR uncertainty. We develop a lightweight multi-model framework that effectively identifies problematic samples, selects the best outputs, and combines model strengths.
arXiv Detail & Related papers (2025-04-15T11:51:18Z) - Beyond Next Token Probabilities: Learnable, Fast Detection of Hallucinations and Data Contamination on LLM Output Distributions [60.43398881149664]
We introduce LOS-Net, a lightweight attention-based architecture trained on an efficient encoding of the LLM Output Signature. It achieves superior performance across diverse benchmarks and LLMs, while maintaining extremely low detection latency.
arXiv Detail & Related papers (2025-03-18T09:04:37Z) - Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives [56.528835143531694]
We introduce DriveBench, a benchmark dataset designed to evaluate Vision-Language Models (VLMs). Our findings reveal that VLMs often generate plausible responses derived from general knowledge or textual cues rather than true visual grounding. We propose refined evaluation metrics that prioritize robust visual grounding and multi-modal understanding.
arXiv Detail & Related papers (2025-01-07T18:59:55Z) - Decompose and Compare Consistency: Measuring VLMs' Answer Reliability via Task-Decomposition Consistency Comparison [22.438863942925973]
We propose Decompose and Compare Consistency (DeCC) for reliability measurement.
By comparing the consistency between the VLM's direct answer and the answers obtained through task decomposition, DeCC measures the reliability of the VLM's direct answer.
arXiv Detail & Related papers (2024-07-10T17:00:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.