VISTA Score: Verification In Sequential Turn-based Assessment
- URL: http://arxiv.org/abs/2510.27052v2
- Date: Thu, 06 Nov 2025 17:44:55 GMT
- Title: VISTA Score: Verification In Sequential Turn-based Assessment
- Authors: Ashley Lewis, Andrew Perrault, Eric Fosler-Lussier, Michael White
- Abstract summary: We introduce VISTA, a framework for evaluating conversational factuality through claim-level verification and sequential consistency tracking. VISTA decomposes each assistant turn into atomic factual claims, verifies them against trusted sources and dialogue history, and categorizes unverifiable statements. Human evaluation confirms that VISTA's decomposition improves annotator agreement and reveals inconsistencies in existing benchmarks.
- Score: 18.318681275086902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hallucination--defined here as generating statements unsupported or contradicted by available evidence or conversational context--remains a major obstacle to deploying conversational AI systems in settings that demand factual reliability. Existing metrics either evaluate isolated responses or treat unverifiable content as errors, limiting their use for multi-turn dialogue. We introduce VISTA (Verification In Sequential Turn-based Assessment), a framework for evaluating conversational factuality through claim-level verification and sequential consistency tracking. VISTA decomposes each assistant turn into atomic factual claims, verifies them against trusted sources and dialogue history, and categorizes unverifiable statements (subjective, contradicted, lacking evidence, or abstaining). Across eight large language models and four dialogue factuality benchmarks (AIS, BEGIN, FAITHDIAL, and FADE), VISTA substantially improves hallucination detection over FACTSCORE and LLM-as-Judge baselines. Human evaluation confirms that VISTA's decomposition improves annotator agreement and reveals inconsistencies in existing benchmarks. By modeling factuality as a dynamic property of conversation, VISTA offers a more transparent, human-aligned measure of truthfulness in dialogue systems.
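The abstract's pipeline (decompose a turn into atomic claims, verify each against evidence and dialogue history, categorize anything unverifiable) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the string matching below stands in for the paper's actual verifier, and the names (`verify_claims`, `turn_score`) and the scoring formula are assumptions for illustration.

```python
from enum import Enum

class ClaimStatus(Enum):
    SUPPORTED = "supported"          # matches trusted evidence or a prior turn
    SUBJECTIVE = "subjective"        # opinion, not objectively checkable
    CONTRADICTED = "contradicted"    # conflicts with available evidence
    LACKING_EVIDENCE = "lacking"     # no source supports or refutes it
    ABSTAINING = "abstaining"        # the assistant declines to answer

def verify_claims(claims, evidence, history):
    """Assign each atomic claim one of the categories named in the abstract.

    Exact string matching is a toy stand-in for the paper's verifier,
    which compares claim semantics rather than surface forms.
    """
    knowledge = [fact.lower() for fact in evidence + history]
    labeled = []
    for claim in claims:
        c = claim.lower()
        if c.startswith(("i think", "in my opinion")):
            labeled.append((claim, ClaimStatus.SUBJECTIVE))
        elif "i don't know" in c or "cannot say" in c:
            labeled.append((claim, ClaimStatus.ABSTAINING))
        elif c in knowledge:
            labeled.append((claim, ClaimStatus.SUPPORTED))
        elif any(k == "not " + c or c == "not " + k for k in knowledge):
            labeled.append((claim, ClaimStatus.CONTRADICTED))
        else:
            labeled.append((claim, ClaimStatus.LACKING_EVIDENCE))
    return labeled

def turn_score(labeled):
    """Fraction of objective claims that are supported -- one plausible
    per-turn factuality score; the paper's exact formula may differ."""
    objective = [status for _, status in labeled
                 if status not in (ClaimStatus.SUBJECTIVE,
                                   ClaimStatus.ABSTAINING)]
    if not objective:
        return 1.0
    return sum(s is ClaimStatus.SUPPORTED for s in objective) / len(objective)
```

Treating subjective and abstaining claims as out of scope (rather than as errors) reflects the abstract's point that unverifiable content should be categorized, not penalized wholesale.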
Related papers
- TRACE: A Framework for Analyzing and Enhancing Stepwise Reasoning in Vision-Language Models [9.607579442309639]
We introduce TRACE, a framework for Transparent Reasoning And Consistency Evaluation. At its core, TRACE leverages Auxiliary Reasoning Sets (ARS) to decompose complex problems. Our experiments show that consistency across ARS correlates with final-answer correctness. TRACE defines confidence regions that distinguish reliable from unreliable reasoning paths.
arXiv Detail & Related papers (2025-12-05T18:40:18Z) - From Fact to Judgment: Investigating the Impact of Task Framing on LLM Conviction in Dialogue Systems [8.8953040142657]
We investigate how an LLM's conviction changes when a task is reframed from a direct factual query to a Conversational Judgment Task. We apply pressure in the form of a simple rebuttal ("The previous answer is incorrect.") to both conditions. Our findings show that while some models like GPT-4o-mini reveal sycophantic tendencies under social framing tasks, others like Llama-8B-Instruct become overly critical.
arXiv Detail & Related papers (2025-11-14T00:55:28Z) - SVeritas: Benchmark for Robust Speaker Verification under Diverse Conditions [54.34001921326444]
Speaker verification (SV) models are increasingly integrated into security, personalization, and access control systems. Existing benchmarks evaluate only subsets of these conditions, missing others entirely. We introduce SVeritas, a comprehensive benchmark suite that assesses SV systems under stressors such as recording duration, spontaneity, content, noise, microphone distance, reverberation, channel mismatches, audio bandwidth, codecs, speaker age, and susceptibility to spoofing and adversarial attacks.
arXiv Detail & Related papers (2025-09-21T14:11:16Z) - Say It Another Way: Auditing LLMs with a User-Grounded Automated Paraphrasing Framework [17.91981142492207]
We introduce AUGMENT, a framework for generating controlled paraphrases grounded in user behaviors. AUGMENT leverages linguistically informed rules and enforces quality through checks on instruction adherence, semantic similarity, and realism. Case studies show that controlled paraphrases uncover systematic weaknesses that remain obscured under unconstrained variation.
arXiv Detail & Related papers (2025-05-06T14:17:30Z) - Measuring the Robustness of Reference-Free Dialogue Evaluation Systems [12.332146893333952]
We present a benchmark for evaluating the robustness of reference-free dialogue metrics against four categories of adversarial attacks. We analyze metrics such as DialogRPT, UniEval, and PromptEval across grounded and ungrounded datasets.
arXiv Detail & Related papers (2025-01-12T06:41:52Z) - FactLens: Benchmarking Fine-Grained Fact Verification [6.814173254027381]
We advocate for a shift towards fine-grained verification, where complex claims are broken down into smaller sub-claims for individual verification. We introduce FactLens, a benchmark for evaluating fine-grained fact verification, with metrics and automated evaluators of sub-claim quality. Our results show alignment between automated FactLens evaluators and human judgments, and we discuss the impact of sub-claim characteristics on the overall verification performance.
arXiv Detail & Related papers (2024-11-08T21:26:57Z) - VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models [57.43276586087863]
Large Vision-Language Models (LVLMs) suffer from hallucination issues, wherein the models generate plausible-sounding but factually incorrect outputs.
Existing benchmarks are often limited in scope, focusing mainly on object hallucinations.
We introduce a multi-dimensional benchmark covering objects, attributes, and relations, with challenging images selected based on associative biases.
arXiv Detail & Related papers (2024-04-22T04:49:22Z) - Context Does Matter: Implications for Crowdsourced Evaluation Labels in Task-Oriented Dialogue Systems [57.16442740983528]
Crowdsourced labels play a crucial role in evaluating task-oriented dialogue systems.
Previous studies suggest using only a portion of the dialogue context in the annotation process.
This study investigates the influence of dialogue context on annotation quality.
arXiv Detail & Related papers (2024-04-15T17:56:39Z) - FactCHD: Benchmarking Fact-Conflicting Hallucination Detection [64.4610684475899]
FactCHD is a benchmark designed for the detection of fact-conflicting hallucinations from LLMs.
FactCHD features a diverse dataset that spans various factuality patterns, including vanilla, multi-hop, comparison, and set operation.
We introduce Truth-Triangulator that synthesizes reflective considerations by tool-enhanced ChatGPT and LoRA-tuning based on Llama2.
arXiv Detail & Related papers (2023-10-18T16:27:49Z) - DynaEval: Unifying Turn and Dialogue Level Evaluation [60.66883575106898]
We propose DynaEval, a unified automatic evaluation framework.
It not only performs turn-level evaluation but also holistically considers the quality of the entire dialogue.
Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model.
arXiv Detail & Related papers (2021-06-02T12:23:18Z) - I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling [104.09033240889106]
We introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.
We then compare a structured utterance-based approach of using pre-trained Transformer models for contradiction detection with the typical unstructured approach.
arXiv Detail & Related papers (2020-12-24T18:47:49Z) - CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers [92.5628632009802]
We propose controllable counterfactuals (CoCo) to bridge the gap and evaluate dialogue state tracking (DST) models on novel scenarios.
CoCo generates novel conversation scenarios in two steps: (i) counterfactual goal generation at turn-level by dropping and adding slots followed by replacing slot values, and (ii) counterfactual conversation generation that is conditioned on (i) and consistent with the dialogue flow.
Human evaluations show that CoCo-generated conversations reflect the underlying user goal with more than 95% accuracy and are as human-like as the original conversations.
arXiv Detail & Related papers (2020-10-24T09:39:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.