FACTS&EVIDENCE: An Interactive Tool for Transparent Fine-Grained Factual Verification of Machine-Generated Text
- URL: http://arxiv.org/abs/2503.14797v1
- Date: Wed, 19 Mar 2025 00:14:55 GMT
- Title: FACTS&EVIDENCE: An Interactive Tool for Transparent Fine-Grained Factual Verification of Machine-Generated Text
- Authors: Varich Boonsanong, Vidhisha Balachandran, Xiaochuang Han, Shangbin Feng, Lucy Lu Wang, Yulia Tsvetkov
- Abstract summary: Facts&Evidence is an interactive tool for user-driven verification of complex text. It visualizes the credibility of individual claims along with an explanation of model decisions and attribution to multiple, diverse evidence sources.
- Score: 39.804889153032526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the widespread consumption of AI-generated content, there has been an increased focus on developing automated tools to verify the factual accuracy of such content. However, prior research and tools developed for fact verification treat it as a binary classification or a linear regression problem. Although this is a useful mechanism as part of automatic guardrails in systems, we argue that such tools lack transparency in the prediction reasoning and diversity in source evidence to provide a trustworthy user experience. We develop Facts&Evidence - an interactive and transparent tool for user-driven verification of complex text. The tool facilitates the intricate decision-making involved in fact verification, presenting its users with a breakdown of complex input texts to visualize the credibility of individual claims along with an explanation of model decisions and attribution to multiple, diverse evidence sources. Facts&Evidence aims to empower consumers of machine-generated text, giving them agency to understand, verify, selectively trust, and use such text.
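The abstract describes a three-stage architecture: decompose input text into claims, score each claim's credibility, and attach an explanation plus diverse evidence. The following is a minimal sketch of such a pipeline, not the authors' implementation; the type names, the sentence-based decomposition, and the stub verifier are all illustrative assumptions.

```python
# Minimal sketch of a Facts&Evidence-style verification pipeline.
# All names and the scoring logic are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str   # e.g. a URL or document id
    snippet: str  # supporting or refuting passage

@dataclass
class ClaimVerdict:
    claim: str
    credibility: float  # 0.0 (refuted) .. 1.0 (supported)
    explanation: str    # human-readable model rationale
    evidence: list[Evidence] = field(default_factory=list)

def decompose(text: str) -> list[str]:
    """Placeholder claim decomposition: one claim per sentence.
    A real system would use a trained claim-splitting model."""
    return [s.strip() for s in text.split(".") if s.strip()]

def verify(claim: str) -> ClaimVerdict:
    """Placeholder verifier: a real system would retrieve diverse
    sources and score the claim with a trained model."""
    return ClaimVerdict(claim=claim, credibility=0.5,
                        explanation="No evidence retrieved in this stub.")

def verify_document(text: str) -> list[ClaimVerdict]:
    return [verify(c) for c in decompose(text)]

if __name__ == "__main__":
    for v in verify_document("The Eiffel Tower is in Paris. It opened in 1889."):
        print(f"{v.credibility:.2f}  {v.claim}")
```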
Related papers
- Decoding the Digital Fine Print: Navigating the potholes in Terms of service/ use of GenAI tools against the emerging need for Transparent and Trustworthy Tech Futures [0.0]
The research investigates the crucial role of clear and intelligible terms of service in cultivating user trust and facilitating informed decision-making in the context of AI, specifically GenAI.
It highlights the obstacles presented by complex legal terminology and detailed fine print, which impede genuine user consent and recourse.
Findings indicate inconsistencies and variability in document quality, signaling a pressing demand for uniformity in disclosure practices.
arXiv Detail & Related papers (2024-03-26T04:54:53Z)
- ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs [13.608282497568108]
ClaimVer is a human-centric framework tailored to meet users' informational and verification needs.
It highlights each claim, verifies it against a trusted knowledge graph, and provides succinct, clear explanations for each claim prediction.
arXiv Detail & Related papers (2024-03-12T17:07:53Z)
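As a rough illustration of the claim-level knowledge-graph verification that ClaimVer describes (not ClaimVer's actual method), the sketch below matches subject-relation-object triples against a toy trusted store; the triples and the matching rule are assumptions.

```python
# Sketch of claim verification against a knowledge graph, in the
# spirit of ClaimVer. The triple store and matching rule are toy
# assumptions; the real system uses a trusted KG and a learned model.
Triple = tuple[str, str, str]  # (subject, relation, object)

KG: set[Triple] = {
    ("Eiffel Tower", "located_in", "Paris"),
    ("Eiffel Tower", "opened", "1889"),
}

def verify_triple(triple: Triple) -> tuple[bool, str]:
    if triple in KG:
        return True, f"Matched trusted triple {triple}."
    # Flag contradictions: same subject and relation, different object.
    for s, r, o in KG:
        if (s, r) == triple[:2] and o != triple[2]:
            return False, f"Contradicts trusted triple {(s, r, o)}."
    return False, "No supporting triple found."

print(verify_triple(("Eiffel Tower", "located_in", "Paris")))
print(verify_triple(("Eiffel Tower", "opened", "1890")))
```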
- RELIC: Investigating Large Language Model Responses using Self-Consistency [58.63436505595177]
Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations.
We propose an interactive system that helps users gain insight into the reliability of the generated text.
arXiv Detail & Related papers (2023-11-28T14:55:52Z)
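RELIC surfaces reliability by checking whether an LLM's sampled responses agree with one another. A minimal self-consistency score, assuming the answers have already been sampled from the model with nonzero temperature, could look like this:

```python
# Sketch of self-consistency checking in the spirit of RELIC:
# sample several answers to one prompt and treat disagreement as a
# hallucination signal. The sampled answers here are hard-coded
# stand-ins for repeated LLM calls.
from collections import Counter

def consistency_score(answers: list[str]) -> float:
    """Fraction of samples that agree with the majority answer."""
    if not answers:
        return 0.0
    top, count = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return count / len(answers)

samples = ["1889", "1889", "1887", "1889"]  # e.g. 4 sampled model answers
print(consistency_score(samples))  # 0.75 -> moderately reliable
```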
- Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z)
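To make that attack setting concrete, the toy simulation below plants synthetic contradicting passages into the corpus that a deliberately naive majority-vote "fact-checker" retrieves from; the corpus, the claim, and the checker are all invented for illustration.

```python
# Toy simulation of evidence poisoning against a naive fact-checker.
corpus = [
    "The Eiffel Tower opened in 1889.",
    "Construction finished in 1889.",
]

def naive_verdict(evidence: list[str], claim_year: str = "1889") -> bool:
    """Deliberately naive checker: majority of passages mention the year."""
    votes = [claim_year in passage for passage in evidence]
    return sum(votes) > len(votes) / 2

print(naive_verdict(corpus))  # True: claim supported
# Adversary floods the corpus with synthetic contradicting passages:
corpus += ["The Eiffel Tower opened in 1899."] * 3
print(naive_verdict(corpus))  # False: verdict flipped by poisoning
```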
- Scaling up Memory-Efficient Formal Verification Tools for Tree Ensembles [2.588973722689844]
We formalise and extend the VoTE algorithm presented earlier as a tool description.
We show how the separation of property checking from the core verification engine enables verification of versatile requirements.
We demonstrate the application of the tool in two case studies, namely digit recognition and aircraft collision avoidance.
arXiv Detail & Related papers (2021-05-06T11:50:22Z)
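The design point highlighted above is the separation of property checking from the core verification engine. The brute-force sketch below illustrates that separation on a toy ensemble of decision stumps; real tools like VoTE scale far beyond this, and every name here is an assumption.

```python
# Sketch of decoupled property checking for a tree ensemble.
# Enumerating all branch combinations over-approximates the reachable
# outputs (it ignores feature consistency across trees), which is
# still sound for proving safety properties.
import itertools

# Each stump: (feature index, threshold, value_if_below, value_if_above)
ensemble = [(0, 0.5, -1.0, 1.0), (1, 0.3, -0.5, 0.5)]

def regions(ensemble):
    """Yield the ensemble output for every combination of branch outcomes."""
    for branches in itertools.product([False, True], repeat=len(ensemble)):
        yield sum(t[3] if hi else t[2] for t, hi in zip(ensemble, branches))

def check(property_holds):
    """Property checker, kept separate from the enumeration engine."""
    return all(property_holds(out) for out in regions(ensemble))

# Property: ensemble output always stays within [-2, 2].
print(check(lambda y: -2.0 <= y <= 2.0))  # True
```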
- Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder [104.25716317141321]
We propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts.
Our approach provides state-of-the-art performance on both Event2Mind and ATOMIC datasets.
arXiv Detail & Related papers (2020-06-15T02:59:52Z)
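The paper's core loop is: find evidence for an event in a large corpus, then condition generation on it. The sketch below substitutes a token-overlap retriever and a template "generator" for the paper's learned VQ-VAE components, so it shows only the data flow, not the method.

```python
# Sketch of evidence-guided generation: retrieve relevant passages by
# token overlap and condition generation on them. The retriever and
# generator here are simplifications invented for illustration.
def retrieve(event: str, corpus: list[str], k: int = 2) -> list[str]:
    toks = set(event.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(toks & set(p.lower().split())))
    return ranked[:k]

def generate(event: str, evidence: list[str]) -> str:
    context = " ".join(evidence)
    return f"Given '{context}', a plausible inference about '{event}' ..."

corpus = ["PersonX studied all night for the exam.",
          "Exams cause stress for many students."]
print(generate("PersonX takes an exam", retrieve("PersonX takes an exam", corpus)))
```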
- A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that using this framework, a transformer based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
arXiv Detail & Related papers (2020-05-01T21:22:08Z)
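CGRG conditions the generator on the dialogue context, user-provided control phrases, and grounding text. A minimal sketch of assembling such a conditioned input follows; the separator token and formatting are assumptions, not the paper's exact serialization.

```python
# Sketch of a CGRG-style conditioned input: dialogue context, control
# phrases, and grounding text concatenated with a separator token.
# The "<sep>" token and the layout are assumptions for illustration.
def build_cgrg_input(context: str, control_phrases: list[str], grounding: str) -> str:
    return " <sep> ".join([
        context,
        " ; ".join(control_phrases),  # lexical controls the response must reflect
        grounding,                    # document the response should stay grounded in
    ])

print(build_cgrg_input(
    "Who designed the Eiffel Tower?",
    ["Gustave Eiffel", "1889"],
    "The tower was designed by Gustave Eiffel's company for the 1889 fair.",
))
```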
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process: generating justifications for verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
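The reported gain comes from optimising veracity prediction and explanation generation jointly rather than separately. A framework-free sketch of such a combined multi-task objective follows; the weighting scheme is an assumption, not the paper's exact formulation.

```python
# Sketch of a joint multi-task objective for fact checking:
# L = a * L_veracity + (1 - a) * L_explanation, minimised by one model.
# The weight alpha is an assumed hyperparameter.
def joint_loss(veracity_loss: float, explanation_loss: float,
               alpha: float = 0.5) -> float:
    """Combined objective over both tasks, trained together."""
    return alpha * veracity_loss + (1 - alpha) * explanation_loss

print(joint_loss(0.8, 1.2))  # 1.0
```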