Evaluating Transparency of Machine Generated Fact Checking Explanations
- URL: http://arxiv.org/abs/2406.12645v1
- Date: Tue, 18 Jun 2024 14:13:13 GMT
- Title: Evaluating Transparency of Machine Generated Fact Checking Explanations
- Authors: Rui Xing, Timothy Baldwin, Jey Han Lau
- Abstract summary: We investigate the impact of human-curated vs. machine-selected evidence for explanation generation using large language models.
Surprisingly, we find that large language models generate explanations of similar or higher quality from machine-selected evidence.
- Score: 48.776087871960584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important factor when it comes to generating fact-checking explanations is the selection of evidence: intuitively, high-quality explanations can only be generated given the right evidence. In this work, we investigate the impact of human-curated vs. machine-selected evidence for explanation generation using large language models. To assess the quality of explanations, we focus on transparency (whether an explanation cites sources properly) and utility (whether an explanation is helpful in clarifying a claim). Surprisingly, we find that large language models generate explanations of similar or higher quality from machine-selected evidence, suggesting that carefully curated evidence (by humans) may not be necessary. That said, even with the best model, the generated explanations are not always faithful to the sources, suggesting further room for improvement in explanation generation for fact-checking.
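To make the transparency criterion concrete, here is a minimal sketch of a citation-coverage check. It assumes explanations mark citations as bracketed evidence indices like "[1]"; the citation format, function name, and scoring are illustrative assumptions, not the authors' evaluation protocol.

```python
import re

def transparency_check(explanation: str, evidence: list[str]) -> dict:
    """Toy transparency probe: does every sentence cite a valid source?

    Assumes citations appear as bracketed evidence indices such as
    "[1]" -- an illustrative convention, not the paper's format.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", explanation) if s]
    valid_ids = set(range(1, len(evidence) + 1))
    cited_sentences, dangling = 0, []
    for sent in sentences:
        ids = {int(m) for m in re.findall(r"\[(\d+)\]", sent)}
        if ids:
            cited_sentences += 1
        dangling.extend(sorted(i for i in ids if i not in valid_ids))
    return {
        "citation_coverage": cited_sentences / len(sentences) if sentences else 0.0,
        "dangling_citations": dangling,  # citations to nonexistent sources
    }

evidence = ["WHO situation report ...", "Reuters fact check ..."]
explanation = "The claim is false [1]. Independent checks agree [2]."
print(transparency_check(explanation, evidence))
# -> {'citation_coverage': 1.0, 'dangling_citations': []}
```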
Related papers
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models [36.91218391728405]
This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning.
It can verify complex claims and generate explanations without the need for annotated evidence.
Our experimental results indicate that FOLK outperforms strong baselines on three datasets (a toy sketch of the idea follows this entry).
arXiv Detail & Related papers (2023-10-08T18:04:05Z)
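A toy sketch of the general shape of FOL-guided verification: decompose a claim into first-order predicates, check each one, and conjoin the results. The hard-coded knowledge lookup and the example decomposition are invented for illustration; FOLK itself grounds predicates with an LLM over retrieved knowledge.

```python
# Toy sketch: verify a claim as a conjunction of first-order
# predicates. The knowledge lookup is hard-coded for illustration;
# FOLK grounds each predicate with an LLM over retrieved knowledge.
def verify_predicate(predicate: str, args: tuple) -> bool:
    toy_kb = {
        ("BornIn", ("Marie Curie", "Warsaw")): True,
        ("WonPrize", ("Marie Curie", "Nobel Prize")): True,
    }
    return toy_kb.get((predicate, args), False)

# Claim: "Marie Curie was born in Warsaw and won the Nobel Prize."
decomposition = [
    ("BornIn", ("Marie Curie", "Warsaw")),
    ("WonPrize", ("Marie Curie", "Nobel Prize")),
]

per_predicate = {pred: verify_predicate(*pred) for pred in decomposition}
verdict = all(per_predicate.values())  # conjunction of sub-verdicts
print(per_predicate)
print("SUPPORTED" if verdict else "REFUTED")
```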
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose ReRead, a fact verification model that retrieves evidence and verifies claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- ExClaim: Explainable Neural Claim Verification Using Rationalization [8.369720566612111]
ExClaim attempts to provide an explainable claim verification system with foundational evidence.
Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim.
Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes.
arXiv Detail & Related papers (2023-01-21T08:26:27Z)
- Ask to Know More: Generating Counterfactual Explanations for Fake Claims [11.135087647482145]
We propose elucidating fact-checking predictions using counterfactual explanations to help people understand why a piece of news was identified as fake.
In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately.
Results suggest that the proposed approach generates the most helpful explanations compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T04:42:00Z)
- Diagnostics-Guided Explanation Generation [32.97930902104502]
Explanations shed light on a machine learning model's rationales and can aid in identifying deficiencies in its reasoning process.
We show how to optimise for several diagnostic properties when training a model to generate sentence-level explanations.
arXiv Detail & Related papers (2021-09-08T16:27:52Z)
- Are Training Resources Insufficient? Predict First Then Explain! [54.184609286094044]
We argue that the predict-then-explain (PtE) architecture is the more efficient approach from a modelling perspective.
We show that the PtE structure is the most data-efficient approach when explanation data are lacking.
arXiv Detail & Related papers (2021-08-29T07:04:50Z)
- ProoFVer: Natural Logic Theorem Proving for Fact Verification [24.61301908217728]
We propose ProoFVer, a proof system for fact verification using natural logic.
The generation of proofs makes ProoFVer an explainable system.
We find that humans correctly simulate ProoFVer's decisions more often when shown the proofs.
arXiv Detail & Related papers (2021-08-25T17:23:04Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence (a toy prompt sketch follows this entry).
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
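A minimal sketch of the contrastive-prompting idea from the entry above, assuming a template of the form "X rather than Y because ..."; the paper's actual templates, tasks, and models differ.

```python
# Toy contrastive prompt: the model's continuation after "because"
# serves as the explanation of why `answer` rather than `foil`.
def contrastive_prompt(question: str, answer: str, foil: str) -> str:
    return f"{question}\n'{answer}' rather than '{foil}' because"

prompt = contrastive_prompt(
    question="A pea is smaller than a melon. Which one fits in a pocket?",
    answer="pea",
    foil="melon",
)
print(prompt)
# Feed `prompt` to any pretrained language model and take the
# generated continuation as the contrastive explanation.
```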
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations (a worked toy example follows this entry).
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
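The contrast above can be reproduced on a trivial two-feature OR model. The toy code below (a toy construction, not taken from the paper) computes exact Shapley values and minimal sufficient subsets for the input (1, 1): Shapley splits credit equally between the features, while either feature alone forms a sufficient subset.

```python
from itertools import chain, combinations, permutations
from math import factorial

f = lambda x: int(x[0] or x[1])   # trivial model: logical OR
x, baseline, n = (1, 1), (0, 0), 2

def masked(keep):
    """Keep features in `keep` at their true value; reset the rest
    to the baseline."""
    return tuple(x[i] if i in keep else baseline[i] for i in range(n))

# Exact Shapley values: average marginal contribution of each
# feature over all feature orderings.
shapley = [0.0] * n
for order in permutations(range(n)):
    seen = set()
    for i in order:
        before = f(masked(seen))
        seen.add(i)
        shapley[i] += (f(masked(seen)) - before) / factorial(n)

# Minimal sufficient subsets: the smallest feature sets that by
# themselves preserve the model's output.
all_subsets = chain.from_iterable(combinations(range(n), k) for k in range(n + 1))
sufficient = [set(s) for s in all_subsets if f(masked(s)) == f(x)]
smallest = min(len(s) for s in sufficient)
mss = [s for s in sufficient if len(s) == smallest]

print("Shapley:", shapley)  # [0.5, 0.5] -- credit split across both features
print("MSS:", mss)          # [{0}, {1}] -- either feature alone suffices
```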
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.