Generating Fact Checking Explanations
- URL: http://arxiv.org/abs/2004.05773v1
- Date: Mon, 13 Apr 2020 05:23:25 GMT
- Title: Generating Fact Checking Explanations
- Authors: Pepa Atanasova and Jakob Grue Simonsen and Christina Lioma and
Isabelle Augenstein
- Abstract summary: A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process: generating justifications for verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
- Score: 52.879658637466605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing work on automated fact checking is concerned with predicting
the veracity of claims based on metadata, social network spread, language used
in claims, and, more recently, evidence supporting or denying claims. A crucial
piece of the puzzle that is still missing is to understand how to automate the
most elaborate part of the process -- generating justifications for verdicts on
claims. This paper provides the first study of how these explanations can be
generated automatically based on available claim context, and how this task can
be modelled jointly with veracity prediction. Our results indicate that
optimising both objectives at the same time, rather than training them
separately, improves the performance of a fact checking system. The results of
a manual evaluation further suggest that the informativeness, coverage and
overall quality of the generated explanations are also improved in the
multi-task model.
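To make the joint set-up concrete, here is a minimal sketch of a multi-task model, assuming a shared encoder feeding a veracity classification head and a token-level explanation head trained under a summed loss. The architecture, dimensions, and label set are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class JointFactChecker(nn.Module):
    """Hypothetical multi-task model: one shared encoder, two task heads."""
    def __init__(self, vocab_size=30522, hidden=768, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.veracity_head = nn.Linear(hidden, num_labels)   # e.g. true / false / mixed
        self.explain_head = nn.Linear(hidden, vocab_size)    # per-token explanation logits

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))          # (batch, seq, hidden)
        veracity_logits = self.veracity_head(h[:, 0])    # pool the first token
        explain_logits = self.explain_head(h)            # token-level predictions
        return veracity_logits, explain_logits

# Joint optimisation: sum the two losses instead of training the tasks separately.
model = JointFactChecker()
ce = nn.CrossEntropyLoss()
input_ids = torch.randint(0, 30522, (2, 64))
veracity_gold = torch.tensor([0, 2])
explain_gold = torch.randint(0, 30522, (2, 64))
v_logits, e_logits = model(input_ids)
loss = ce(v_logits, veracity_gold) + ce(e_logits.reshape(-1, 30522), explain_gold.reshape(-1))
loss.backward()
```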
Related papers
- Likelihood as a Performance Gauge for Retrieval-Augmented Generation [78.28197013467157]
We show that likelihoods serve as an effective gauge for language model performance.
We propose two methods that use question likelihood as a gauge for selecting and constructing prompts that lead to better performance.
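A minimal sketch of the gauge, assuming the average log-likelihood of the question tokens under a causal LM is used to rank candidate prompts; the model choice, helper name, and prompts are illustrative, not the paper's.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # illustrative model choice
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def question_loglik(prompt: str, question: str) -> float:
    """Average log-likelihood of the question tokens given the prompt.
    Assumes the prompt's tokenization is a prefix of the concatenation
    (true here because each prompt ends with a newline)."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + question, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(full_ids).logits
    q_start = prompt_ids.shape[1]
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)   # predict token t+1 from t
    targets = full_ids[0, 1:]
    q_lp = logprobs[q_start - 1:].gather(1, targets[q_start - 1:, None])
    return q_lp.mean().item()

# Select the prompt under which the question is most likely.
prompts = ["Answer using the passage:\n", "You are a careful fact checker.\n"]
best = max(prompts, key=lambda p: question_loglik(p, "Who wrote Hamlet?"))
```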
arXiv Detail & Related papers (2024-11-12T13:14:09Z)
- FactLens: Benchmarking Fine-Grained Fact Verification [6.814173254027381]
We advocate for a shift toward fine-grained verification, where complex claims are broken down into smaller sub-claims for individual verification.
We introduce FactLens, a benchmark for evaluating fine-grained fact verification, with metrics and automated evaluators of sub-claim quality.
Our results show alignment between automated FactLens evaluators and human judgments, and we discuss the impact of sub-claim characteristics on the overall verification performance.
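A toy sketch of the decompose-then-verify pipeline described above; both `decompose` and `verify` are hypothetical placeholders for the LLM splitter and NLI/verification model a real system would use.

```python
from typing import List

def decompose(claim: str) -> List[str]:
    """Placeholder: a real system would use an LLM or parser to split claims."""
    return [part.strip() for part in claim.split(" and ")]

def verify(sub_claim: str, evidence: List[str]) -> bool:
    """Placeholder: a real system would score (sub-claim, evidence) with an NLI model."""
    return any(sub_claim.lower() in e.lower() for e in evidence)

def verify_fine_grained(claim: str, evidence: List[str]) -> bool:
    verdicts = [verify(s, evidence) for s in decompose(claim)]
    return all(verdicts)   # the claim holds only if every sub-claim is supported

evidence = ["Marie Curie won a Nobel Prize in Physics.",
            "Marie Curie won a Nobel Prize in Chemistry."]
print(verify_fine_grained(
    "Marie Curie won a Nobel Prize in Physics and "
    "Marie Curie won a Nobel Prize in Chemistry", evidence))
```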
arXiv Detail & Related papers (2024-11-08T21:26:57Z)
- Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches [2.0140898354987353]
Automated Fact-Checking (AFC) is the automated verification of claim accuracy.
AFC is crucial in discerning truth from misinformation, especially given the huge amounts of content generated online daily.
Current research focuses on predicting claim veracity through metadata analysis and language scrutiny.
arXiv Detail & Related papers (2024-07-09T01:54:13Z)
- Benchmarking the Generation of Fact Checking Explanations [19.363672064425504]
We focus on the generation of justifications (textual explanations of why a claim is classified as either true or false) and benchmark it with novel datasets and advanced baselines.
Results show that, in justification production, summarization benefits from the claim information.
Although cross-dataset experiments suffer from performance degradation, a single model trained on a combination of the two datasets retains style information efficiently.
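One simple way to inject claim information into justification summarization is to prepend the claim to the fact-checking article before summarizing; the snippet below sketches this with an off-the-shelf summarizer, which is an assumption about the set-up rather than the benchmark's exact models.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative: a generic pretrained summarizer, conditioned on the claim
# by concatenating it with the article text.
tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

claim = "Example claim text."
article = "Long fact-checking article discussing the claim..."
inputs = tok("claim: " + claim + " article: " + article,
             return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=96, num_beams=4)
justification = tok.decode(summary_ids[0], skip_special_tokens=True)
```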
arXiv Detail & Related papers (2023-08-29T10:40:46Z)
- Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization [56.94741578760294]
We propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary.
Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact.
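A rough sketch of fine-grained error-type prediction framed as multi-label classification over document and summary representations; the error taxonomy and architecture below are illustrative assumptions, not FineGrainFact's actual design.

```python
import torch
import torch.nn as nn

# Hypothetical error taxonomy: each type gets an independent sigmoid score,
# since a summary can exhibit several error types at once.
ERROR_TYPES = ["entity", "predicate", "circumstance", "coreference"]

class ErrorTypeClassifier(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.proj = nn.Linear(2 * hidden, hidden)
        self.head = nn.Linear(hidden, len(ERROR_TYPES))

    def forward(self, doc_vec, sum_vec):
        h = torch.relu(self.proj(torch.cat([doc_vec, sum_vec], dim=-1)))
        return torch.sigmoid(self.head(h))   # one probability per error type

probs = ErrorTypeClassifier()(torch.randn(1, 256), torch.randn(1, 256))
```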
arXiv Detail & Related papers (2023-05-23T22:11:47Z)
- ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning [63.77667876176978]
Large language models show improved downstream task interpretability when prompted to generate step-by-step reasoning to justify their final answers.
These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness is difficult.
We present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics.
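As a sketch of one such unsupervised score, the snippet below aligns each reasoning step to the source with sentence embeddings; the encoder choice and the exact aggregation are assumptions, not ROSCOE's metric definitions.

```python
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative encoder

def step_alignment(steps, source_sentences):
    """Mean best-match cosine similarity between each step and the source."""
    s = encoder.encode(steps, normalize_embeddings=True)
    src = encoder.encode(source_sentences, normalize_embeddings=True)
    sims = s @ src.T                        # cosine similarity (unit vectors)
    return float(sims.max(axis=1).mean())

score = step_alignment(
    ["The train covers 120 km in 2 hours.", "So its speed is 60 km/h."],
    ["A train travels 120 km.", "The trip takes 2 hours."])
```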
arXiv Detail & Related papers (2022-12-15T15:52:39Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take several aspects of the produced adversarial instances into account.
We present a novel framework for the generation of counterfactual examples.
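A toy scalarisation of the three competing aspects in the title (adversarial power, change intensity, plausibility); a real system would search the Pareto front with a multi-objective optimiser such as NSGA-II, and every quantity below is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 4))     # toy dataset
w_model = rng.normal(size=4)         # toy linear classifier weights

def score(x, x_cf, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three objectives (all proxies, for illustration)."""
    adversarial = float(np.sign(x_cf @ w_model) != np.sign(x @ w_model))
    intensity = -np.linalg.norm(x_cf - x)                      # prefer small edits
    plausibility = -np.linalg.norm(x_cf - data.mean(axis=0))   # crude density proxy
    a, b, c = weights
    return a * adversarial + b * intensity + c * plausibility

x = data[0]
candidates = x + 0.5 * rng.normal(size=(100, 4))
best = candidates[np.argmax([score(x, c) for c in candidates])]
```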
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- A Survey on Automated Fact-Checking [18.255327608480165]
We survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines.
We present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts.
arXiv Detail & Related papers (2021-08-26T16:34:51Z)
- Automated Concatenation of Embeddings for Structured Prediction [75.44925576268052]
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
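A toy REINFORCE loop in the spirit of ACE's controller: sample a binary mask over candidate embedding types and reward it with task accuracy. Here the accuracy is simulated, and the reward shape, baseline, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBEDDINGS = ["word", "char", "bert", "flair"]   # candidate embedding types
logits = np.zeros(len(EMBEDDINGS))               # controller parameters

def task_accuracy(mask):
    """Placeholder for training a task model on the chosen concatenation."""
    return 0.7 + 0.1 * mask[2] + 0.05 * mask[0] - 0.02 * mask.sum()

for step in range(200):
    probs = 1 / (1 + np.exp(-logits))            # independent Bernoulli per type
    mask = (rng.random(len(EMBEDDINGS)) < probs).astype(float)
    reward = task_accuracy(mask)
    # REINFORCE update with a fixed baseline: raise the probability of
    # choices that appear in higher-reward masks.
    logits += 0.5 * (reward - 0.75) * (mask - probs)

print({e: round(p, 2) for e, p in zip(EMBEDDINGS, 1 / (1 + np.exp(-logits)))})
```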
arXiv Detail & Related papers (2020-10-10T14:03:20Z)
- DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking [46.13738685855884]
We show that current fact-checking systems are vulnerable to three categories of realistic challenges.
We present a system designed to be resilient to these "attacks" using multiple pointer networks for document selection.
We find that in handling these attacks we obtain state-of-the-art results on FEVER, largely due to improved evidence retrieval.
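A minimal pointer-style document selector, scoring candidate documents against a claim vector with additive attention; this is a generic sketch of the pointing mechanism, not DeSePtion's full dual sequence prediction model.

```python
import torch
import torch.nn as nn

class DocumentPointer(nn.Module):
    """Minimal pointer-network step: 'point' to the most relevant document."""
    def __init__(self, hidden=128):
        super().__init__()
        self.W_claim = nn.Linear(hidden, hidden, bias=False)
        self.W_doc = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, claim_vec, doc_vecs):
        # Additive attention over candidates; output is a distribution over them.
        scores = self.v(torch.tanh(
            self.W_claim(claim_vec).unsqueeze(1) + self.W_doc(doc_vecs)))
        return torch.softmax(scores.squeeze(-1), dim=-1)

pointer = DocumentPointer()
probs = pointer(torch.randn(2, 128), torch.randn(2, 5, 128))  # 2 claims, 5 docs each
```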
arXiv Detail & Related papers (2020-04-27T15:18:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.