Generating Fact Checking Explanations
- URL: http://arxiv.org/abs/2004.05773v1
- Date: Mon, 13 Apr 2020 05:23:25 GMT
- Title: Generating Fact Checking Explanations
- Authors: Pepa Atanasova and Jakob Grue Simonsen and Christina Lioma and
Isabelle Augenstein
- Abstract summary: A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
- Score: 52.879658637466605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing work on automated fact checking is concerned with predicting
the veracity of claims based on metadata, social network spread, language used
in claims, and, more recently, evidence supporting or denying claims. A crucial
piece of the puzzle that is still missing is to understand how to automate the
most elaborate part of the process -- generating justifications for verdicts on
claims. This paper provides the first study of how these explanations can be
generated automatically based on available claim context, and how this task can
be modelled jointly with veracity prediction. Our results indicate that
optimising both objectives at the same time, rather than training them
separately, improves the performance of a fact checking system. The results of
a manual evaluation further suggest that the informativeness, coverage and
overall quality of the generated explanations are also improved in the
multi-task model.
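The joint set-up the abstract describes, optimising veracity prediction and explanation generation at the same time, can be sketched as a weighted sum of a classification loss and a generation loss over a shared model. The sketch below is a minimal illustration under assumed names: `veracity_loss`, `explanation_loss`, and the balancing weight `alpha` are hypothetical, not the paper's actual architecture or hyperparameters.

```python
import math

def veracity_loss(logits, gold_label):
    """Cross-entropy over veracity classes (e.g. true / false / half-true)."""
    m = max(logits)  # stabilise the log-sum-exp
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[gold_label]

def explanation_loss(token_logprobs):
    """Mean negative log-likelihood of the reference justification tokens."""
    return -sum(token_logprobs) / len(token_logprobs)

def joint_loss(logits, gold_label, token_logprobs, alpha=0.5):
    """Optimise both objectives at once rather than training them separately."""
    return (alpha * veracity_loss(logits, gold_label)
            + (1 - alpha) * explanation_loss(token_logprobs))
```

In a multi-task regime, gradients from `joint_loss` would update a shared encoder, so signal from explanation generation can inform veracity prediction and vice versa; `alpha` is simply an assumed balancing weight.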
Related papers
- Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches [2.0140898354987353]
Automated Fact-Checking (AFC) is the automated verification of claim accuracy.
AFC is crucial in discerning truth from misinformation, especially given the huge amounts of content generated online daily.
Current research focuses on predicting claim veracity through metadata analysis and language scrutiny.
arXiv Detail & Related papers (2024-07-09T01:54:13Z)
- Enhancing Retrieval-Augmented LMs with a Two-stage Consistency Learning Compressor [4.35807211471107]
This work proposes a novel two-stage consistency learning approach for retrieved information compression in retrieval-augmented language models.
The proposed method is empirically validated across multiple datasets, demonstrating notable enhancements in precision and efficiency for question-answering tasks.
arXiv Detail & Related papers (2024-06-04T12:43:23Z)
- Benchmarking the Generation of Fact Checking Explanations [19.363672064425504]
We focus on the generation of justifications (textual explanation of why a claim is classified as either true or false) and benchmark it with novel datasets and advanced baselines.
Results show that in justification production, summarization benefits from the claim information.
Although cross-dataset experiments suffer from performance degradation, a single model trained on a combination of the two datasets is able to retain style information efficiently.
arXiv Detail & Related papers (2023-08-29T10:40:46Z)
- Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization [56.94741578760294]
We propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary.
Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact.
arXiv Detail & Related papers (2023-05-23T22:11:47Z)
- ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning [63.77667876176978]
Large language models show improved downstream task performance when prompted to generate step-by-step reasoning to justify their final answers.
These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness is difficult.
We present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics.
arXiv Detail & Related papers (2022-12-15T15:52:39Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- A Survey on Automated Fact-Checking [18.255327608480165]
We survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines.
We present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts.
arXiv Detail & Related papers (2021-08-26T16:34:51Z)
- Automated Concatenation of Embeddings for Structured Prediction [75.44925576268052]
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
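The controller idea described above, using a reinforcement-learning signal to decide which embeddings to concatenate, can be sketched with REINFORCE over independent Bernoulli choices. Everything below is hypothetical (the candidate names, the stand-in `task_accuracy` reward, the hyperparameters); it illustrates the search strategy, not ACE's actual implementation.

```python
import math
import random

random.seed(0)

# Hypothetical candidate embedding types; a binary mask selects which
# ones are concatenated and fed to the task model.
CANDIDATES = ["word", "char", "bert", "flair"]

def task_accuracy(mask):
    """Stand-in for training a task model and measuring dev accuracy.
    This toy reward favours "bert" and "flair", with a small cost per
    extra embedding in the concatenation."""
    score = 0.5
    if mask[2]:
        score += 0.3   # "bert"
    if mask[3]:
        score += 0.1   # "flair"
    return score - 0.05 * sum(mask)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Controller: one logit per candidate, sampled as independent Bernoullis.
theta = [0.0] * len(CANDIDATES)
lr, baseline = 0.3, 0.0
for step in range(500):
    probs = [sigmoid(t) for t in theta]
    mask = [1 if random.random() < p else 0 for p in probs]
    reward = task_accuracy(mask)
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
    # REINFORCE: push logits toward choices that beat the baseline.
    for i, (m, p) in enumerate(zip(mask, probs)):
        theta[i] += lr * (reward - baseline) * (m - p)

chosen = [c for c, t in zip(CANDIDATES, theta) if t > 0]
```

After enough samples, the controller's logits concentrate on the candidates whose inclusion raises the reward, mirroring how ACE's controller learns which concatenation serves the structured-prediction task best.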
arXiv Detail & Related papers (2020-10-10T14:03:20Z)
- DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking [46.13738685855884]
We show that current systems for fact-checking are vulnerable to three categories of realistic challenges for fact-checking.
We present a system designed to be resilient to these "attacks" using multiple pointer networks for document selection.
We find that in handling these attacks we obtain state-of-the-art results on FEVER, largely due to improved evidence retrieval.
arXiv Detail & Related papers (2020-04-27T15:18:49Z)
- Enhancing Factual Consistency of Abstractive Summarization [57.67609672082137]
We propose a fact-aware summarization model FASum to extract and integrate factual relations into the summary generation process.
We then design a factual corrector model FC to automatically correct factual errors from summaries generated by existing systems.
arXiv Detail & Related papers (2020-03-19T07:36:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.