Generating Fluent Fact Checking Explanations with Unsupervised
Post-Editing
- URL: http://arxiv.org/abs/2112.06924v1
- Date: Mon, 13 Dec 2021 15:31:07 GMT
- Title: Generating Fluent Fact Checking Explanations with Unsupervised
Post-Editing
- Authors: Shailza Jolly, Pepa Atanasova, Isabelle Augenstein
- Abstract summary: We present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of ruling comments.
We show that our model generates explanations that are fluent, readable, non-redundant, and cover important information for the fact check.
- Score: 22.5444107755288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fact-checking systems have become important tools to verify fake and
misleading news. These systems become more trustworthy when human-readable
explanations accompany the veracity labels. However, manual collection of such
explanations is expensive and time-consuming. Recent works frame explanation
generation as extractive summarization, and propose to automatically select a
sufficient subset of the most important facts from the ruling comments (RCs) of
a professional journalist to obtain fact-checking explanations. However, these
explanations lack fluency and sentence coherence. In this work, we present an
iterative edit-based algorithm that uses only phrase-level edits to perform
unsupervised post-editing of disconnected RCs. To regulate our editing
algorithm, we use a scoring function with components including fluency and
semantic preservation. In addition, we show the applicability of our approach
in a completely unsupervised setting. We experiment with two benchmark
datasets, LIAR-PLUS and PubHealth. We show that our model generates
explanations that are fluent, readable, non-redundant, and cover important
information for the fact check.
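As a rough illustration of the approach described in the abstract, the sketch below implements a score-guided loop that proposes phrase-level edits to the extracted ruling comments and keeps an edit only if a scoring function combining fluency and semantic preservation improves. It is a minimal sketch, not the authors' code: the edit operations and both toy scorers are illustrative stand-ins for the model-based components a real system would use.

```python
import random

def candidate_edits(tokens):
    """Yield neighbouring variants produced by small phrase-level edits
    (span deletion and adjacent reordering stand in for the paper's operations)."""
    for i in range(len(tokens) - 1):
        yield tokens[:i] + tokens[i + 2:]                               # delete a short span
        yield tokens[:i] + [tokens[i + 1], tokens[i]] + tokens[i + 2:]  # reorder neighbours

def fluency(tokens):
    """Toy fluency proxy; a real scorer would use a language model."""
    return -abs(len(tokens) - 40) / 40.0

def semantic_preservation(tokens, source_tokens):
    """Toy semantic-preservation proxy; a real scorer would compare embeddings
    of the edited text with the original ruling comments."""
    src = set(source_tokens)
    return len(set(tokens) & src) / max(len(src), 1)

def score(tokens, source_tokens, alpha=1.0, beta=2.0):
    """Scoring function with fluency and semantic-preservation components."""
    return alpha * fluency(tokens) + beta * semantic_preservation(tokens, source_tokens)

def post_edit(ruling_comments, n_iter=500, seed=0):
    """Unsupervised, iterative edit-based post-editing of disconnected RCs:
    propose a phrase-level edit, keep it only if the overall score improves."""
    random.seed(seed)
    source = " ".join(ruling_comments).split()
    current, best = list(source), score(source, source)
    for _ in range(n_iter):
        candidates = list(candidate_edits(current))
        if not candidates:
            break
        cand = random.choice(candidates)
        cand_score = score(cand, source)
        if cand_score > best:
            current, best = cand, cand_score
    return " ".join(current)

print(post_edit(["The claim cites a 2015 study.", "That study was later retracted."]))
```

The hill-climbing acceptance rule here is only one possible search strategy; the key idea is that editing is guided entirely by the unsupervised scoring function rather than by labelled post-edited explanations.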
Related papers
- Analysing Zero-Shot Readability-Controlled Sentence Simplification [54.09069745799918]
We investigate how different types of contextual information affect a model's ability to generate sentences with the desired readability.
Results show that all tested models struggle to simplify sentences due to models' limitations and characteristics of the source sentences.
Our experiments also highlight the need for better automatic evaluation metrics tailored to readability-controlled text simplification (RCTS).
arXiv Detail & Related papers (2024-09-30T12:36:25Z)
- Evaluating Evidence Attribution in Generated Fact Checking Explanations [48.776087871960584]
We introduce a novel evaluation protocol, citation masking and recovery, to assess attribution quality in generated explanations.
Experiments reveal that the best-performing LLMs still generate explanations with inaccurate attributions.
Human-curated evidence is essential for generating better explanations.
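A rough sketch of what citation masking and recovery could look like, assuming bracketed numeric citation markers that index into an evidence list and a pluggable recovery function; the paper's actual protocol and scoring may differ.

```python
import re

CITATION = re.compile(r"\[(\d+)\]")  # assumed marker format: "[k]" indexes evidence[k]

def mask_citations(explanation):
    """Replace each citation marker with [MASK] and keep the gold evidence ids."""
    gold = [int(m.group(1)) for m in CITATION.finditer(explanation)]
    return CITATION.sub("[MASK]", explanation), gold

def recovery_accuracy(explanation, evidence, recover_fn):
    """Citation masking and recovery: an attribution counts as correct when the
    model re-selects the originally cited evidence for a masked slot."""
    masked, gold = mask_citations(explanation)
    predicted = recover_fn(masked, evidence)  # e.g. prompt an LLM to fill each [MASK]
    return sum(p == g for p, g in zip(predicted, gold)) / max(len(gold), 1)

def overlap_recover(masked, evidence):
    """Toy recoverer standing in for an LLM: pick the evidence with the largest
    word overlap with each sentence that contains a masked citation."""
    preds = []
    for sent in masked.split("."):
        if "[MASK]" in sent:
            words = set(sent.lower().split())
            preds.append(max(range(len(evidence)),
                             key=lambda i: len(words & set(evidence[i].lower().split()))))
    return preds

explanation = "The trial enrolled 40,000 people [0]. Results were peer reviewed [1]."
evidence = ["The trial enrolled roughly 40,000 participants.",
            "The results appeared in a peer reviewed journal."]
print(recovery_accuracy(explanation, evidence, overlap_recover))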
arXiv Detail & Related papers (2024-06-18T14:13:13Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning [40.5830891229718]
We propose a weakly supervised framework that aggregates multiple resources to train a precise and efficient factual metric, namely WeCheck.
Comprehensive experiments on a variety of tasks demonstrate the strong performance of WeCheck, which achieves a 3.4% absolute improvement over previous state-of-the-art methods on the TRUE benchmark on average.
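A minimal sketch of the weak-supervision idea, assuming a set of existing factuality scorers whose outputs are averaged into a weak label for training the metric; WeCheck's actual aggregation is likely more sophisticated (e.g. a learned labelling model).

```python
from statistics import mean

def weak_label(summary, document, scorers, threshold=0.5):
    """Aggregate noisy judgements from several existing resources (e.g. NLI- or
    QA-based factuality scorers) into one weak consistency label; plain averaging
    here, purely for illustration."""
    return int(mean(s(summary, document) for s in scorers) >= threshold)

def build_weak_training_set(pairs, scorers):
    """Weakly labelled (summary, document, label) triples used to train the metric."""
    return [(s, d, weak_label(s, d, scorers)) for s, d in pairs]
```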
arXiv Detail & Related papers (2022-12-20T08:04:36Z)
- Grounded Keys-to-Text Generation: Towards Factual Open-Ended Generation [92.1582872870226]
We propose a new grounded keys-to-text generation task.
The task is to generate a factual description of an entity given a set of guiding keys and grounding passages.
Inspired by recent QA-based evaluation measures, we propose an automatic metric, MAFE, for factual correctness of generated descriptions.
arXiv Detail & Related papers (2022-12-04T23:59:41Z)
- Factual Error Correction for Abstractive Summaries Using Entity Retrieval [57.01193722520597]
We propose RFEC, an efficient factual error correction system based on an entity-retrieval post-editing process.
RFEC retrieves the evidence sentences from the original document by comparing the sentences with the target summary.
Next, RFEC detects the entity-level errors in the summaries by considering the evidence sentences and substitutes the wrong entities with the accurate entities from the evidence sentences.
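A hedged sketch of the pipeline described above, with word-overlap retrieval and a generic `extract_entities` NER function (returning surface-form/type pairs) as stand-ins for RFEC's actual components.

```python
def retrieve_evidence(summary_sentence, document_sentences, top_k=3):
    """Step 1: retrieve evidence sentences by comparing them with the target summary
    (word-overlap similarity here; RFEC's retriever may differ)."""
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    ranked = sorted(document_sentences,
                    key=lambda s: overlap(summary_sentence, s), reverse=True)
    return ranked[:top_k]

def correct_entities(summary_sentence, evidence_sentences, extract_entities):
    """Steps 2-3: flag summary entities that do not appear in the evidence and
    substitute them with a same-type entity found in the evidence.
    `extract_entities` is an assumed NER function returning (surface, type) pairs."""
    evidence_entities = {pair for sent in evidence_sentences
                         for pair in extract_entities(sent)}
    corrected = summary_sentence
    for ent, ent_type in extract_entities(summary_sentence):
        if (ent, ent_type) not in evidence_entities:
            candidates = [e for e, t in evidence_entities if t == ent_type]
            if candidates:
                corrected = corrected.replace(ent, candidates[0])
    return corrected
```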
arXiv Detail & Related papers (2022-04-18T11:35:02Z)
- Assessing Effectiveness of Using Internal Signals for Check-Worthy Claim Identification in Unlabeled Data for Automated Fact-Checking [6.193231258199234]
This paper explores methods to identify check-worthy claim sentences in fake news articles.
We leverage two internal supervisory signals, the headline and the abstractive summary, to rank the sentences.
We show that while the headline has more gisting similarity with how a fact-checking website writes a claim, the summary-based pipeline is the most promising for an end-to-end fact-checking system.
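A small sketch of ranking article sentences against an internal supervisory signal (the headline, or an abstractive summary of the article), using bag-of-words cosine similarity as a stand-in for the paper's similarity measure; the variable names in the usage comment are illustrative.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Bag-of-words cosine similarity (a stand-in for the paper's similarity measure)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def rank_by_signal(article_sentences, signal_text):
    """Rank article sentences by similarity to an internal supervisory signal."""
    return sorted(article_sentences, key=lambda s: cosine(s, signal_text), reverse=True)

# Two candidate pipelines over the same article:
# headline_ranked = rank_by_signal(sentences, headline)
# summary_ranked  = rank_by_signal(sentences, abstractive_summary)
```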
arXiv Detail & Related papers (2021-11-02T16:17:20Z)
- Factual Error Correction of Claims [18.52583883901634]
This paper introduces the task of factual error correction.
It provides a mechanism to correct written texts that contain misinformation.
It acts as an inherent explanation for claims already partially supported by evidence.
arXiv Detail & Related papers (2020-12-31T18:11:26Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating justifications for the verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
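A minimal PyTorch sketch of optimising both objectives at once: a shared encoder feeds a veracity-classification head and a per-sentence explanation-extraction head, trained with a single weighted loss. The architecture, feature dimensions, and loss weighting are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class JointFactCheck(nn.Module):
    """Stand-in joint model: shared encoder, veracity head, extraction head."""
    def __init__(self, feat_dim=300, hidden=128, n_labels=6):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)   # stand-in for a pretrained encoder
        self.veracity_head = nn.Linear(hidden, n_labels)
        self.explain_head = nn.Linear(hidden, 1)     # extraction score per RC sentence

    def forward(self, sent_feats):                   # sent_feats: [n_sentences, feat_dim]
        h = torch.relu(self.encoder(sent_feats))
        veracity_logits = self.veracity_head(h.mean(dim=0))   # claim-level label
        explain_logits = self.explain_head(h).squeeze(-1)     # which sentences to keep
        return veracity_logits, explain_logits

def joint_loss(veracity_logits, label, explain_logits, sent_targets, lam=0.5):
    """Optimise both objectives at the same time with one weighted loss."""
    l_ver = nn.functional.cross_entropy(veracity_logits.unsqueeze(0), label.unsqueeze(0))
    l_exp = nn.functional.binary_cross_entropy_with_logits(explain_logits, sent_targets)
    return l_ver + lam * l_exp
```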
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.