Factual Error Correction of Claims
- URL: http://arxiv.org/abs/2012.15788v1
- Date: Thu, 31 Dec 2020 18:11:26 GMT
- Title: Factual Error Correction of Claims
- Authors: James Thorne, Andreas Vlachos
- Abstract summary: This paper introduces the task of factual error correction.
It provides a mechanism to correct written texts that contain misinformation.
It acts as an inherent explanation for claims already partially supported by evidence.
- Score: 18.52583883901634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the task of factual error correction: performing edits
to a claim so that the generated rewrite is supported by evidence. This serves
two purposes: firstly this provides a mechanism to correct written texts that
contain misinformation, and secondly, this acts as an inherent explanation for
claims already partially supported by evidence. We demonstrate that factual
error correction is possible without the need for any additional training data
using distant-supervision and retrieved evidence. We release a dataset of
65,000 instances, based on a recent fact verification dataset, to compare our
distantly-supervised method to a fully supervised ceiling system. Our manual
evaluation indicates which automated evaluation metrics best correlate with
human judgements of factuality and whether errors were actually corrected.
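The abstract's distantly-supervised setup can be pictured as a two-step mask-and-fill procedure: mask claim tokens unsupported by the retrieved evidence, then fill the masks from the evidence. The sketch below is illustrative only; the token-overlap masker and proper-noun filler are toy stand-ins for the learned components, and none of the function names come from the paper.

```python
# Illustrative mask-then-fill corrector (toy heuristics, not the
# authors' implementation): mask claim tokens that the evidence does
# not support, then fill the masks from the evidence itself.

def mask_unsupported_tokens(claim: str, evidence: str) -> list[str]:
    """Mask claim tokens that never appear in the evidence."""
    evidence_vocab = set(evidence.lower().split())
    return [tok if tok.lower() in evidence_vocab else "[MASK]"
            for tok in claim.split()]

def fill_masks(masked: list[str], evidence: str) -> str:
    """Fill each mask with an evidence proper noun not already in the
    claim. A real system would use a sequence-to-sequence model
    conditioned on the retrieved evidence here."""
    candidates = [t for t in evidence.split()
                  if t[0].isupper() and t not in masked]
    out = []
    for tok in masked:
        if tok == "[MASK]" and candidates:
            out.append(candidates.pop(0))
        else:
            out.append(tok)
    return " ".join(out)

claim = "Paris is the capital of Germany"
evidence = "Paris is the capital and largest city of France"
masked = mask_unsupported_tokens(claim, evidence)
print(fill_masks(masked, evidence))  # Paris is the capital of France
```

The key property distant supervision exploits is that no gold corrections are needed: the supervision signal comes entirely from agreement between the claim and the retrieved evidence.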
Related papers
- Fact Checking Beyond Training Set [64.88575826304024]
We show that a retriever-reader model suffers performance deterioration when it is trained on labeled data from one domain and used in another.
We propose an adversarial algorithm to make the retriever component robust against distribution shift.
We then construct eight fact checking scenarios from these datasets, and compare our model to a set of strong baseline models.
arXiv Detail & Related papers (2024-03-27T15:15:14Z)
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead to retrieve evidence and verify claims.
The proposed system is able to achieve significant improvements upon best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- WiCE: Real-World Entailment for Claims in Wikipedia [63.234352061821625]
We propose WiCE, a new fine-grained textual entailment dataset built on natural claim and evidence pairs extracted from Wikipedia.
In addition to standard claim-level entailment, WiCE provides entailment judgments over sub-sentence units of the claim.
We show that real claims in our dataset involve challenging verification and retrieval problems that existing models fail to address.
arXiv Detail & Related papers (2023-03-02T17:45:32Z)
- Factual Error Correction for Abstractive Summaries Using Entity Retrieval [57.01193722520597]
We propose RFEC, an efficient factual error correction system based on an entity-retrieval post-editing process.
RFEC retrieves the evidence sentences from the original document by comparing the sentences with the target summary.
Next, RFEC detects the entity-level errors in the summaries by considering the evidence sentences and substitutes the wrong entities with the accurate entities from the evidence sentences.
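The retrieve-then-substitute pipeline described above can be sketched in a few lines. This is a deliberately crude illustration, not RFEC itself: token overlap stands in for the retrieval model, and capitalized tokens stand in for the learned entity detector.

```python
# Hedged sketch of the entity-substitution idea: retrieve the closest
# evidence sentence by token overlap, then replace summary entities
# that contradict it. Capitalized-token "entity detection" is a toy
# stand-in for RFEC's learned components.

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercased tokens."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def retrieve_evidence(summary: str, document: list[str]) -> str:
    """Pick the document sentence most similar to the summary."""
    return max(document, key=lambda s: token_overlap(summary, s))

def correct_entities(summary: str, evidence: str) -> str:
    """Replace capitalized summary tokens absent from the evidence
    with an unused capitalized token from the evidence."""
    ev_entities = [t for t in evidence.split() if t[0].isupper()]
    out = []
    for tok in summary.split():
        if tok[0].isupper() and tok not in ev_entities and ev_entities:
            # choose an evidence entity not already present in the summary
            repl = next((e for e in ev_entities
                         if e not in summary.split()), tok)
            out.append(repl)
        else:
            out.append(tok)
    return " ".join(out)

doc = ["Marie Curie won the Nobel Prize in 1903.",
       "She was born in Warsaw."]
summary = "Irene Curie won the Nobel Prize in 1903."
ev = retrieve_evidence(summary, doc)
print(correct_entities(summary, ev))  # Marie Curie won the Nobel Prize in 1903.
```

Restricting edits to entity substitution is what makes the approach efficient: the rest of the summary is left untouched, so fluency is preserved by construction.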
arXiv Detail & Related papers (2022-04-18T11:35:02Z)
- Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document [27.076320857009655]
Given an input document, the task is to detect all sentences containing a claim that can be verified against previously fact-checked claims.
The output is a re-ranked list of the document sentences, so that those that can be verified are ranked as high as possible.
Our analysis demonstrates the importance of modeling text similarity and stance, while also taking into account the veracity of the retrieved previously fact-checked claims.
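The re-ranking setup above reduces to scoring each document sentence against a store of previously fact-checked claims and sorting by the best match. A minimal sketch follows; the Jaccard scorer is an illustrative stand-in for the similarity and stance models the paper actually combines.

```python
# Hedged sketch of re-ranking document sentences by their best match
# to any previously fact-checked claim. Jaccard similarity stands in
# for the paper's learned similarity/stance scorers.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over lowercased tokens."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def rerank_sentences(document: list[str],
                     fact_checked: list[str]) -> list[str]:
    """Rank sentences so those matching a fact-checked claim come first."""
    def best_score(sent: str) -> float:
        return max(jaccard(sent, claim) for claim in fact_checked)
    return sorted(document, key=best_score, reverse=True)

doc = ["The weather was pleasant that day.",
       "Vaccines cause autism in children.",
       "The mayor attended the opening."]
checked = ["Claims that vaccines cause autism have been debunked."]
print(rerank_sentences(doc, checked)[0])  # Vaccines cause autism in children.
```

In a real assistant, the ranked list would be shown to a human fact-checker, so ranking quality at the top of the list matters more than full-list ordering.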
arXiv Detail & Related papers (2021-09-14T13:46:52Z)
- Evidence-based Factual Error Correction [18.52583883901634]
This paper introduces the task of factual error correction.
It provides a mechanism to correct written texts that are refuted or only partially supported by evidence.
arXiv Detail & Related papers (2021-06-02T11:00:17Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial missing piece of the puzzle is understanding how to automate the most elaborate part of the process: generating explanations for the verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
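Optimising both objectives at once amounts to minimising a single combined scalar instead of two separate losses. A minimal sketch of that idea follows; the equal 0.5/0.5 weighting is an illustrative assumption, not a value from the paper.

```python
# Minimal sketch of joint optimisation: one scalar objective combining
# the veracity-prediction loss and the explanation-generation loss.
# The default alpha = 0.5 weighting is an illustrative assumption.

def joint_loss(veracity_loss: float, explanation_loss: float,
               alpha: float = 0.5) -> float:
    """Weighted sum of both task losses, minimised in a single step."""
    return alpha * veracity_loss + (1.0 - alpha) * explanation_loss

print(joint_loss(0.8, 0.4))
```

Training on the combined loss lets gradients from the explanation objective shape the representations used for veracity prediction, which is one plausible reading of why joint training outperforms training the objectives separately.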
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.