Factual Error Correction of Claims
- URL: http://arxiv.org/abs/2012.15788v1
- Date: Thu, 31 Dec 2020 18:11:26 GMT
- Title: Factual Error Correction of Claims
- Authors: James Thorne, Andreas Vlachos
- Abstract summary: This paper introduces the task of factual error correction.
It provides a mechanism to correct written texts that contain misinformation.
It acts as an inherent explanation for claims already partially supported by evidence.
- Score: 18.52583883901634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the task of factual error correction: performing edits
to a claim so that the generated rewrite is supported by evidence. This serves
two purposes: firstly this provides a mechanism to correct written texts that
contain misinformation, and secondly, this acts as an inherent explanation for
claims already partially supported by evidence. We demonstrate that factual
error correction is possible without the need for any additional training data,
using distant supervision and retrieved evidence. We release a dataset of
65,000 instances, based on a recent fact verification dataset, to compare our
distantly-supervised method to a fully supervised ceiling system. Our manual
evaluation indicates which automated evaluation metrics best correlate with
human judgements of factuality and whether errors were actually corrected.
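As a rough illustration of the mask-then-infill recipe the abstract describes, the sketch below masks the weakly supported span of a claim and asks a pretrained seq2seq model to infill it conditioned on retrieved evidence. The choice of T5, the input format, and the upstream masking step are all illustrative assumptions, not the paper's exact pipeline.

```python
# A minimal sketch of the mask-then-infill idea, assuming the weakly
# supported span has already been masked by an upstream masker and that
# evidence has been retrieved. T5 and the input format are illustrative.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def correct_claim(masked_claim: str, evidence: str) -> str:
    """Predict an infill for the masked span, conditioned on evidence."""
    source = f"{masked_claim} context: {evidence}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=16, num_beams=4)
    # With an off-the-shelf T5 this returns only the predicted filler
    # text; a fine-tuned corrector would emit the full rewritten claim.
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(correct_claim(
    "The Eiffel Tower is located in <extra_id_0>.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))
```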
Related papers
- Fact Checking Beyond Training Set [64.88575826304024]
We show that the retriever-reader suffers from performance deterioration when it is trained on labeled data from one domain and used in another domain.
We propose an adversarial algorithm to make the retriever component robust against distribution shift.
We then construct eight fact checking scenarios from existing fact checking datasets and compare our model to a set of strong baseline models.
arXiv Detail & Related papers (2024-03-27T15:15:14Z)
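The abstract does not spell out the adversarial algorithm, so the snippet below shows one standard way to pursue the same goal: domain-adversarial training with a gradient reversal layer, which pushes the retriever's encoder toward domain-invariant features. Treat it as a generic sketch under that assumption, not the paper's method.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradients push the shared encoder toward features the
        # domain classifier cannot separate, i.e. domain-invariant ones.
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(768, 256), nn.ReLU())  # shared retriever encoder
domain_head = nn.Linear(256, 2)                          # source vs. target domain

def domain_adversarial_loss(claim_embeddings, domain_labels, lam=0.1):
    features = encoder(claim_embeddings)
    reversed_feats = GradReverse.apply(features, lam)
    return nn.functional.cross_entropy(domain_head(reversed_feats), domain_labels)
```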
- GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence [64.95492752484171]
We present GenAudit, a tool intended to assist fact-checking of LLM responses for document-grounded tasks.
We train models to suggest edits and extract supporting evidence, and design an interactive interface to present the suggested edits and evidence to users.
To ensure that most errors are flagged by the system, we propose a method that can increase the error recall while minimizing impact on precision.
arXiv Detail & Related papers (2024-02-19T21:45:55Z)
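GenAudit's abstract only states the goal of raising error recall at a bounded precision cost. A generic way to realise that trade-off, sketched below with hypothetical names, is to sweep the error detector's decision threshold on a dev set and keep the lowest threshold whose precision stays above a chosen floor.

```python
import numpy as np

def pick_threshold(scores: np.ndarray, labels: np.ndarray,
                   min_precision: float = 0.7) -> float:
    """Lowest threshold whose dev-set precision stays above the floor.

    scores: per-claim error probabilities; labels: 1 marks a real error.
    Lowering the threshold flags more candidates, trading precision for recall.
    """
    best = 1.0
    for t in np.unique(scores):
        flagged = scores >= t
        if flagged.sum() == 0:
            continue
        precision = (flagged & (labels == 1)).sum() / flagged.sum()
        if precision >= min_precision:
            best = min(best, float(t))
    return best
```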
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
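To make the full-text-as-evidence idea concrete, here is a minimal sentence-selection sketch: score every sentence of the retrieved source documents against the claim and keep the top-k. TF-IDF cosine similarity is an illustrative stand-in for the paper's latent retrieval component.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_evidence(claim: str, document_sentences: list[str], k: int = 3) -> list[str]:
    """Rank every sentence of the retrieved documents against the claim."""
    vec = TfidfVectorizer().fit([claim] + document_sentences)
    sims = cosine_similarity(vec.transform([claim]),
                             vec.transform(document_sentences))[0]
    ranked = sorted(zip(sims, document_sentences), reverse=True)
    return [sentence for _, sentence in ranked[:k]]
```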
- WiCE: Real-World Entailment for Claims in Wikipedia [63.234352061821625]
We propose WiCE, a new fine-grained textual entailment dataset built on natural claim and evidence pairs extracted from Wikipedia.
In addition to standard claim-level entailment, WiCE provides entailment judgments over sub-sentence units of the claim.
We show that real claims in our dataset involve challenging verification and retrieval problems that existing models fail to address.
arXiv Detail & Related papers (2023-03-02T17:45:32Z)
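A minimal way to reproduce the sub-sentence judgement setting WiCE describes is to run an off-the-shelf NLI model over each sub-claim paired with the evidence, as below. The decomposition into sub-claims is assumed to be given, and roberta-large-mnli is an illustrative model choice rather than the paper's setup.

```python
from transformers import pipeline

# roberta-large-mnli is an illustrative off-the-shelf NLI model.
nli = pipeline("text-classification", model="roberta-large-mnli")

def subclaim_entailment(evidence: str, subclaims: list[str]) -> list[str]:
    """One ENTAILMENT/NEUTRAL/CONTRADICTION label per sub-claim."""
    return [
        nli([{"text": evidence, "text_pair": subclaim}])[0]["label"]
        for subclaim in subclaims
    ]
```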
- Factual Error Correction for Abstractive Summaries Using Entity Retrieval [57.01193722520597]
We propose RFEC, an efficient factual error correction system based on an entity retrieval and post-editing process.
RFEC first retrieves evidence sentences from the original document by comparing them with the sentences of the target summary.
Next, RFEC detects entity-level errors in the summary against the evidence sentences and substitutes the wrong entities with the correct entities from the evidence, as sketched after this entry.
arXiv Detail & Related papers (2022-04-18T11:35:02Z)
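The RFEC pipeline described above maps naturally onto a small entity-substitution sketch: flag summary entities that never occur in the evidence and replace them with an evidence entity of the same type. spaCy NER is an illustrative stand-in for RFEC's own detector and retriever.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def fix_entity_errors(summary: str, evidence: str) -> str:
    """Replace summary entities unsupported by the evidence, type for type."""
    evidence_ents = {(ent.label_, ent.text) for ent in nlp(evidence).ents}
    corrected = summary
    for ent in nlp(summary).ents:
        if (ent.label_, ent.text) not in evidence_ents:
            # Naive choice: the first evidence entity with a matching type.
            candidates = [text for label, text in evidence_ents
                          if label == ent.label_]
            if candidates:
                corrected = corrected.replace(ent.text, candidates[0])
    return corrected
```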
- Evidence-based Factual Error Correction [18.52583883901634]
This paper introduces the task of factual error correction.
It provides a mechanism to correct written texts that are refuted or only partially supported by evidence.
arXiv Detail & Related papers (2021-06-02T11:00:17Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the fact checking process: generating justifications for verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
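The joint-optimisation finding lends itself to a one-function sketch: combine the explanation-generation loss and the veracity-classification loss into a single weighted objective instead of training the two heads separately. The model internals and the weighting below are placeholders, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def joint_loss(explanation_logits, explanation_targets,
               veracity_logits, veracity_label, alpha=0.5):
    """Weighted sum of explanation-generation and veracity losses."""
    gen_loss = F.cross_entropy(
        explanation_logits.view(-1, explanation_logits.size(-1)),
        explanation_targets.view(-1),
    )
    cls_loss = F.cross_entropy(veracity_logits, veracity_label)
    return alpha * gen_loss + (1 - alpha) * cls_loss
```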