Explainable Automated Fact-Checking for Public Health Claims
- URL: http://arxiv.org/abs/2010.09926v1
- Date: Mon, 19 Oct 2020 23:51:33 GMT
- Title: Explainable Automated Fact-Checking for Public Health Claims
- Authors: Neema Kotonya and Francesca Toni
- Abstract summary: We present the first study of explainable fact-checking for claims which require specific expertise.
For our case study we choose the setting of public health.
We explore two tasks: veracity prediction and explanation generation.
- Score: 11.529816799331979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fact-checking is the task of verifying the veracity of claims by assessing
their assertions against credible evidence. The vast majority of fact-checking
studies focus exclusively on political claims. Very little research explores
fact-checking for other topics, specifically subject matters for which
expertise is required. We present the first study of explainable fact-checking
for claims which require specific expertise. For our case study we choose the
setting of public health. To support this case study we construct a new dataset,
PUBHEALTH, of 11.8K claims accompanied by journalist-crafted, gold-standard
explanations (i.e., judgments) to support the fact-check labels for claims. We
explore two tasks: veracity prediction and explanation generation. We also
define and evaluate, with humans and computationally, three coherence
properties of explanation quality. Our results indicate that, by training on
in-domain data, gains can be made in explainable, automated fact-checking for
claims which require specific expertise.
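The summary above describes two tasks but no implementation; a minimal sketch of the veracity-prediction task might look like the following, assuming the PUBHEALTH split is available as local TSV files with "claim", "main_text", and "label" columns. The file paths, column names, label set, base model, and hyperparameters here are illustrative assumptions, not the paper's exact setup.
```python
# Minimal sketch of PUBHEALTH-style veracity prediction (not the authors' code).
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["true", "false", "mixture", "unproven"]  # assumed four-way verdict set
LABEL2ID = {name: i for i, name in enumerate(LABELS)}

class PubHealthDataset(Dataset):
    """Encodes each (claim, evidence text) pair for sequence classification."""
    def __init__(self, path, tokenizer, max_len=512):
        df = pd.read_csv(path, sep="\t").dropna(subset=["claim", "main_text", "label"])
        df = df[df["label"].isin(LABELS)]
        self.encodings = tokenizer(df["claim"].tolist(), df["main_text"].tolist(),
                                   truncation=True, padding="max_length",
                                   max_length=max_len)
        self.labels = [LABEL2ID[lab] for lab in df["label"]]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=len(LABELS))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pubhealth_veracity",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=PubHealthDataset("train.tsv", tokenizer),
)
trainer.train()
```
Explanation generation would be a separate sequence-to-sequence model over the journalist-written judgments; it is not shown here.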
Related papers
- Robust Claim Verification Through Fact Detection [17.29665711917281]
Our novel approach, FactDetect, leverages Large Language Models (LLMs) to generate concise factual statements from evidence.
The generated facts are then combined with the claim and evidence.
Our method demonstrates competitive results in supervised claim verification, improving the F1 score by 15%.
arXiv Detail & Related papers (2024-07-25T20:03:43Z)
- AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators [38.523194864405326]
AFaCTA is a novel framework that assists in the annotation of factual claims.
AFaCTA calibrates its annotation confidence with consistency along three predefined reasoning paths.
Our analyses also result in PoliClaim, a comprehensive claim detection dataset spanning diverse political topics.
arXiv Detail & Related papers (2024-02-16T20:59:57Z)
- What Makes Medical Claims (Un)Verifiable? Analyzing Entity and Relation Properties for Fact Verification [8.086400003948143]
The BEAR-Fact corpus is the first corpus for scientific fact verification annotated with subject-relation-object triplets, evidence documents, and fact-checking verdicts.
We show that it is possible to reliably estimate the success of evidence retrieval purely from the claim text.
The dataset is available at http://www.ims.uni-stuttgart.de/data/bioclaim.
arXiv Detail & Related papers (2024-02-02T12:27:58Z)
- Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification [80.31112722910787]
We propose Decker, a commonsense fact verification model that is capable of bridging heterogeneous knowledge.
Experimental results on two commonsense fact verification benchmarks, CSQA2.0 and CREAK, demonstrate the effectiveness of Decker.
arXiv Detail & Related papers (2023-05-10T06:28:16Z)
- Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation [67.69725605939315]
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Generating Scientific Claims for Zero-Shot Scientific Fact Checking [54.62086027306609]
Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data.
We propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences.
We also demonstrate its usefulness in zero-shot fact checking for biomedical claims.
arXiv Detail & Related papers (2022-03-24T11:29:20Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage the perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
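The perplexity-based approach in the last entry lends itself to a compact sketch: retrieve the evidence sentences most similar to the claim, prime a causal language model with them, and score the claim by its conditional perplexity, with high perplexity suggesting a poor fit to the evidence. The models, the cosine-similarity retrieval, and the example sentences below are assumptions for illustration, not the original paper's exact setup.
```python
# Hedged sketch: evidence-conditioned claim perplexity (not the paper's code).
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

retriever = SentenceTransformer("all-MiniLM-L6-v2")   # assumed retrieval model
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()   # assumed scoring model

def top_evidence(claim, sentences, k=3):
    """Rank candidate evidence sentences by cosine similarity to the claim."""
    scores = util.cos_sim(retriever.encode(claim, convert_to_tensor=True),
                          retriever.encode(sentences, convert_to_tensor=True))[0]
    return [sentences[i] for i in scores.topk(min(k, len(sentences))).indices]

@torch.no_grad()
def claim_perplexity(claim, evidence_sentences):
    """Perplexity of the claim tokens, conditioned on the evidence prefix."""
    evidence_ids = tokenizer(" ".join(evidence_sentences) + "\n",
                             return_tensors="pt").input_ids
    claim_ids = tokenizer(claim, return_tensors="pt").input_ids
    input_ids = torch.cat([evidence_ids, claim_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : evidence_ids.shape[1]] = -100      # score only the claim tokens
    loss = lm(input_ids, labels=labels).loss       # mean NLL over claim tokens
    return torch.exp(loss).item()

# Usage: a higher score suggests the claim fits the retrieved evidence poorly.
sources = ["Vitamin C does not prevent the common cold in the general population.",
           "Regular hand washing reduces the spread of respiratory infections."]
claim = "Vitamin C cures the common cold."
print(f"perplexity = {claim_perplexity(claim, top_evidence(claim, sources)):.1f}")
```
Turning such scores into verdicts still requires choosing a threshold, which the summary above does not specify.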
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.