Leveraging Social Discourse to Measure Check-worthiness of Claims for Fact-checking
- URL: http://arxiv.org/abs/2309.09274v1
- Date: Sun, 17 Sep 2023 13:42:41 GMT
- Title: Leveraging Social Discourse to Measure Check-worthiness of Claims for Fact-checking
- Authors: Megha Sundriyal, Md Shad Akhtar, Tanmoy Chakraborty
- Abstract summary: We present CheckIt, a large, manually annotated Twitter dataset for fine-grained claim check-worthiness.
On it, we benchmark CheckMate, a unified approach that jointly determines whether a claim is check-worthy and the factors behind that conclusion.
- Score: 36.21314290592325
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The expansion of online social media platforms has led to a surge
in online content consumption. However, it has also paved the way for
disseminating false claims and misinformation. As a result, there is an
escalating demand for a substantial workforce to sift through and validate
such unverified claims. Currently, these claims are verified manually by
fact-checkers, but the volume of online content far outstrips their capacity,
making it difficult to validate every claim in a timely manner. It is
therefore critical to determine which assertions are worth fact-checking and
to prioritize claims that require immediate attention. Multiple factors
contribute to this decision, including a claim's factual correctness, its
potential impact on the public, its probability of inciting hatred, and more.
Despite several efforts to address claim check-worthiness, a systematic
approach to identifying these factors remains an open challenge. To this end,
we introduce a new task of fine-grained claim check-worthiness, which
encompasses all of these factors and provides plausible human grounds for
identifying a claim as check-worthy. We present CheckIt, a large, manually
annotated Twitter dataset for fine-grained claim check-worthiness. On CheckIt,
we benchmark CheckMate, a unified approach that jointly determines whether a
claim is check-worthy and the factors that led to that conclusion, and we
compare it with several baseline systems. Finally, we report a thorough
analysis of the results and a human assessment, validating the efficacy of
integrating check-worthiness factors in detecting claims worth fact-checking.
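The joint formulation at the heart of CheckMate can be pictured as a shared text encoder feeding two classification heads: a binary check-worthiness head and a multi-label head over the contributing factors. The sketch below is a minimal illustration under that assumption; the architecture, all names, and the three example factors (taken from the abstract) are ours, not the authors' released code.

```python
# Minimal sketch of a joint check-worthiness model in the spirit of CheckMate.
# Everything here is an illustrative assumption, not the paper's implementation.
import torch
import torch.nn as nn

# Example factors named in the abstract; the real label set may differ.
FACTORS = ["factual_correctness", "public_impact", "hate_incitement"]

class JointCheckWorthinessModel(nn.Module):
    def __init__(self, vocab_size: int = 30522, hidden: int = 256):
        super().__init__()
        # Stand-in encoder; in practice this would be a pretrained transformer.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.checkworthy_head = nn.Linear(hidden, 1)        # binary decision
        self.factor_head = nn.Linear(hidden, len(FACTORS))  # multi-label factors

    def forward(self, token_ids: torch.Tensor):
        hidden = self.encoder(self.embed(token_ids))
        pooled = hidden.mean(dim=1)  # simple mean pooling over tokens
        return self.checkworthy_head(pooled), self.factor_head(pooled)

# Joint training couples the two decisions: the factor loss grounds the
# check-worthiness decision in human-interpretable reasons.
model = JointCheckWorthinessModel()
tokens = torch.randint(0, 30522, (2, 32))  # batch of 2 tweets, 32 tokens each
cw_logits, factor_logits = model(tokens)
cw_target = torch.tensor([[1.0], [0.0]])
factor_target = torch.tensor([[1.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
loss = (nn.functional.binary_cross_entropy_with_logits(cw_logits, cw_target)
        + nn.functional.binary_cross_entropy_with_logits(factor_logits, factor_target))
loss.backward()
```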
Related papers
- From Chaos to Clarity: Claim Normalization to Empower Fact-Checking [57.024192702939736]
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
arXiv Detail & Related papers (2023-10-22T16:07:06Z)
- Complex Claim Verification with Evidence Retrieved in the Wild [73.19998942259073]
We present the first fully automated pipeline to check real-world claims by retrieving raw evidence from the web.
Our pipeline includes five components: claim decomposition, raw document retrieval, fine-grained evidence retrieval, claim-focused summarization, and veracity judgment (a skeleton of this composition is sketched after this list).
arXiv Detail & Related papers (2023-05-19T17:49:19Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead that retrieves evidence and verifies claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document [27.076320857009655]
Given an input document, the task is to detect all sentences containing a claim that can be verified against previously fact-checked claims.
The output is a re-ranked list of the document sentences, so that those that can be verified are ranked as high as possible.
Our analysis demonstrates the importance of modeling text similarity and stance, while also taking into account the veracity of the retrieved previously fact-checked claims.
arXiv Detail & Related papers (2021-09-14T13:46:52Z)
- Explainable Automated Fact-Checking for Public Health Claims [11.529816799331979]
We present the first study of explainable fact-checking for claims which require specific expertise.
For our case study we choose the setting of public health.
We explore two tasks: veracity prediction and explanation generation.
arXiv Detail & Related papers (2020-10-19T23:51:33Z)
- Too Many Claims to Fact-Check: Prioritizing Political Claims Based on Check-Worthiness [1.2891210250935146]
We propose a model that prioritizes claims based on their check-worthiness.
We use a BERT model with additional features, including domain-specific controversial topics and word embeddings.
arXiv Detail & Related papers (2020-04-17T10:55:07Z)
- Claim Check-Worthiness Detection as Positive Unlabelled Learning [53.24606510691877]
Claim check-worthiness detection is a critical component of fact checking systems.
We illuminate a central challenge underlying all variants of claim check-worthiness detection.
Our best-performing method is a unified approach that automatically corrects for this challenge using a variant of positive unlabelled learning (a generic PU sketch appears after this list).
arXiv Detail & Related papers (2020-03-05T16:06:07Z)
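As a concrete aside to the "Complex Claim Verification with Evidence Retrieved in the Wild" entry above, its five-component pipeline can be viewed as a straight function composition. The skeleton below is a hypothetical sketch: every function is a placeholder for a learned model or retrieval system, and none of the names come from the paper.

```python
# Hypothetical skeleton of the five-stage verification pipeline described in
# "Complex Claim Verification with Evidence Retrieved in the Wild".
# All functions are placeholder assumptions, not the authors' components.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str      # e.g. "supported", "refuted", "not enough info"
    rationale: str

def decompose(claim: str) -> list[str]:
    """1. Claim decomposition: split a complex claim into atomic subclaims."""
    return [claim]  # placeholder: treat the claim as a single subclaim

def retrieve_documents(subclaim: str) -> list[str]:
    """2. Raw document retrieval: fetch candidate web documents."""
    return [f"document mentioning: {subclaim}"]  # placeholder corpus hit

def retrieve_evidence(subclaim: str, docs: list[str]) -> list[str]:
    """3. Fine-grained evidence retrieval: select relevant passages."""
    return docs  # placeholder: keep every document as evidence

def summarize(subclaim: str, evidence: list[str]) -> str:
    """4. Claim-focused summarization: condense evidence around the claim."""
    return " ".join(evidence)

def judge(claim: str, summary: str) -> Verdict:
    """5. Veracity judgment: decide the label from the summary."""
    return Verdict(label="not enough info", rationale=summary)

def verify(claim: str) -> Verdict:
    """Compose the five stages end to end."""
    summaries = []
    for subclaim in decompose(claim):
        docs = retrieve_documents(subclaim)
        evidence = retrieve_evidence(subclaim, docs)
        summaries.append(summarize(subclaim, evidence))
    return judge(claim, " ".join(summaries))

print(verify("The new policy doubled vaccination rates in 2021."))
```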
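For the "Claim Check-Worthiness Detection as Positive Unlabelled Learning" entry, the general PU idea can be illustrated with the classic Elkan and Noto (2008) correction: treat annotated check-worthy claims as labelled positives, treat everything unannotated as unlabelled rather than truly negative, and rescale the classifier's scores. This is a generic PU sketch on toy data, not the paper's specific variant.

```python
# Compact positive-unlabelled (PU) learning illustration using the
# Elkan & Noto (2008) correction. Toy data; not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features: positives cluster around +1, negatives around -1.
pos = rng.normal(loc=1.0, scale=1.0, size=(200, 2))
neg = rng.normal(loc=-1.0, scale=1.0, size=(200, 2))

# Only some positives are labelled; the rest hide in the unlabelled pool.
labelled_pos, hidden_pos = pos[:80], pos[80:]
unlabelled = np.vstack([hidden_pos, neg])

# Step 1: train a "labelled vs unlabelled" classifier (s = 1 means labelled).
X = np.vstack([labelled_pos, unlabelled])
s = np.concatenate([np.ones(len(labelled_pos)), np.zeros(len(unlabelled))])
clf = LogisticRegression().fit(X, s)

# Step 2: estimate c = P(s=1 | y=1) as the mean score on known positives
# (in practice this is done on a held-out positive set).
c = clf.predict_proba(labelled_pos)[:, 1].mean()

# Step 3: correct the scores: P(y=1 | x) = P(s=1 | x) / c, clipped to [0, 1].
p_checkworthy = np.clip(clf.predict_proba(unlabelled)[:, 1] / c, 0.0, 1.0)
print(f"estimated c = {c:.2f}; "
      f"recovered {np.mean(p_checkworthy[:len(hidden_pos)] > 0.5):.0%} of hidden positives")
```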