From Chaos to Clarity: Claim Normalization to Empower Fact-Checking
- URL: http://arxiv.org/abs/2310.14338v3
- Date: Mon, 12 Feb 2024 06:30:44 GMT
- Title: From Chaos to Clarity: Claim Normalization to Empower Fact-Checking
- Authors: Megha Sundriyal, Tanmoy Chakraborty, Preslav Nakov
- Abstract summary: Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
- Score: 57.024192702939736
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: With the rise of social media, users are exposed to many misleading claims.
However, the pervasive noise inherent in these posts presents a challenge in
identifying precise and prominent claims that require verification. Extracting
the important claims from such posts is arduous and time-consuming, yet it is
an underexplored problem. Here, we aim to bridge this gap. We introduce a novel
task, Claim Normalization (aka ClaimNorm), which aims to decompose complex and
noisy social media posts into more straightforward and understandable forms,
termed normalized claims. We propose CACN, a pioneering approach that leverages
chain-of-thought and claim check-worthiness estimation, mimicking human
reasoning processes, to comprehend intricate claims. Moreover, we capitalize on
the in-context learning capabilities of large language models to provide
guidance and to improve claim normalization. To evaluate the effectiveness of
our proposed model, we meticulously compile a comprehensive real-world dataset,
CLAN, comprising more than 6k instances of social media posts alongside their
respective normalized claims. Our experiments demonstrate that CACN outperforms
several baselines across various evaluation measures. Finally, our rigorous
error analysis validates CACN's capabilities and pitfalls.
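To make the proposed pipeline concrete, here is a minimal sketch of how an LLM could be prompted for claim normalization in the spirit of CACN, combining a few in-context examples with a chain-of-thought style instruction. The prompt wording, the example posts, and the `call_llm` stub are illustrative assumptions, not the authors' implementation or the CLAN data.

```python
# Minimal sketch of LLM-based claim normalization in the spirit of CACN
# (chain-of-thought + in-context examples). Prompt text, example posts and
# the `call_llm` stub are illustrative assumptions, not the paper's code.

from typing import Callable, List, Tuple

IN_CONTEXT_EXAMPLES: List[Tuple[str, str]] = [
    (
        "BREAKING!!! they are hiding it but 5G towers were switched on the "
        "same week the outbreak started, wake up people #truth",
        "5G towers caused the disease outbreak.",
    ),
    (
        "my aunt said the new bill means the govt can read ALL your texts "
        "without a warrant... sharing just in case",
        "The new bill allows the government to read text messages without a warrant.",
    ),
]

def build_prompt(post: str) -> str:
    """Compose a few-shot, chain-of-thought prompt for claim normalization."""
    lines = [
        "Rewrite each noisy social media post as a single normalized claim.",
        "First reason step by step about which statement is central and "
        "check-worthy, then output the claim after 'Normalized claim:'.",
        "",
    ]
    for post_ex, claim_ex in IN_CONTEXT_EXAMPLES:
        lines += [f"Post: {post_ex}", f"Normalized claim: {claim_ex}", ""]
    lines += [f"Post: {post}", "Normalized claim:"]
    return "\n".join(lines)

def normalize_claim(post: str, call_llm: Callable[[str], str]) -> str:
    """`call_llm` is any text-in/text-out LLM interface supplied by the user."""
    completion = call_llm(build_prompt(post))
    out_lines = completion.strip().splitlines()
    # Keep only the first line of the completion as the normalized claim.
    return out_lines[0].strip() if out_lines else ""

if __name__ == "__main__":
    def echo_llm(prompt: str) -> str:
        # Dummy LLM so the sketch runs without any API; plug in a real model here.
        return "Vaccines contain microchips for tracking people."

    noisy_post = "OMG!! vaccines have MICROCHIPS in them so they can track u, RT!!"
    print(normalize_claim(noisy_post, echo_llm))
```

Swapping the stub for a real model call and adding a claim check-worthiness filter before prompting would bring the sketch closer to the full CACN setup described in the abstract.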
Related papers
- CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward [50.97588334916863]
We develop CompassVerifier, an accurate and robust lightweight verifier model for evaluation and outcome reward.
It demonstrates multi-domain competency spanning math, knowledge, and diverse reasoning tasks, with the capability to process various answer types.
We introduce the VerifierBench benchmark, comprising model outputs collected from multiple data sources and augmented through manual analysis of meta-error patterns to enhance CompassVerifier.
arXiv Detail & Related papers (2025-08-05T17:55:24Z)
- CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection [60.98964268961243]
We propose that guiding models to perform a systematic and comprehensive reasoning process allows them to make much finer-grained and more accurate entailment decisions.
We define a 3-step reasoning process, consisting of (i) claim decomposition, (ii) sub-claim attribution and entailment classification, and (iii) aggregated classification, showing that such guided reasoning indeed yields improved hallucination detection (a schematic sketch of this decompose-and-aggregate pattern appears after the list below).
arXiv Detail & Related papers (2025-06-05T17:02:52Z)
- A Generative-AI-Driven Claim Retrieval System Capable of Detecting and Retrieving Claims from Social Media Platforms in Multiple Languages [1.3331869040581863]
This research introduces an approach that retrieves previously fact-checked claims, evaluates their relevance to a given input, and provides supplementary information to support fact-checkers.
Our method employs large language models (LLMs) to filter irrelevant fact-checks and generate concise summaries and explanations.
Our results demonstrate that LLMs are able to filter out many irrelevant fact-checks and, therefore, reduce effort and streamline the fact-checking process.
arXiv Detail & Related papers (2025-04-29T11:49:05Z)
- CRAVE: A Conflicting Reasoning Approach for Explainable Claim Verification Using LLMs [15.170312674645535]
CRAVE is a Conflicting Reasoning Approach for explainable claim VErification.
It verifies complex claims based on conflicting rationales produced by large language models.
CRAVE achieves much better performance than state-of-the-art methods.
arXiv Detail & Related papers (2025-04-21T07:20:31Z)
- When Claims Evolve: Evaluating and Enhancing the Robustness of Embedding Models Against Misinformation Edits [5.443263983810103]
As users interact with claims online, they often introduce edits, and it remains unclear whether current embedding models are robust to such edits.
We introduce a perturbation framework that generates valid and natural claim variations, enabling us to assess the robustness of a wide range of sentence embedding models.
Our evaluation reveals that standard embedding models exhibit notable performance drops on edited claims, while LLM-distilled embedding models offer improved robustness at a higher computational cost.
arXiv Detail & Related papers (2025-03-05T11:47:32Z)
- Claim Extraction for Fact-Checking: Data, Models, and Automated Metrics [0.0]
We release the FEVERFact dataset, with 17K atomic factual claims extracted from 4K contextualised Wikipedia sentences.
For each metric, we implement a scale using a reduction to an already-explored NLP task.
We validate our metrics against human grading of generic claims, confirming that the model ranking on $F_{fact}$, our hardest metric, did not change.
arXiv Detail & Related papers (2025-02-07T14:20:45Z)
- FactLens: Benchmarking Fine-Grained Fact Verification [6.814173254027381]
We advocate for a shift toward fine-grained verification, where complex claims are broken down into smaller sub-claims for individual verification.
We introduce FactLens, a benchmark for evaluating fine-grained fact verification, with metrics and automated evaluators of sub-claim quality.
Our results show alignment between automated FactLens evaluators and human judgments, and we discuss the impact of sub-claim characteristics on the overall verification performance.
arXiv Detail & Related papers (2024-11-08T21:26:57Z)
- In-context Demonstration Matters: On Prompt Optimization for Pseudo-Supervision Refinement [71.60563181678323]
Large language models (LLMs) have achieved great success across diverse tasks, and fine-tuning is sometimes needed to further enhance generation quality.
To handle these challenges, a direct solution is to generate "high-confidence" data from unsupervised downstream tasks.
We propose a novel approach, the pseudo-supervised demonstrations aligned prompt optimization (PAPO) algorithm, which jointly refines both the prompt and the overall pseudo-supervision.
arXiv Detail & Related papers (2024-10-04T03:39:28Z)
- The Ability of Large Language Models to Evaluate Constraint-satisfaction in Agent Responses to Open-ended Requests [0.6249768559720121]
We develop and release a novel Arithmetic Constraint-Satisfaction (ACS) benchmarking dataset.
This dataset consists of complex user requests with corresponding constraints, agent responses and human labels indicating each constraint's satisfaction level in the response.
We show that most models still have a significant headroom for improvement, and that errors primarily stem from reasoning issues.
arXiv Detail & Related papers (2024-09-22T09:27:42Z)
- Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z)
- Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models [13.532180752491954]
We demonstrate a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales.
The breakdown is dramatic, as models show strong fluctuations across even slight problem variations that should not affect problem solving.
We take these initial observations to stimulate an urgent re-assessment of the claimed capabilities of the current generation of Large Language Models.
arXiv Detail & Related papers (2024-06-04T07:43:33Z)
- AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators [38.523194864405326]
AFaCTA is a novel framework that assists in the annotation of factual claims.
AFaCTA calibrates its annotation confidence with consistency along three predefined reasoning paths.
Our analyses also result in PoliClaim, a comprehensive claim detection dataset spanning diverse political topics.
arXiv Detail & Related papers (2024-02-16T20:59:57Z)
- CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering [56.592385613002584]
We propose Conceptualization-Augmented Reasoner (CAR) to tackle the task of zero-shot commonsense question answering.
CAR abstracts a commonsense knowledge triple to many higher-level instances, which increases the coverage of CommonSense Knowledge Bases.
CAR more robustly generalizes to answering questions about zero-shot commonsense scenarios than existing methods.
arXiv Detail & Related papers (2023-05-24T08:21:31Z)
- WiCE: Real-World Entailment for Claims in Wikipedia [63.234352061821625]
We propose WiCE, a new fine-grained textual entailment dataset built on natural claim and evidence pairs extracted from Wikipedia.
In addition to standard claim-level entailment, WiCE provides entailment judgments over sub-sentence units of the claim.
We show that real claims in our dataset involve challenging verification and retrieval problems that existing models fail to address.
arXiv Detail & Related papers (2023-03-02T17:45:32Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
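Several of the related papers above (e.g. CLATTER, FactLens, ClaimDecomp) share a decompose-and-aggregate pattern: split a complex claim into sub-claims, judge each against evidence, then combine the judgments. The sketch below, referenced from the CLATTER entry, illustrates only the aggregation step under simplified assumptions; the `SubClaim` structure and the strict all-entailed rule are illustrative, not any specific paper's method.

```python
# Schematic sketch of the decompose-and-aggregate pattern shared by several
# related papers above. Decomposition and per-sub-claim entailment scores are
# stubbed out; a real system would obtain both from an NLI model or an LLM.

from dataclasses import dataclass
from typing import List

@dataclass
class SubClaim:
    text: str
    entailment_prob: float  # P(evidence entails this sub-claim), e.g. from an NLI model

def aggregate(sub_claims: List[SubClaim], threshold: float = 0.5) -> str:
    """Label the full claim as supported only if every sub-claim is entailed."""
    if not sub_claims:
        return "not enough information"
    if all(sc.entailment_prob >= threshold for sc in sub_claims):
        return "supported"
    # A single unsupported sub-claim is enough to flag the whole claim.
    return "not supported"

if __name__ == "__main__":
    decomposed = [
        SubClaim("The study was published in 2021.", 0.93),
        SubClaim("The study covered 10,000 participants.", 0.31),
    ]
    print(aggregate(decomposed))  # -> "not supported"
```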
This list is automatically generated from the titles and abstracts of the papers on this site.