Navigating the Noisy Crowd: Finding Key Information for Claim Verification
- URL: http://arxiv.org/abs/2407.12425v1
- Date: Wed, 17 Jul 2024 09:24:10 GMT
- Title: Navigating the Noisy Crowd: Finding Key Information for Claim Verification
- Authors: Haisong Gong, Huanhuan Ma, Qiang Liu, Shu Wu, Liang Wang
- Abstract summary: We propose EACon, a framework designed to find key information within evidence and verify each aspect of a claim separately.
EACon finds keywords from the claim and employs fuzzy matching to select relevant keywords for each raw evidence piece.
EACon then deconstructs the original claim into subclaims, which are verified against both abstracted and raw evidence individually.
- Score: 19.769771741059408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Claim verification is a task that involves assessing the truthfulness of a given claim based on multiple evidence pieces. Using large language models (LLMs) for claim verification is a promising approach. However, simply feeding all the evidence pieces to an LLM and asking whether the claim is factual does not yield good results. The challenge lies in the noisy nature of both the evidence and the claim: evidence passages typically contain irrelevant information, with the key facts hidden within the context, while claims often convey multiple aspects simultaneously. To navigate this "noisy crowd" of information, we propose EACon (Evidence Abstraction and Claim Deconstruction), a framework designed to find key information within evidence and verify each aspect of a claim separately. EACon first finds keywords from the claim and employs fuzzy matching to select relevant keywords for each raw evidence piece. These keywords serve as a guide to extract and summarize critical information into abstracted evidence. Subsequently, EACon deconstructs the original claim into subclaims, which are then verified against both abstracted and raw evidence individually. We evaluate EACon using two open-source LLMs on two challenging datasets. Results demonstrate that EACon consistently and substantially improves LLMs' performance in claim verification.
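To make the pipeline concrete, below is a minimal Python sketch of the keyword-guided fuzzy matching and subclaim verdict aggregation described above. The function names, the difflib-based matcher, the 0.8 similarity cutoff, and the aggregation rule are all illustrative assumptions, not EACon's actual implementation.

```python
# Illustrative sketch only: the abstract does not specify the fuzzy-matching
# algorithm or the verdict-aggregation rule; difflib and the 0.8 cutoff
# are assumptions made for this example.
from difflib import get_close_matches


def select_keywords(claim_keywords: list[str], evidence: str,
                    cutoff: float = 0.8) -> list[str]:
    """Keep the claim keywords that fuzzily match a token in this evidence piece."""
    tokens = evidence.lower().split()
    return [kw for kw in claim_keywords
            if get_close_matches(kw.lower(), tokens, n=1, cutoff=cutoff)]


def aggregate_verdicts(subclaim_verdicts: list[str]) -> str:
    """Hypothetical rule: refuted if any subclaim is refuted, supported only
    if every subclaim is supported, otherwise not enough info."""
    if any(v == "REFUTED" for v in subclaim_verdicts):
        return "REFUTED"
    if all(v == "SUPPORTED" for v in subclaim_verdicts):
        return "SUPPORTED"
    return "NOT ENOUGH INFO"


# The selected keywords would guide an LLM prompt that extracts and
# summarizes the key facts into "abstracted evidence"; each subclaim is
# then checked against both abstracted and raw evidence.
evidence = "Albert Einstein received the Nobel Prize in Physics in 1921."
print(select_keywords(["Einstein", "Nobel", "1921", "violin"], evidence))
# -> ['Einstein', 'Nobel', '1921']
print(aggregate_verdicts(["SUPPORTED", "SUPPORTED"]))  # -> SUPPORTED
```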
Related papers
- Contrastive Learning to Improve Retrieval for Real-world Fact Checking [84.57583869042791]
We present Contrastive Fact-Checking Reranker (CFR), an improved retriever for fact-checking complex claims.
We leverage the AVeriTeC dataset, which annotates subquestions for claims with human written answers from evidence documents.
We find a 6% improvement in veracity classification accuracy on the dataset.
arXiv Detail & Related papers (2024-10-07T00:09:50Z)
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models [36.91218391728405]
This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning.
It can verify complex claims and generate explanations without the need for annotated evidence.
Our experiment results indicate that FOLK outperforms strong baselines on three datasets.
arXiv Detail & Related papers (2023-10-08T18:04:05Z)
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- Complex Claim Verification with Evidence Retrieved in the Wild [73.19998942259073]
We present the first fully automated pipeline to check real-world claims by retrieving raw evidence from the web.
Our pipeline includes five components: claim decomposition, raw document retrieval, fine-grained evidence retrieval, claim-focused summarization, and veracity judgment.
arXiv Detail & Related papers (2023-05-19T17:49:19Z)
- Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification [80.31112722910787]
We propose Decker, a commonsense fact verification model that is capable of bridging heterogeneous knowledge.
Experimental results on two commonsense fact verification benchmark datasets, CSQA2.0 and CREAK, demonstrate the effectiveness of Decker.
arXiv Detail & Related papers (2023-05-10T06:28:16Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead to retrieve evidence and verify claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact Verification [19.130541561303293]
We propose a novel topic-aware evidence reasoning and stance-aware aggregation model for fact verification.
Tests conducted on two benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art approaches for fact verification.
arXiv Detail & Related papers (2021-06-02T14:33:12Z)
- Hierarchical Evidence Set Modeling for Automated Fact Extraction and Verification [5.836068916903788]
Hierarchical Evidence Set Modeling (HESM) is a framework that extracts evidence sets and classifies a claim as supported, refuted, or not enough info.
Our experimental results show that HESM outperforms 7 state-of-the-art methods for fact extraction and claim verification.
arXiv Detail & Related papers (2020-10-10T22:27:17Z)
- DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification [16.144566353074314]
We propose a Decision Tree-based Co-Attention model (DTCA) to discover evidence for explainable claim verification.
Specifically, we first construct a Decision Tree-based Evidence model (DTE) to select comments with high credibility as evidence in a transparent and interpretable way.
We then design Co-attention Self-attention networks (CaSa) to make the selected evidence interact with claims.
arXiv Detail & Related papers (2020-04-28T12:19:46Z)