Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact Verification
- URL: http://arxiv.org/abs/2106.01191v1
- Date: Wed, 2 Jun 2021 14:33:12 GMT
- Title: Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact Verification
- Authors: Jiasheng Si, Deyu Zhou, Tongzhe Li, Xingyu Shi, Yulan He
- Abstract summary: We propose a novel topic-aware evidence reasoning and stance-aware aggregation model for fact verification.
Tests conducted on two benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art approaches for fact verification.
- Score: 19.130541561303293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fact verification is a challenging task that requires simultaneously
reasoning and aggregating over multiple retrieved pieces of evidence to
evaluate the truthfulness of a claim. Existing approaches typically (i) explore
the semantic interaction between the claim and evidence at different
granularity levels but fail to capture their topical consistency during the
reasoning process, which we believe is crucial for verification; (ii) aggregate
multiple pieces of evidence equally without considering their implicit stances
to the claim, thereby introducing spurious information. To alleviate the above
issues, we propose a novel topic-aware evidence reasoning and stance-aware
aggregation model for more accurate fact verification, with the following four
key properties: 1) checking topical consistency between the claim and evidence;
2) maintaining topical coherence among multiple pieces of evidence; 3) ensuring
semantic similarity between the global topic information and the semantic
representation of evidence; 4) aggregating evidence based on their implicit
stances to the claim. Extensive experiments conducted on the two benchmark
datasets demonstrate the superiority of the proposed model over several
state-of-the-art approaches for fact verification. The source code can be
obtained from https://github.com/jasenchn/TARSA.
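To make property 4 concrete, the snippet below is a minimal sketch of stance-aware aggregation: each evidence vector is scored for its implicit stance toward the claim, and the scores weight the aggregation before the final veracity prediction. It is not the authors' implementation (see the TARSA repository linked above); the bilinear scorer, hidden size, and three-way label set are illustrative assumptions.

```python
# Minimal sketch of stance-aware evidence aggregation (property 4 above).
# NOT the authors' TARSA code; the bilinear stance scorer, hidden size,
# and label set are assumptions for illustration only.
import torch
import torch.nn as nn


class StanceAwareAggregator(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_labels: int = 3):
        super().__init__()
        # Bilinear form scoring how each evidence piece stands toward the claim.
        self.stance_scorer = nn.Bilinear(hidden_dim, hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, claim_vec: torch.Tensor, evidence_vecs: torch.Tensor):
        # claim_vec: (batch, hidden); evidence_vecs: (batch, n_evidence, hidden)
        n_evidence = evidence_vecs.size(1)
        claim_rep = claim_vec.unsqueeze(1).expand(-1, n_evidence, -1)
        # Implicit stance score for every (claim, evidence) pair.
        stance_logits = self.stance_scorer(claim_rep.contiguous(),
                                           evidence_vecs.contiguous())
        stance_weights = torch.softmax(stance_logits, dim=1)  # (batch, n, 1)
        # Weight evidence by stance instead of treating all pieces equally.
        aggregated = (stance_weights * evidence_vecs).sum(dim=1)
        return self.classifier(torch.cat([claim_vec, aggregated], dim=-1))


# Usage with random features standing in for encoder outputs:
model = StanceAwareAggregator()
claim = torch.randn(2, 768)
evidence = torch.randn(2, 5, 768)
logits = model(claim, evidence)  # (2, 3): SUPPORTS / REFUTES / NOT ENOUGH INFO
```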
Related papers
- Navigating the Noisy Crowd: Finding Key Information for Claim Verification [19.769771741059408]
We propose EACon, a framework designed to find key information within evidence and verify each aspect of a claim separately.
EACon finds keywords from the claim and employs fuzzy matching to select relevant keywords for each raw evidence piece (a toy sketch of this fuzzy matching step appears after this list).
EACon deconstructs the original claim into subclaims, which are then verified against both abstracted and raw evidence individually.
arXiv Detail & Related papers (2024-07-17T09:24:10Z)
- EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification [22.785622371421876]
We present a pioneering dataset for multi-hop explainable fact verification.
The dataset contains over 60,000 claims involving 2-hop and 3-hop reasoning, each created by summarizing and modifying information from hyperlinked Wikipedia documents.
We demonstrate a novel baseline system on our EX-FEVER dataset, showcasing document retrieval, explanation generation, and claim verification.
arXiv Detail & Related papers (2023-10-15T06:46:15Z)
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification [80.31112722910787]
We propose Decker, a commonsense fact verification model that is capable of bridging heterogeneous knowledge.
Experimental results on two commonsense fact verification benchmark datasets, CSQA2.0 and CREAK, demonstrate the effectiveness of Decker.
arXiv Detail & Related papers (2023-05-10T06:28:16Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead to retrieve evidence and verify claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- GERE: Generative Evidence Retrieval for Fact Verification [57.78768817972026]
We propose GERE, the first system that retrieves evidence in a generative fashion.
The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-12T03:49:35Z)
- AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models for predicting veracity handling this ambiguity via soft labels.
arXiv Detail & Related papers (2021-04-01T17:40:08Z)
- Hierarchical Evidence Set Modeling for Automated Fact Extraction and Verification [5.836068916903788]
Hierarchical Evidence Set Modeling (HESM) is a framework that extracts evidence sets and verifies whether a claim is supported, refuted, or lacks enough information.
Our experimental results show that HESM outperforms 7 state-of-the-art methods for fact extraction and claim verification.
arXiv Detail & Related papers (2020-10-10T22:27:17Z)
- DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification [16.144566353074314]
We propose a Decision Tree-based Co-Attention model (DTCA) to discover evidence for explainable claim verification.
Specifically, we first construct a Decision Tree-based Evidence model (DTE) to select comments with high credibility as evidence in a transparent and interpretable way.
We then design Co-attention Self-attention networks (CaSa) to make the selected evidence interact with claims.
arXiv Detail & Related papers (2020-04-28T12:19:46Z)
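As referenced in the EACon entry above, the snippet below is a toy sketch of fuzzy keyword matching for selecting evidence pieces: keywords taken from the claim are matched against the tokens of each raw evidence piece, and pieces with enough close matches are kept. The tokenization, difflib-based scoring, and thresholds are assumptions for illustration, not EACon's actual implementation.

```python
# Toy sketch of fuzzy keyword matching for evidence selection.
# The cutoff, whitespace tokenization, and hit count are illustrative
# assumptions, not the EACon paper's actual procedure.
from difflib import SequenceMatcher


def fuzzy_overlap(keyword: str, evidence_tokens: list[str], cutoff: float = 0.8) -> bool:
    """Return True if any evidence token is a close fuzzy match to the keyword."""
    return any(SequenceMatcher(None, keyword.lower(), tok.lower()).ratio() >= cutoff
               for tok in evidence_tokens)


def select_evidence(claim_keywords: list[str], evidence_pieces: list[str],
                    min_hits: int = 1) -> list[str]:
    """Keep evidence pieces that fuzzily contain at least `min_hits` claim keywords."""
    selected = []
    for piece in evidence_pieces:
        tokens = piece.split()
        hits = sum(fuzzy_overlap(kw, tokens) for kw in claim_keywords)
        if hits >= min_hits:
            selected.append(piece)
    return selected


# Usage: the misspelled "Einsten" still matches "Einstein" under the cutoff.
keywords = ["Einstein", "Nobel", "1921"]
evidence = ["Albert Einsten received the Nobel Prize in Physics in 1921.",
            "The Eiffel Tower is located in Paris."]
print(select_evidence(keywords, evidence))  # keeps only the first piece
```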