DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim
Verification
- URL: http://arxiv.org/abs/2004.13455v1
- Date: Tue, 28 Apr 2020 12:19:46 GMT
- Title: DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim
Verification
- Authors: Lianwei Wu, Yuan Rao, Yongqiang Zhao, Hao Liang, Ambreen Nazir
- Abstract summary: We propose a Decision Tree-based Co-Attention model (DTCA) to discover evidence for explainable claim verification.
Specifically, we first construct a Decision Tree-based Evidence model (DTE) to select comments with high credibility as evidence in a transparent and interpretable way.
We then design Co-attention Self-attention networks (CaSa) to make the selected evidence interact with claims.
- Score: 16.144566353074314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, many widely recognized methods have used appropriate neural networks to discover effective evidence from reliable sources for explainable claim verification. However, in these methods the process of discovering evidence is neither transparent nor explained. Moreover, the discovered evidence only roughly addresses the interpretability of the whole claim sequence and is insufficient for focusing on the false parts of claims. In this paper, we propose a Decision Tree-based Co-Attention model (DTCA) to discover evidence for explainable claim verification. Specifically, we first construct a Decision Tree-based Evidence model (DTE) to select comments with high credibility as evidence in a transparent and interpretable way. We then design Co-attention Self-attention networks (CaSa) to make the selected evidence interact with claims, which serves 1) to train DTE to determine the optimal decision thresholds and obtain stronger evidence, and 2) to utilize the evidence to find the false parts of the claim. Experiments on two public datasets, RumourEval and PHEME, demonstrate that DTCA not only provides explanations for the results of claim verification but also achieves state-of-the-art performance, boosting the F1-score by 3.11% and 2.41%, respectively.
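To make the two-stage pipeline concrete, here is a minimal, hypothetical Python sketch rather than the authors' implementation: a hand-written threshold filter stands in for DTE, a single dot-product attention stands in for CaSa, and the feature names, thresholds, and embeddings are invented for the example.

```python
# Hedged sketch of the DTCA two-stage idea; all names and features are made up.
import numpy as np

def select_credible_comments(comments, thresholds):
    """DTE stand-in: keep a comment only if every credibility feature clears
    its threshold, which keeps the selection transparent and easy to inspect."""
    return [
        c for c in comments
        if all(c["features"][name] >= t for name, t in thresholds.items())
    ]

def co_attention_salience(claim_emb, evidence_emb):
    """CaSa stand-in: score each claim token against the evidence tokens and
    return a per-claim-token salience, so the tokens the evidence bears on most
    strongly can be highlighted as potentially false parts."""
    affinity = claim_emb @ evidence_emb.T                    # (n_claim, n_evidence)
    weights = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)            # softmax over evidence
    return weights.max(axis=1)                               # salience per claim token

# Toy usage with invented credibility features and random embeddings.
comments = [
    {"text": "cites a source, details consistent",
     "features": {"credibility": 0.9, "relevance": 0.8}},
    {"text": "off-topic joke",
     "features": {"credibility": 0.2, "relevance": 0.1}},
]
evidence = select_credible_comments(comments, {"credibility": 0.5, "relevance": 0.5})
claim_emb, evid_emb = np.random.rand(6, 32), np.random.rand(4, 32)
print(len(evidence), co_attention_salience(claim_emb, evid_emb).round(2))
```

In the paper itself, the signal from CaSa is also used to tune DTE's decision thresholds and thereby obtain stronger evidence; this sketch omits that feedback loop.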
Related papers
- Robust Claim Verification Through Fact Detection [17.29665711917281]
Our novel approach, FactDetect, leverages Large Language Models (LLMs) to generate concise factual statements from evidence.
The generated facts are then combined with the claim and evidence.
Our method demonstrates competitive results in the supervised claim verification setting, with a 15% improvement on the F1 score.
arXiv Detail & Related papers (2024-07-25T20:03:43Z)
- Navigating the Noisy Crowd: Finding Key Information for Claim Verification [19.769771741059408]
We propose EACon, a framework designed to find key information within evidence and verify each aspect of a claim separately.
EACon finds keywords from the claim and employs fuzzy matching to select relevant keywords for each raw evidence piece (a minimal fuzzy-matching sketch appears after this list).
EACon deconstructs the original claim into subclaims, which are then verified against both abstracted and raw evidence individually.
arXiv Detail & Related papers (2024-07-17T09:24:10Z)
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- Complex Claim Verification with Evidence Retrieved in the Wild [73.19998942259073]
We present the first fully automated pipeline to check real-world claims by retrieving raw evidence from the web.
Our pipeline includes five components: claim decomposition, raw document retrieval, fine-grained evidence retrieval, claim-focused summarization, and veracity judgment.
arXiv Detail & Related papers (2023-05-19T17:49:19Z)
- Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification [80.31112722910787]
We propose Decker, a commonsense fact verification model that is capable of bridging heterogeneous knowledge.
Experimental results on two commonsense fact verification benchmark datasets, CSQA2.0 and CREAK, demonstrate the effectiveness of Decker.
arXiv Detail & Related papers (2023-05-10T06:28:16Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead to retrieve evidence and verify claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- GERE: Generative Evidence Retrieval for Fact Verification [57.78768817972026]
We propose GERE, the first system that retrieves evidence in a generative fashion.
The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-12T03:49:35Z)
- Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact Verification [19.130541561303293]
We propose a novel topic-aware evidence reasoning and stance-aware aggregation model for fact verification.
Tests conducted on two benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art approaches for fact verification.
arXiv Detail & Related papers (2021-06-02T14:33:12Z)
- AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models that predict veracity while handling this ambiguity via soft labels.
arXiv Detail & Related papers (2021-04-01T17:40:08Z)
- Hierarchical Evidence Set Modeling for Automated Fact Extraction and Verification [5.836068916903788]
Hierarchical Evidence Set Modeling (HESM) is a framework that extracts evidence sets and verifies whether a claim is supported, refuted, or lacks enough information.
Our experimental results show that HESM outperforms 7 state-of-the-art methods for fact extraction and claim verification.
arXiv Detail & Related papers (2020-10-10T22:27:17Z)
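As an illustration of the fuzzy keyword matching mentioned in the EACon summary above, here is a minimal, hypothetical Python sketch, not any of these papers' actual code; the keywords, threshold, and evidence snippets are invented for the example.

```python
# Hypothetical keyword-based evidence filtering with fuzzy string matching.
from difflib import SequenceMatcher

def fuzzy_match(keyword: str, text: str, threshold: float = 0.8) -> bool:
    """Return True if any word in `text` is similar enough to `keyword`."""
    return any(
        SequenceMatcher(None, keyword.lower(), word.lower()).ratio() >= threshold
        for word in text.split()
    )

def filter_evidence(claim_keywords, evidence_pieces, threshold=0.8):
    """Keep evidence pieces that fuzzily contain at least one claim keyword."""
    return [
        piece for piece in evidence_pieces
        if any(fuzzy_match(kw, piece, threshold) for kw in claim_keywords)
    ]

# Toy usage: the misspelled "vacine" still matches the keyword "vaccine".
keywords = ["vaccine", "approval"]
snippets = ["The vacine was approved in 2021.", "The weather was sunny that day."]
print(filter_evidence(keywords, snippets))  # keeps only the first snippet
```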
This list is automatically generated from the titles and abstracts of the papers on this site.