Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact
Verification
- URL: http://arxiv.org/abs/2305.05921v2
- Date: Sat, 27 May 2023 08:49:05 GMT
- Title: Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact
Verification
- Authors: Anni Zou, Zhuosheng Zhang and Hai Zhao
- Abstract summary: We propose Decker, a commonsense fact verification model that is capable of bridging heterogeneous knowledge.
Experimental results on two commonsense fact verification benchmark datasets, CSQA2.0 and CREAK, demonstrate the effectiveness of our Decker.
- Score: 80.31112722910787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Commonsense fact verification, as a challenging branch of commonsense
question-answering (QA), aims to verify through facts whether a given
commonsense claim is correct or not. Answering commonsense questions
necessitates a combination of knowledge from various levels. However, existing
studies primarily rely on either unstructured evidence or potential reasoning
paths from structured knowledge bases, and thus fail to exploit the benefits of
heterogeneous knowledge simultaneously. In light of this, we
propose Decker, a commonsense fact verification model that is capable of
bridging heterogeneous knowledge by uncovering latent relationships between
structured and unstructured knowledge. Experimental results on two commonsense
fact verification benchmark datasets, CSQA2.0 and CREAK, demonstrate the
effectiveness of our Decker, and further analysis verifies its capability to
capture more valuable information through reasoning.
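To make the idea of bridging heterogeneous knowledge concrete, here is a minimal, purely illustrative sketch of late fusion between structured (KB-triple) and unstructured (retrieved-text) evidence for a commonsense claim. All function names, scoring rules, and weights (`alpha`, `threshold`) are hypothetical assumptions for illustration, not Decker's actual architecture.

```python
# Illustrative sketch only: naive late fusion of structured (KB triple) and
# unstructured (text snippet) evidence scores for a commonsense claim.
# Names and weights are hypothetical; this is not Decker's actual method.

def kb_evidence_score(claim_entities, kb_triples):
    """Fraction of ordered claim-entity pairs linked by some KB triple."""
    pairs = [(a, b) for a in claim_entities for b in claim_entities if a != b]
    if not pairs:
        return 0.0
    linked = {(h, t) for h, _, t in kb_triples}
    hits = sum(1 for p in pairs if p in linked)
    return hits / len(pairs)

def text_evidence_score(claim_tokens, snippet_tokens):
    """Jaccard token overlap between the claim and a retrieved snippet."""
    c, s = set(claim_tokens), set(snippet_tokens)
    return len(c & s) / len(c | s) if c | s else 0.0

def verify(claim_entities, claim_tokens, kb_triples, snippet_tokens,
           alpha=0.5, threshold=0.3):
    """Weighted mix of both evidence channels; returns (score, verdict)."""
    score = (alpha * kb_evidence_score(claim_entities, kb_triples)
             + (1 - alpha) * text_evidence_score(claim_tokens, snippet_tokens))
    return score, score >= threshold
```

A real model would replace both hand-crafted scores with learned representations and let the two channels interact during reasoning rather than only at the final score.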
Related papers
- What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them in three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that a large portion of instances in the two datasets do not involve commonsense knowledge, and that there is a large performance gap between the two subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z)
- Knowledge Localization: Mission Not Accomplished? Enter Query Localization! [19.16542466297147]
The Knowledge Neuron (KN) thesis is a prominent theory for explaining how factual knowledge is stored in language models.
We re-examine the knowledge localization (KL) assumption and confirm the existence of facts that do not adhere to it from both statistical and knowledge modification perspectives.
We propose the Consistency-Aware KN modification method, which improves the performance of knowledge modification.
arXiv Detail & Related papers (2024-05-23T02:44:12Z)
- Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z)
- DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering [34.70206857546496]
Question answering models commonly have access to two sources of "knowledge" during inference time.
It is unclear whether the answer stems from the given non-parametric knowledge or not.
We propose a new paradigm in which QA models are trained to disentangle the two sources of knowledge.
arXiv Detail & Related papers (2022-11-10T15:34:44Z)
- CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm [120.98789964518562]
We argue that due to the large scale of commonsense knowledge, it is infeasible to annotate a large enough training set for each task to cover all commonsense for learning.
We focus on investigating models' commonsense inference capabilities from two perspectives.
We name the benchmark Commonsense Inference with Knowledge-in-the-loop Question Answering (CIKQA).
arXiv Detail & Related papers (2022-10-12T14:32:39Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact Verification [19.130541561303293]
We propose a novel topic-aware evidence reasoning and stance-aware aggregation model for fact verification.
Tests conducted on two benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art approaches for fact verification.
arXiv Detail & Related papers (2021-06-02T14:33:12Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting when the knowledge required to answer a question is not given/annotated, neither at training nor test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification [16.144566353074314]
We propose a Decision Tree-based Co-Attention model (DTCA) to discover evidence for explainable claim verification.
Specifically, we first construct Decision Tree-based Evidence model (DTE) to select comments with high credibility as evidence in a transparent and interpretable way.
We then design Co-attention Self-attention networks (CaSa) to make the selected evidence interact with claims.
arXiv Detail & Related papers (2020-04-28T12:19:46Z)
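Several of the entries above (DTCA's CaSa networks in particular) rely on co-attention between a claim and its evidence. The following is a minimal, hypothetical sketch of a generic co-attention step under assumed dense token embeddings; it is not DTCA's exact CaSa design, just the shared-affinity pattern in which each side attends over the other.

```python
# Hypothetical sketch of claim-evidence co-attention (not DTCA's exact CaSa
# design): both sides attend over each other via one shared affinity matrix.
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(claim, evidence):
    """claim: (m, d), evidence: (n, d) token embeddings.

    Returns (claim_ctx, evid_ctx): for each claim token, an
    evidence summary, and for each evidence token, a claim summary.
    """
    d = claim.shape[1]
    affinity = claim @ evidence.T / np.sqrt(d)        # (m, n) shared scores
    claim_ctx = softmax(affinity, axis=1) @ evidence  # (m, d)
    evid_ctx = softmax(affinity.T, axis=1) @ claim    # (n, d)
    return claim_ctx, evid_ctx
```

In a full model the two context matrices would feed further self-attention or classification layers; the sketch only shows the symmetric attention exchange itself.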
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.