CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense
Question Answering
- URL: http://arxiv.org/abs/2305.14869v2
- Date: Fri, 20 Oct 2023 04:49:21 GMT
- Title: CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense
Question Answering
- Authors: Weiqi Wang, Tianqing Fang, Wenxuan Ding, Baixuan Xu, Xin Liu, Yangqiu
Song, Antoine Bosselut
- Abstract summary: We propose Conceptualization-Augmented Reasoner (CAR) to tackle the task of zero-shot commonsense question answering.
CAR abstracts a commonsense knowledge triple into many higher-level instances, which increases the coverage of CommonSense Knowledge Bases.
CAR generalizes more robustly to answering questions about zero-shot commonsense scenarios than existing methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of zero-shot commonsense question answering evaluates models on
their capacity to reason about general scenarios beyond those presented in
specific datasets. Existing approaches for tackling this task leverage external
knowledge from CommonSense Knowledge Bases (CSKBs) by pretraining the model on
synthetic QA pairs constructed from CSKBs. In these approaches, negative
examples (distractors) are formulated by randomly sampling from CSKBs using
fairly primitive keyword constraints. However, two bottlenecks limit these
approaches: the inherent incompleteness of CSKBs limits the semantic coverage
of synthetic QA pairs, and the lack of human annotations makes the sampled
negative examples potentially uninformative and contradictory. To tackle these
limitations above, we propose Conceptualization-Augmented Reasoner (CAR), a
zero-shot commonsense question-answering framework that fully leverages the
power of conceptualization. Specifically, CAR abstracts a commonsense knowledge
triple into many higher-level instances, which increases the coverage of the CSKB
and expands the ground-truth answer space, reducing the likelihood of selecting
false-negative distractors. Extensive experiments demonstrate that CAR more
robustly generalizes to answering questions about zero-shot commonsense
scenarios than existing methods, including large language models, such as
GPT-3.5 and ChatGPT. Our code, data, and model checkpoints are available at
https://github.com/HKUST-KnowComp/CAR.
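The core idea in the abstract can be sketched in a few lines: abstract a triple's head into higher-level concepts, use the enlarged ground-truth answer space when synthesizing QA pairs, and drop sampled negatives that the expanded space reveals to be valid answers. This is a minimal toy illustration of that mechanism, not CAR's actual implementation; the taxonomy, relation names, and helper functions below are all hypothetical.

```python
# Toy sketch of conceptualization-augmented QA synthesis.
# TOY_TAXONOMY and the triples are invented examples, not CAR's data.

TOY_TAXONOMY = {
    "buy a ticket": ["make a purchase", "prepare for an event"],
    "watch a movie": ["enjoy entertainment", "relax"],
}

def conceptualize(event):
    """Return the event together with its higher-level abstractions."""
    return {event, *TOY_TAXONOMY.get(event, [])}

def answer_space(triples, head, relation):
    """Collect every tail whose (conceptualized) head matches the query."""
    heads = conceptualize(head)
    return {t for h, r, t in triples if r == relation and h in heads}

def filter_distractors(candidates, truths):
    """Drop sampled negatives that are actually valid answers."""
    return [c for c in candidates if c not in truths]

triples = [
    ("buy a ticket", "xWant", "watch a movie"),
    ("make a purchase", "xWant", "keep the receipt"),
]

truths = answer_space(triples, "buy a ticket", "xWant")
# Conceptualization links "buy a ticket" to "make a purchase", so
# "keep the receipt" enters the answer space and is no longer usable
# as a false-negative distractor.
distractors = filter_distractors(["keep the receipt", "fall asleep"], truths)
```

Without the conceptualization step, a keyword-constrained random sampler could pick "keep the receipt" as a distractor even though it is a plausible answer, which is exactly the false-negative problem the abstract describes.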
Related papers
- ConstraintChecker: A Plugin for Large Language Models to Reason on
Commonsense Knowledge Bases [53.29427395419317]
Reasoning over Commonsense Knowledge Bases (CSKB) has been explored as a way to acquire new commonsense knowledge.
We propose ConstraintChecker, a plugin over prompting techniques to provide and check explicit constraints.
arXiv Detail & Related papers (2024-01-25T08:03:38Z)
- From Chaos to Clarity: Claim Normalization to Empower Fact-Checking [57.024192702939736]
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
arXiv Detail & Related papers (2023-10-22T16:07:06Z)
- QADYNAMICS: Training Dynamics-Driven Synthetic QA Diagnostic for Zero-Shot Commonsense Question Answering [48.25449258017601]
State-of-the-art approaches fine-tune language models on QA pairs constructed from CommonSense Knowledge Bases.
We propose QADYNAMICS, a training dynamics-driven framework for QA diagnostics and refinement.
arXiv Detail & Related papers (2023-10-17T14:27:34Z)
- Advanced Semantics for Commonsense Knowledge Extraction [32.43213645631101]
Commonsense knowledge (CSK) about concepts and their properties is useful for AI applications such as robust chatbots.
This paper presents a methodology, called Ascent, to automatically build a large-scale knowledge base (KB) of CSK assertions.
Ascent goes beyond triples by capturing composite concepts with subgroups and aspects, and by refining assertions with semantic facets.
arXiv Detail & Related papers (2020-11-02T11:37:17Z)
- Counterfactual Variable Control for Robust and Interpretable Question Answering [57.25261576239862]
Deep neural network based question answering (QA) models are neither robust nor explainable in many cases.
In this paper, we inspect such spurious "capability" of QA models using causal inference.
We propose a novel approach called Counterfactual Variable Control (CVC) that explicitly mitigates any shortcut correlation.
arXiv Detail & Related papers (2020-10-12T10:09:05Z)
- BoxE: A Box Embedding Model for Knowledge Base Completion [53.57588201197374]
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Existing embedding models each suffer from at least one of several known limitations.
BoxE embeds entities as points and relations as sets of hyper-rectangles (or boxes).
arXiv Detail & Related papers (2020-07-13T09:40:49Z)
- Joint Reasoning for Multi-Faceted Commonsense Knowledge [28.856786775318486]
Commonsense knowledge (CSK) supports a variety of AI applications, from visual understanding to chatbots.
Prior works on acquiring CSK have compiled statements that associate concepts, like everyday objects or activities, with properties that hold for most or some instances of the concept.
This paper introduces a multi-faceted model of CSK statements and methods for joint reasoning over sets of inter-related statements.
arXiv Detail & Related papers (2020-01-13T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.