CIKQA: Learning Commonsense Inference with a Unified
Knowledge-in-the-loop QA Paradigm
- URL: http://arxiv.org/abs/2210.06246v1
- Date: Wed, 12 Oct 2022 14:32:39 GMT
- Title: CIKQA: Learning Commonsense Inference with a Unified
Knowledge-in-the-loop QA Paradigm
- Authors: Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav
Goldberg, Dan Roth
- Abstract summary: We argue that, given the large scale of commonsense knowledge, it is infeasible to annotate a training set large enough for each task to cover all the commonsense required for learning.
We focus on investigating models' commonsense inference capabilities from two perspectives.
We name the benchmark Commonsense Inference with Knowledge-in-the-loop Question Answering (CIKQA).
- Score: 120.98789964518562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the community has achieved substantial progress on many commonsense
reasoning benchmarks. However, it is still unclear what is learned from the
training process: the knowledge, the inference capability, or both? We argue
that, given the large scale of commonsense knowledge, it is infeasible to
annotate a training set large enough for each task to cover all the
commonsense required for learning. Thus, commonsense knowledge acquisition
and inference over that knowledge should be treated as two distinct tasks. In
this work, we focus on
investigating models' commonsense inference capabilities from two perspectives:
(1) whether models can tell if the knowledge they have is sufficient to solve
the task; (2) whether models can develop commonsense inference capabilities that
generalize across commonsense tasks. We first align commonsense tasks with
relevant knowledge from commonsense knowledge bases and ask humans to annotate
whether the knowledge is enough or not. Then, we convert different commonsense
tasks into a unified question answering format to evaluate models'
generalization capabilities. We name the benchmark Commonsense Inference with
Knowledge-in-the-loop Question Answering (CIKQA).
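The abstract describes aligning tasks with knowledge-base triples, annotating whether that knowledge suffices, and converting everything into one QA format. Below is a minimal sketch of what a single knowledge-in-the-loop instance might look like; the schema and field names are assumptions for illustration, since the paper does not publish one here.

```python
from dataclasses import dataclass, field

@dataclass
class CIKQAInstance:
    """One commonsense task instance in the unified QA format.

    Field names are hypothetical: the paper describes the format only
    conceptually (a question, candidate answers, aligned knowledge, and
    a human annotation of whether that knowledge suffices).
    """
    question: str                       # task instance rephrased as a question
    candidates: list[str]               # answer options
    answer: str                         # gold answer
    # Aligned (head, relation, tail) triples from a commonsense KB:
    knowledge: list[tuple[str, str, str]] = field(default_factory=list)
    knowledge_sufficient: bool = False  # annotator says the knowledge is enough


# A hypothetical instance converted from a pronoun-resolution task:
ex = CIKQAInstance(
    question="The trophy didn't fit in the suitcase because it was too big."
             " What was too big?",
    candidates=["the trophy", "the suitcase"],
    answer="the trophy",
    knowledge=[("trophy", "HasProperty", "big")],
    knowledge_sufficient=True,
)
```

Keeping the knowledge explicit and separate from the question is what lets the benchmark probe inference over knowledge rather than knowledge recall.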
Related papers
- What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them in three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that a large portion of instances in the two datasets do not involve commonsense knowledge, and that there is a large performance gap between the two subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
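The Beyond Factuality summary mentions six evaluation perspectives without listing them here. A sketch of what such an evaluator interface could look like follows; the perspective names are assumptions, not CONNER's official taxonomy.

```python
from typing import Callable, Dict

# A perspective scores one piece of generated knowledge for one question.
Perspective = Callable[[str, str], float]

def evaluate_knowledge(question: str, knowledge: str,
                       perspectives: Dict[str, Perspective]) -> Dict[str, float]:
    """Score the generated knowledge along every registered perspective."""
    return {name: fn(question, knowledge) for name, fn in perspectives.items()}

# Hypothetical perspective set (names are assumptions, not CONNER's taxonomy):
perspectives = {
    "factuality":      lambda q, k: 0.0,  # is the knowledge correct?
    "relevance":       lambda q, k: 0.0,  # does it bear on the question?
    "coherence":       lambda q, k: 0.0,  # is it internally consistent?
    "informativeness": lambda q, k: 0.0,  # does it add new information?
    "helpfulness":     lambda q, k: 0.0,  # does it help a downstream model?
    "validity":        lambda q, k: 0.0,  # does it lead to a valid answer?
}
```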
- Multi-hop Commonsense Knowledge Injection Framework for Zero-Shot Commonsense Question Answering [6.086719709100659]
We propose a novel multi-hop commonsense knowledge injection framework.
Our framework achieves state-of-the-art performance on five commonsense question answering benchmarks.
arXiv Detail & Related papers (2023-05-10T07:13:47Z)
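The multi-hop injection summary states the goal but not the mechanism. One plausible reading is sketched below, under the assumption of ConceptNet-style (head, relation, tail) triples and breadth-first hop expansion; this is not the authors' implementation.

```python
from collections import deque

def multi_hop_triples(kb, seed_concepts, max_hops=2, limit=10):
    """Breadth-first expansion over (head, relation, tail) triples.

    kb is assumed to be a dict mapping a concept to the triples whose head
    is that concept, e.g. kb["trophy"] = [("trophy", "HasProperty", "big")].
    """
    seen = set(seed_concepts)
    triples = []
    frontier = deque((c, 0) for c in seed_concepts)
    while frontier and len(triples) < limit:
        concept, hop = frontier.popleft()
        if hop == max_hops:
            continue  # do not expand past the hop budget
        for head, rel, tail in kb.get(concept, []):
            triples.append((head, rel, tail))
            if tail not in seen:
                seen.add(tail)
                frontier.append((tail, hop + 1))
    return triples[:limit]

def inject(question, triples):
    """Prepend retrieved facts to the question for a zero-shot QA model."""
    facts = ". ".join(f"{h} {r} {t}" for h, r, t in triples)
    return f"Knowledge: {facts}. Question: {question}"
```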
- DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering [34.70206857546496]
Question answering models commonly have access to two sources of "knowledge" at inference time: knowledge stored in their parameters and non-parametric knowledge given in the context.
It is often unclear which of the two sources an answer stems from.
We propose a new paradigm in which QA models are trained to disentangle the two sources of knowledge.
arXiv Detail & Related papers (2022-11-10T15:34:44Z)
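The DisentQA summary says models are trained to disentangle parametric from contextual knowledge, and its title points at counterfactual question answering. A sketch of the kind of training signal that implies follows; the prompt and target formats are assumptions.

```python
def make_training_pairs(question, context, contextual_answer, parametric_answer,
                        counterfactual_context, counterfactual_answer):
    """Yield (input, target) pairs that supervise both answer types.

    In the factual case the two answers agree; in the counterfactual case the
    context is edited to contradict the model's parametric knowledge, so the
    two answers must diverge. All formats here are illustrative.
    """
    yield (f"question: {question} context: {context}",
           f"contextual: {contextual_answer} parametric: {parametric_answer}")
    yield (f"question: {question} context: {counterfactual_context}",
           f"contextual: {counterfactual_answer} parametric: {parametric_answer}")
```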
- ArT: All-round Thinker for Unsupervised Commonsense Question-Answering [54.068032948300655]
We propose an All-round Thinker (ArT) approach that fully exploits associations during knowledge generation.
We evaluate it on three commonsense QA benchmarks: COPA, SocialIQA and SCT.
arXiv Detail & Related papers (2021-12-26T18:06:44Z) - Enhancing Question Generation with Commonsense Knowledge [33.289599417096206]
We propose a multi-task learning framework to introduce commonsense knowledge into the question generation process.
Experimental results on SQuAD show that our proposed methods noticeably improve QG performance on both automatic and human evaluation metrics.
arXiv Detail & Related papers (2021-06-19T08:58:13Z)
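The question-generation entry describes a multi-task framework. Below is a sketch of a generic multi-task objective of that shape; the auxiliary task and its weight are assumptions, not the paper's setup.

```python
import torch

def multi_task_loss(qg_loss: torch.Tensor,
                    aux_losses: dict,
                    weights: dict) -> torch.Tensor:
    """Weighted sum of the question-generation loss and auxiliary losses."""
    total = qg_loss
    for name, loss in aux_losses.items():
        total = total + weights.get(name, 1.0) * loss
    return total

# Hypothetical usage: a commonsense concept-relation task alongside QG.
# loss = multi_task_loss(qg_loss, {"concept_relation": rel_loss},
#                        {"concept_relation": 0.5})
```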
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
Among the most challenging question types in VQA are those that require outside knowledge not present in the image.
In this work we study open-domain knowledge: the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning: first, implicit knowledge, which transformer-based models can learn effectively from unsupervised language pre-training and supervised training data; second, symbolic knowledge drawn from explicit knowledge bases.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
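The KRISP summary names two knowledge branches, implicit and symbolic. One simple way their answer scores could be fused is sketched below; the max-fusion rule here is an assumption, not necessarily the paper's method.

```python
from typing import Dict

def fuse_answers(implicit_scores: Dict[str, float],
                 symbolic_scores: Dict[str, float]) -> str:
    """Pick the answer with the highest score across both branches.

    implicit_scores come from a transformer-based branch; symbolic_scores
    from knowledge-graph reasoning. Taking the per-answer max lets the
    symbolic branch surface answers the implicit branch never saw.
    """
    combined = dict(implicit_scores)
    for answer, score in symbolic_scores.items():
        combined[answer] = max(combined.get(answer, float("-inf")), score)
    return max(combined, key=combined.get)
```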
- Common Sense or World Knowledge? Investigating Adapter-Based Knowledge Injection into Pretrained Transformers [54.417299589288184]
We investigate models for complementing the distributional knowledge of BERT with conceptual knowledge from ConceptNet and its corresponding Open Mind Common Sense (OMCS) corpus.
Our adapter-based models substantially outperform BERT on inference tasks that require the type of conceptual knowledge explicitly present in ConceptNet and OMCS.
arXiv Detail & Related papers (2020-05-24T15:49:57Z)
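The adapter-injection entry does not spell out the adapter architecture. Below is a sketch of the standard bottleneck adapter this line of work builds on; hidden sizes, activation, and placement are assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the pretrained representations
        # intact; only the adapter parameters are trained on the knowledge
        # source (e.g., the OMCS corpus), leaving BERT's weights frozen.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```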