Benchmarking Knowledge-Enhanced Commonsense Question Answering via
Knowledge-to-Text Transformation
- URL: http://arxiv.org/abs/2101.00760v2
- Date: Tue, 5 Jan 2021 03:32:41 GMT
- Title: Benchmarking Knowledge-Enhanced Commonsense Question Answering via
Knowledge-to-Text Transformation
- Authors: Ning Bian, Xianpei Han, Bo Chen, Le Sun
- Abstract summary: We investigate how far we can get by exploiting external knowledge for Commonsense Question Answering (CQA).
We benchmark knowledge-enhanced CQA using a simple and effective knowledge-to-text transformation framework.
Experiments show that our knowledge-to-text framework is effective and achieves state-of-the-art performance on the CommonsenseQA dataset.
- Score: 30.38055266965927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental ability of humans is to utilize commonsense knowledge in
language understanding and question answering. In recent years, many
knowledge-enhanced Commonsense Question Answering (CQA) approaches have been
proposed. However, it remains unclear: (1) How far can we get by exploiting
external knowledge for CQA? (2) How much potential of knowledge has been
exploited in current CQA models? (3) Which are the most promising directions
for future CQA? To answer these questions, we benchmark knowledge-enhanced CQA
by conducting extensive experiments on multiple standard CQA datasets using a
simple and effective knowledge-to-text transformation framework. Experiments
show that: (1) Our knowledge-to-text framework is effective and achieves
state-of-the-art performance on the CommonsenseQA dataset, providing a simple and
strong knowledge-enhanced baseline for CQA; (2) The potential of knowledge is
still far from being fully exploited in CQA -- there is a significant
performance gap from current models to our models with golden knowledge; and
(3) Context-sensitive knowledge selection, heterogeneous knowledge
exploitation, and commonsense-rich language models are promising CQA
directions.
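To make the framework concrete, here is a minimal sketch of the knowledge-to-text idea: verbalize KG triples with relation templates and prepend the resulting sentences to the QA input, so an ordinary pretrained LM consumes the knowledge as plain text. The triples, templates, and separator token below are illustrative assumptions, not the paper's exact resources.

```python
# Illustrative knowledge-to-text transformation (templates and triples are
# assumptions for this sketch, not the paper's exact resources).

# ConceptNet-style triples: (head, relation, tail)
TRIPLES = [
    ("bird", "CapableOf", "fly"),
    ("bird", "AtLocation", "nest"),
]

# One natural-language template per relation type.
TEMPLATES = {
    "CapableOf": "A {head} can {tail}.",
    "AtLocation": "A {head} is typically found in a {tail}.",
}

def triples_to_text(triples):
    """Verbalize KG triples into evidence sentences."""
    return " ".join(TEMPLATES[r].format(head=h, tail=t) for h, r, t in triples)

def build_input(question, choice):
    """Prepend verbalized knowledge so a standard LM sees it as plain text."""
    return f"{triples_to_text(TRIPLES)} [SEP] {question} [SEP] {choice}"

print(build_input("Where does a bird usually live?", "nest"))
```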
Related papers
- Knowledge Condensation and Reasoning for Knowledge-based VQA [20.808840633377343]
Recent studies retrieve knowledge passages from external knowledge bases and then use them to answer questions.
We propose two synergistic models: a Knowledge Condensation model and a Knowledge Reasoning model.
Our method achieves state-of-the-art performance on knowledge-based VQA datasets.
arXiv Detail & Related papers (2024-03-15T06:06:06Z)
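A toy sketch of the condense-then-reason pipeline described in the entry above; the retriever, condensation step, and reasoning step are simple stand-ins for the paper's learned models.

```python
# Toy condense-then-reason pipeline (all components are stand-ins).

def retrieve(question, kb, k=2):
    """Rank KB passages by word overlap with the question."""
    q = set(question.lower().split())
    return sorted(kb, key=lambda p: len(q & set(p.lower().split())), reverse=True)[:k]

def condense(passages, question):
    """Stand-in for the Knowledge Condensation model: drop passages that
    share no words with the question."""
    q = set(question.lower().split())
    return [p for p in passages if q & set(p.lower().split())]

def reason(question, knowledge):
    """Stand-in for the Knowledge Reasoning model (e.g., an LLM call)."""
    return f"answer({question!r}) from {len(knowledge)} condensed passage(s)"

kb = ["Rain makes roads wet.", "Birds can fly.", "Wet roads are slippery."]
q = "why are the roads wet"
print(reason(q, condense(retrieve(q, kb), q)))
```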
- Knowledge Generation for Zero-shot Knowledge-based VQA [20.674979268279728]
Previous solutions to knowledge-based visual question answering (K-VQA) retrieve knowledge from external knowledge bases and use supervised learning to train the K-VQA model.
We propose and test a similar knowledge-generation-based K-VQA method, which first generates knowledge from an LLM and then incorporates the generated knowledge for K-VQA in a zero-shot manner.
arXiv Detail & Related papers (2024-02-04T15:41:35Z)
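The generate-then-answer recipe in the entry above might look roughly like the following sketch, where `llm` is a placeholder for any text-completion API and the prompts are assumptions of mine.

```python
# Zero-shot K-VQA via generated knowledge (prompts and `llm` are placeholders).

def generate_knowledge(llm, caption, question):
    prompt = (f"Image: {caption}\nQuestion: {question}\n"
              "Relevant background facts:")
    return llm(prompt)

def answer(llm, caption, question):
    knowledge = generate_knowledge(llm, caption, question)
    return llm(f"{knowledge}\nImage: {caption}\nQuestion: {question}\nAnswer:")

# Trivial stand-in so the sketch runs end to end.
toy_llm = lambda prompt: f"<completion for a {len(prompt)}-char prompt>"
print(answer(toy_llm, "a player holding a racket", "What sport is this?"))
```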
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
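In spirit, generate-then-retrieve inverts the usual order: the fine-tuned LLM first drafts a logical form with surface-form slots, and entity/relation retrieval then grounds the slots in the KB. The toy dictionaries and draft below are illustrative, not ChatKBQA's actual logical-form language or identifiers.

```python
# Toy generate-then-retrieve grounding (identifiers are made up).

KB_ENTITIES = {"france": "m.0f8l9c"}
KB_RELATIONS = {"capital of": "location.country.capital"}

def generate_draft(question):
    """Placeholder for the fine-tuned LLM's ungrounded logical form."""
    return {"relation": "capital of", "entity": "france"}

def ground(draft):
    """Retrieve-and-substitute: map surface phrases to KB identifiers."""
    return {"relation": KB_RELATIONS[draft["relation"]],
            "entity": KB_ENTITIES[draft["entity"]]}

print(ground(generate_draft("What is the capital of France?")))
```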
- Distinguish Before Answer: Generating Contrastive Explanation as Knowledge for Commonsense Question Answering [61.53454387743701]
We propose CPACE, a concept-centric Prompt-bAsed Contrastive Explanation Generation model.
CPACE converts obtained symbolic knowledge into a contrastive explanation to better distinguish the differences among the given candidates.
We conduct a series of experiments on three widely-used question-answering datasets: CSQA, QASC, and OBQA.
arXiv Detail & Related papers (2023-05-14T12:12:24Z)
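As a rough illustration of the contrastive idea above (not CPACE's actual prompting setup), one can verbalize the facts that separate the correct candidate from a distractor:

```python
# Toy contrastive explanation built from symbolic facts (illustrative only).

def contrastive_explanation(gold, distractor, facts):
    pro = " ".join(f for f in facts if gold in f)
    con = " ".join(f for f in facts if distractor in f)
    return f"'{gold}' fits: {pro} Unlike '{distractor}': {con}"

facts = ["A nest is where a bird raises its young.",
         "A cage confines a bird kept by people."]
print(contrastive_explanation("nest", "cage", facts))
```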
- Utilizing Background Knowledge for Robust Reasoning over Traffic Situations [63.45021731775964]
We focus on a complementary research aspect of Intelligent Transportation: traffic understanding.
We scope our study to text-based methods and datasets, given the abundance of commonsense knowledge they provide.
We adopt three knowledge-driven approaches for zero-shot QA over traffic situations.
arXiv Detail & Related papers (2022-12-04T09:17:24Z)
- Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering [74.90418840431425]
We present Rainier, or Reinforced Knowledge Introspector, that learns to generate contextually relevant knowledge in response to given questions.
Our approach starts by imitating knowledge generated by GPT-3, then learns to generate its own knowledge via reinforcement learning.
Our work is the first to report that knowledge generated by models that are orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of knowledge elicited from GPT-3 for commonsense QA.
arXiv Detail & Related papers (2022-10-06T17:34:06Z)
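The RL signal behind Rainier-style training can be read as rewarding a generated knowledge snippet by the boost it gives a frozen QA model; the scoring function below is a toy stand-in for that idea, not Rainier's implementation.

```python
# Toy version of a QA-utility reward for generated knowledge.

def qa_confidence(question, knowledge, answer):
    """Stand-in QA model: confidence grows with knowledge/answer overlap."""
    return len(set(knowledge.split()) & set(answer.split()))

def knowledge_reward(question, knowledge, gold):
    """Reward = confidence gain over answering without the knowledge."""
    return qa_confidence(question, knowledge, gold) - qa_confidence(question, "", gold)

print(knowledge_reward("Where do birds live?", "birds build and live in a nest", "nest"))
# -> 1 : the snippet helps, so it earns a positive reward
```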
- Asking for Knowledge: Training RL Agents to Query External Knowledge Using Language [121.56329458876655]
We introduce two new environments: the grid-world-based Q-BabyAI and the text-based Q-TextWorld.
We propose the "Asking for Knowledge" (AFK) agent, which learns to generate language commands to query for meaningful knowledge.
arXiv Detail & Related papers (2022-05-12T14:20:31Z)
- Enhancing Question Generation with Commonsense Knowledge [33.289599417096206]
We propose a multi-task learning framework to introduce commonsense knowledge into the question generation process.
Experimental results on SQuAD show that our proposed methods are able to noticeably improve the QG performance on both automatic and human evaluation metrics.
arXiv Detail & Related papers (2021-06-19T08:58:13Z)
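The multi-task setup above can be summarized as a weighted sum of the question-generation loss and commonsense auxiliary losses; the weighting scheme here is a generic assumption, not the paper's exact objective.

```python
# Generic multi-task objective sketch (weights are illustrative).

def multitask_loss(qg_loss, aux_losses, weights):
    """QG loss plus weighted auxiliary commonsense-task losses."""
    return qg_loss + sum(w * l for w, l in zip(weights, aux_losses))

print(multitask_loss(2.30, aux_losses=[0.70, 1.10], weights=[0.5, 0.5]))  # 3.2
```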
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
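The structural-encoding step above can be pictured as one round of symmetrically normalized GCN propagation over the retrieved KG subgraph; this is a textbook GCN layer, not CKANN's customized variant.

```python
import numpy as np

# One textbook GCN layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W).
def gcn_layer(A, X, W):
    A_hat = A + np.eye(A.shape[0])        # adjacency with self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

A = np.array([[0.0, 1.0], [1.0, 0.0]])    # two linked KG concepts
X = np.ones((2, 4))                        # toy node (concept) features
W = np.full((4, 3), 0.1)                   # toy weight matrix
print(gcn_layer(A, X, W))                  # shape (2, 3)
```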
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representation and reasoning: first, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models; and second, symbolic knowledge encoded in knowledge bases.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
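One way to read the implicit-plus-symbolic design above is as combining two answer scorers, one from the transformer and one from the knowledge base; KRISP's actual fusion differs, so treat this as a schematic.

```python
# Schematic fusion of implicit (transformer) and symbolic (KB) answer scores.

def fuse(implicit, symbolic):
    """Take the stronger of the two knowledge sources per candidate answer."""
    answers = set(implicit) | set(symbolic)
    return {a: max(implicit.get(a, 0.0), symbolic.get(a, 0.0)) for a in answers}

scores = fuse(implicit={"umbrella": 0.55, "hat": 0.30},
              symbolic={"umbrella": 0.80})
print(max(scores, key=scores.get))  # umbrella
```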