Coarse-to-Careful: Seeking Semantic-related Knowledge for Open-domain
Commonsense Question Answering
- URL: http://arxiv.org/abs/2107.01592v1
- Date: Sun, 4 Jul 2021 10:56:36 GMT
- Title: Coarse-to-Careful: Seeking Semantic-related Knowledge for Open-domain
Commonsense Question Answering
- Authors: Luxi Xing, Yue Hu, Jing Yu, Yuqiang Xie, Wei Peng
- Abstract summary: It is prevalent to utilize external knowledge to help machines answer questions that require background commonsense.
We propose a semantic-driven knowledge-aware QA framework, which controls the knowledge injection in a coarse-to-careful fashion.
- Score: 12.406729445165857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is prevalent to utilize external knowledge to help machines
answer questions that require background commonsense, which faces the problem
that unrestricted knowledge transmits noisy and misleading information. To
address the issue of introducing related knowledge, we propose a
semantic-driven knowledge-aware QA framework, which controls knowledge
injection in a coarse-to-careful fashion. We devise a tailoring strategy that
filters extracted knowledge under the monitoring of the coarse semantics of
the question at the knowledge extraction stage. We also develop a
semantic-aware knowledge fetching module that engages structural knowledge
information and fuses proper knowledge according to the careful semantics of
the question in a hierarchical way. Experiments demonstrate that the proposed
approach improves performance on the CommonsenseQA dataset compared with
strong baselines.
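The two-stage idea in the abstract can be illustrated with a toy sketch: a coarse, extraction-stage filter that tailors knowledge triples by overlap with the question, followed by a more careful relevance score used for fusion. All names and the Jaccard scorer below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of the coarse-to-careful idea: first filter extracted
# knowledge triples by overlap with the question's coarse semantics, then
# score the survivors more carefully before fusion.

def coarse_filter(question_tokens, triples, min_overlap=1):
    """Keep only triples sharing at least `min_overlap` tokens with the
    question (the coarse, extraction-stage tailoring)."""
    q = set(question_tokens)
    return [t for t in triples if len(q & set(t)) >= min_overlap]

def careful_score(question_tokens, triple):
    """Toy 'careful' relevance score: Jaccard similarity between question
    and triple tokens (a stand-in for a learned semantic scorer)."""
    q, t = set(question_tokens), set(triple)
    return len(q & t) / len(q | t)

def fetch_knowledge(question_tokens, triples, top_k=2):
    """Coarse filter, then rank by the careful score and keep the top k."""
    kept = coarse_filter(question_tokens, triples)
    kept.sort(key=lambda t: careful_score(question_tokens, t), reverse=True)
    return kept[:top_k]

question = ["where", "do", "you", "store", "books"]
triples = [
    ("books", "AtLocation", "shelf"),
    ("car", "UsedFor", "driving"),
    ("shelf", "UsedFor", "store", "books"),
]
print(fetch_knowledge(question, triples))
```

The coarse stage drops the unrelated `("car", "UsedFor", "driving")` triple outright; the careful stage then orders the survivors, so the fusion step only ever sees a small, question-relevant set.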
Related papers
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Rainier: Reinforced Knowledge Introspector for Commonsense Question
Answering [74.90418840431425]
We present Rainier, or Reinforced Knowledge Introspector, that learns to generate contextually relevant knowledge in response to given questions.
Our approach starts by imitating knowledge generated by GPT-3, then learns to generate its own knowledge via reinforcement learning.
Our work is the first to report that knowledge generated by models that are orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of knowledge elicited from GPT-3 for commonsense QA.
arXiv Detail & Related papers (2022-10-06T17:34:06Z)
- Uncertainty-based Visual Question Answering: Estimating Semantic
Inconsistency between Image and Knowledge Base [0.7081604594416336]
KVQA task aims to answer questions that require additional external knowledge as well as an understanding of images and questions.
Recent studies on KVQA inject external knowledge in a multi-modal form, and as more knowledge is used, irrelevant information may be added and can confuse question answering.
arXiv Detail & Related papers (2022-07-27T01:58:29Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing
Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
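The GCN component mentioned above can be sketched in a few lines. This is a minimal, generic graph-convolution step over a small knowledge subgraph, assuming mean-normalized neighbor aggregation with self-loops; it illustrates how structure information flows into entity representations, not CKANN's exact formulation.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One generic GCN layer: H' = ReLU(D^-1 (A + I) H W).
    A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # degree of each node
    H_new = (A_hat / deg) @ H @ W            # mean-aggregate neighbors, project
    return np.maximum(H_new, 0.0)            # ReLU nonlinearity

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                 # a 3-entity chain subgraph
H = rng.normal(size=(3, 4))                  # initial entity embeddings
W = rng.normal(size=(4, 4))                  # learnable projection
H1 = gcn_layer(A, H, W)
print(H1.shape)  # (3, 4)
```

Stacking such layers lets each entity representation absorb information from progressively larger graph neighborhoods before it is attended to by the sentence encoder.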
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method to overcome the catastrophic forgetting problem.
The comprehensive experiments demonstrate its effectiveness and efficiency when working with the evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain
Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- Question Answering over Knowledge Base using Language Model Embeddings [0.0]
This paper focuses on using a pre-trained language model for the Knowledge Base Question Answering task.
We further fine-tune these embeddings with a two-way attention mechanism from the knowledge base to the asked question.
Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question.
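A simplified stand-in for such a question encoder is sketched below: a 1-D convolution over token embeddings followed by scaled dot-product self-attention. For brevity this uses a single attention head with identity projections, whereas the described model uses multi-head attention; all shapes and names here are assumptions.

```python
import numpy as np

def conv1d(X, K):
    """Valid 1-D convolution over the token axis.
    X: (seq, d), K: (width, d, d_out) -> (seq - width + 1, d_out)."""
    w = K.shape[0]
    return np.stack([np.einsum('wd,wdo->o', X[i:i + w], K)
                     for i in range(X.shape[0] - w + 1)])

def self_attention(X):
    """Scaled dot-product self-attention (single head, identity
    query/key/value projections for simplicity)."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 8))       # 6 question tokens, embedding dim 8
kernel = rng.normal(size=(3, 8, 8))    # width-3 convolutional filters
features = conv1d(tokens, kernel)      # local n-gram features: (4, 8)
question_repr = self_attention(features).mean(axis=0)  # pooled vector: (8,)
print(question_repr.shape)
```

The convolution captures local n-gram patterns in the question while the attention step lets every position weigh every other, and mean-pooling yields a single fixed-size question representation for matching against knowledge-base candidates.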
arXiv Detail & Related papers (2020-10-17T22:59:34Z)
- Knowledge Fusion and Semantic Knowledge Ranking for Open Domain Question
Answering [33.920269584939334]
Open Domain Question Answering requires systems to retrieve external knowledge and perform multi-hop reasoning.
We learn a semantic knowledge ranking model to re-rank knowledge retrieved through Lucene-based information retrieval systems.
We propose a "knowledge fusion model" which leverages knowledge in BERT-based language models with externally retrieved knowledge.
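The re-ranking step described here can be pictured as follows: sentences retrieved by a lexical engine (Lucene in the paper) are re-scored against the question, and only the top results are passed to the fusion model. The bag-of-words cosine scorer below is a toy stand-in for the learned semantic ranking model; the question and sentences are invented examples.

```python
from collections import Counter
import math

def cosine(a, b):
    """Bag-of-words cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(question, retrieved, top_k=2):
    """Re-score lexically retrieved sentences against the question and
    keep the top_k for the downstream fusion model."""
    scored = sorted(retrieved,
                    key=lambda s: cosine(question.split(), s.split()),
                    reverse=True)
    return scored[:top_k]

question = "why do birds migrate south in winter"
retrieved = [
    "birds migrate south to find food in winter",
    "lucene is a search engine library",
    "winter weather reduces the food supply for birds",
]
print(rerank(question, retrieved))
```

The re-ranker promotes semantically relevant sentences and discards lexical false positives, so the fusion model combines the language model's implicit knowledge with only the most question-relevant external evidence.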
arXiv Detail & Related papers (2020-04-07T03:16:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.