Contextualized Knowledge-aware Attentive Neural Network: Enhancing
Answer Selection with Knowledge
- URL: http://arxiv.org/abs/2104.05216v1
- Date: Mon, 12 Apr 2021 05:52:20 GMT
- Title: Contextualized Knowledge-aware Attentive Neural Network: Enhancing
Answer Selection with Knowledge
- Authors: Yang Deng, Yuexiang Xie, Yaliang Li, Min Yang, Wai Lam, Ying Shen
- Abstract summary: We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns QA sentence representations by modeling a tight interaction between the external knowledge from the KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations via a multi-view knowledge-aware attention mechanism.
- Score: 77.77684299758494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Answer selection, which is involved in many natural language processing
applications such as dialog systems and question answering (QA), is an
important yet challenging task in practice, since conventional methods
typically ignore diverse real-world background knowledge. In this paper, we
extensively investigate approaches to enhancing the answer selection model
with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework,
Knowledge-aware Neural Network (KNN), which learns QA sentence
representations by modeling a tight interaction between the external knowledge
from the KG and the textual information. Then, we develop two kinds of
knowledge-aware attention mechanisms to summarize both the context-based and
knowledge-based interactions between questions and answers. To handle the
diversity and complexity of KG information, we further propose a Contextualized
Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge
representation learning with structure information via a customized Graph
Convolutional Network (GCN) and comprehensively learns context-based and
knowledge-based sentence representations via the multi-view knowledge-aware
attention mechanism. We evaluate our method on four widely used benchmark QA
datasets: WikiQA, TREC QA, InsuranceQA, and Yahoo QA. The results verify
the benefits of incorporating external knowledge from a KG and demonstrate
the consistent superiority and broad applicability of our method.
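For readers who want a concrete picture of the two ideas the abstract names, the sketch below illustrates (i) one GCN propagation step for learning structure-aware KG entity embeddings and (ii) a knowledge-aware attention layer that fuses a context-based view (token embeddings) with a knowledge-based view (entity embeddings aligned to each token). This is a minimal sketch in PyTorch, not the authors' released code; all class, parameter, and dimension names (SimpleGCNLayer, KnowledgeAwareAttention, ctx_dim, kg_dim, hidden) are hypothetical.
```python
# Minimal sketch (NOT the paper's implementation): one GCN step plus a
# knowledge-aware attention layer. All names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One propagation step H' = ReLU(D^{-1}(A + I) H W) over a KG subgraph,
    yielding structure-aware entity embeddings."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) entity features; adj: (N, N) 0/1 adjacency matrix.
        adj = adj + torch.eye(adj.size(0))                 # add self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # node degrees
        return F.relu(self.lin((adj / deg) @ h))           # normalized propagation


class KnowledgeAwareAttention(nn.Module):
    """Attends over answer tokens using both contextual token embeddings and
    KG entity embeddings aligned to each token, then pools a question-guided
    answer summary."""

    def __init__(self, ctx_dim: int, kg_dim: int, hidden: int = 128):
        super().__init__()
        self.ctx_proj = nn.Linear(ctx_dim, hidden, bias=False)
        self.kg_proj = nn.Linear(kg_dim, hidden, bias=False)
        self.score = nn.Linear(hidden, 1, bias=False)

    def forward(self, q_ctx, q_kg, a_ctx, a_kg):
        # q_ctx: (B, Lq, ctx_dim), q_kg: (B, Lq, kg_dim); likewise for a_*.
        q = self.ctx_proj(q_ctx) + self.kg_proj(q_kg)  # fuse the two views
        a = self.ctx_proj(a_ctx) + self.kg_proj(a_kg)
        q_vec = q.mean(dim=1, keepdim=True)            # (B, 1, hidden) question summary
        logits = self.score(torch.tanh(a + q_vec))     # (B, La, 1)
        weights = F.softmax(logits, dim=1)             # attention over answer tokens
        return (weights * a).sum(dim=1)                # (B, hidden) answer summary


# Toy usage with random tensors (batch=2, Lq=5, La=7):
q_ctx, a_ctx = torch.randn(2, 5, 300), torch.randn(2, 7, 300)
q_kg, a_kg = torch.randn(2, 5, 100), torch.randn(2, 7, 100)
layer = KnowledgeAwareAttention(ctx_dim=300, kg_dim=100)
answer_summary = layer(q_ctx, q_kg, a_ctx, a_kg)  # shape: (2, 128)
```
Per the abstract, the actual CKANN couples a customized GCN with a multi-view knowledge-aware attention mechanism; the sketch above only conveys the general shape of fusing contextual and KG-based views.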
Related papers
- Knowledge Condensation and Reasoning for Knowledge-based VQA [20.808840633377343]
Recent studies retrieve knowledge passages from external knowledge bases and then use them to answer questions.
We propose two synergistic models: Knowledge Condensation model and Knowledge Reasoning model.
Our method achieves state-of-the-art performance on knowledge-based VQA datasets.
arXiv Detail & Related papers (2024-03-15T06:06:06Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z)
- VQA-GNN: Reasoning with Multimodal Knowledge via Graph Neural Networks for Visual Question Answering [79.22069768972207]
We propose VQA-GNN, a new VQA model that performs bidirectional fusion between unstructured and structured multimodal knowledge to obtain unified knowledge representations.
Specifically, we inter-connect the scene graph and the concept graph through a super node that represents the QA context.
On two challenging VQA tasks, our method outperforms strong baseline VQA methods by 3.2% on VCR and 4.6% on GQA, suggesting its strength in performing concept-level reasoning.
arXiv Detail & Related papers (2022-05-23T17:55:34Z)
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- Coarse-to-Careful: Seeking Semantic-related Knowledge for Open-domain Commonsense Question Answering [12.406729445165857]
It is prevalent to utilize external knowledge to help machines answer questions that require background commonsense.
We propose a semantic-driven knowledge-aware QA framework, which controls the knowledge injection in a coarse-to-careful fashion.
arXiv Detail & Related papers (2021-07-04T10:56:36Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.