Improving Question Answering over Knowledge Graphs Using Graph
Summarization
- URL: http://arxiv.org/abs/2203.13570v1
- Date: Fri, 25 Mar 2022 10:57:10 GMT
- Title: Improving Question Answering over Knowledge Graphs Using Graph
Summarization
- Authors: Sirui Li, Kok Kai Wong, Dengya Zhu, Chun Che Fung
- Abstract summary: The key idea is to represent questions and entities of a Knowledge Graph as low-dimensional embeddings.
We propose a graph summarization technique using Recurrent Convolutional Neural Network (RCNN) and GCN.
The proposed graph summarization technique can be used to tackle the issue that KGQAs cannot answer questions with an uncertain number of answers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question Answering (QA) systems over Knowledge Graphs (KGs) (KGQA)
automatically answer natural language questions using triples contained in a
KG. The key idea is to represent questions and entities of a KG as
low-dimensional embeddings. Previous KGQAs have attempted to represent entities
using Knowledge Graph Embedding (KGE) and Deep Learning (DL) methods. However,
KGEs are too shallow to capture expressive features, and DL methods process
each triple independently. Recently, Graph Convolutional Networks (GCNs) have
been shown to provide excellent entity embeddings. However, applying GCNs to
KGQAs is inefficient because GCNs treat all relations equally when aggregating
neighbourhoods. A further problem with previous KGQAs is that questions often
have an uncertain number of answers. To address the
above issues, we propose a graph summarization technique using Recurrent
Convolutional Neural Network (RCNN) and GCN. Combining GCN and RCNN ensures
that embeddings are propagated along the relations relevant to the question,
and thus yields better answers. The proposed graph summarization
technique can be used to tackle the issue that KGQAs cannot answer questions
with an uncertain number of answers. In this paper, we demonstrate the
proposed technique on the most common type of question, single-relation
questions. Experiments show that the proposed graph summarization technique
using RCNN and GCN provides better results than GCN alone. The proposed
graph summarization technique
significantly improves the recall of actual answers when the questions have an
uncertain number of answers.
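The contrast the abstract draws can be illustrated with a toy sketch. Note this is not the paper's model: the actual method encodes the question with an RCNN and learns relation relevance end-to-end, whereas the snippet below uses a made-up three-triple KG, hand-crafted one-hot relation embeddings, and a simple dot-product relevance score, purely to show how question-aware relation weighting changes a GCN-style aggregation step.

```python
import numpy as np

# Toy KG for a movie question, e.g. "Who directed Inception?".
# Entities: 0 = Inception, 1 = Nolan, 2 = DiCaprio, 3 = 2010.
triples = [
    (0, "directed_by", 1),
    (0, "starred_in", 2),
    (0, "released_in", 3),
]

dim = 4
rng = np.random.default_rng(0)
entity_emb = rng.normal(size=(4, dim))          # one row per entity

# One-hot relation embeddings keep the example deterministic.
relation_emb = {
    "directed_by": np.array([1.0, 0.0, 0.0, 0.0]),
    "starred_in":  np.array([0.0, 1.0, 0.0, 0.0]),
    "released_in": np.array([0.0, 0.0, 1.0, 0.0]),
}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate(node, question_vec=None):
    """One aggregation step for `node`. With question_vec=None every
    relation is weighted equally (the plain-GCN behaviour the abstract
    criticises); otherwise each neighbour's message is weighted by how
    relevant its relation is to the question."""
    nbrs = [(r, t) for h, r, t in triples if h == node]
    if question_vec is None:
        weights = np.full(len(nbrs), 1.0 / len(nbrs))
    else:
        scores = np.array([relation_emb[r] @ question_vec for r, _ in nbrs])
        weights = softmax(scores)
    msg = sum(w * entity_emb[t] for w, (_, t) in zip(weights, nbrs))
    return 0.5 * (entity_emb[node] + msg)       # mix self and neighbour messages

# A "who directed ...?" question vector aligned with the directed_by relation.
q = relation_emb["directed_by"]
plain = aggregate(0)              # uniform: all three edges count equally
relation_aware = aggregate(0, q)  # the directed_by edge dominates the message
```

With uniform weighting, the embedding for entity 0 mixes in the actor and the release year as strongly as the director; the question-weighted variant pushes the aggregated embedding toward the neighbour reached by the question-relevant relation.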
Related papers
- One Model, Any Conjunctive Query: Graph Neural Networks for Answering Complex Queries over Knowledge Graphs [7.34044245579928]
We propose AnyCQ, a graph neural network model that can classify answers to any conjunctive query on any knowledge graph.
We show that AnyCQ can generalize to large queries of arbitrary structure, reliably classifying and retrieving answers to samples where existing approaches fail.
arXiv Detail & Related papers (2024-09-21T00:30:44Z) - QAGCF: Graph Collaborative Filtering for Q&A Recommendation [58.21387109664593]
Question and answer (Q&A) platforms usually recommend question-answer pairs to meet users' knowledge acquisition needs.
This makes user behaviors more complex, and presents two challenges for Q&A recommendation.
We introduce Question & Answer Graph Collaborative Filtering (QAGCF), a graph neural network model that creates separate graphs for collaborative and semantic views.
arXiv Detail & Related papers (2024-06-07T10:52:37Z) - GNN2R: Weakly-Supervised Rationale-Providing Question Answering over
Knowledge Graphs [13.496565392976292]
We propose a novel Graph Neural Network-based Two-Step Reasoning model (GNN2R) to solve this issue.
GNN2R can provide both final answers and reasoning subgraphs as a rationale behind final answers efficiently with only weak supervision.
arXiv Detail & Related papers (2023-12-04T19:58:07Z) - Graph Reasoning for Question Answering with Triplet Retrieval [33.454090126152714]
We propose a simple yet effective method to retrieve the most relevant triplets from knowledge graphs (KGs).
Our method can outperform the state of the art by up to 4.6% absolute accuracy.
arXiv Detail & Related papers (2023-05-30T04:46:28Z) - Logical Message Passing Networks with One-hop Inference on Atomic
Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z) - Dynamic Relevance Graph Network for Knowledge-Aware Question Answering [22.06211725256875]
This work investigates the challenge of learning and reasoning for Commonsense Question Answering given an external source of knowledge.
We propose a novel graph neural network architecture called Dynamic Relevance Graph Network (DRGN).
DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between the nodes to establish new edges.
arXiv Detail & Related papers (2022-09-20T18:52:05Z) - Explainable Sparse Knowledge Graph Completion via High-order Graph
Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z) - QAGCN: Answering Multi-Relation Questions via Single-Step Implicit Reasoning over Knowledge Graphs [12.354648004427824]
Multi-relation question answering (QA) is a challenging task.
Recent methods with explicit multi-step reasoning over KGs have been prominently used in this task.
We argue that multi-relation QA can be achieved via end-to-end single-step implicit reasoning.
arXiv Detail & Related papers (2022-06-03T21:01:48Z) - QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question
Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z) - Graph-Based Tri-Attention Network for Answer Ranking in CQA [56.42018099917321]
We propose a novel graph-based tri-attention network, namely GTAN, to generate answer ranking scores.
Experiments on three real-world CQA datasets demonstrate GTAN significantly outperforms state-of-the-art answer ranking methods.
arXiv Detail & Related papers (2021-03-05T10:40:38Z) - Toward Subgraph-Guided Knowledge Graph Question Generation with Graph
Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.