Toward Subgraph-Guided Knowledge Graph Question Generation with Graph
Neural Networks
- URL: http://arxiv.org/abs/2004.06015v4
- Date: Mon, 1 May 2023 03:15:49 GMT
- Title: Toward Subgraph-Guided Knowledge Graph Question Generation with Graph
Neural Networks
- Authors: Yu Chen, Lingfei Wu and Mohammed J. Zaki
- Abstract summary: Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
- Score: 53.58077686470096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph (KG) question generation (QG) aims to generate natural
language questions from KGs and target answers. Previous works mostly focus on
a simple setting which is to generate questions from a single KG triple. In
this work, we focus on a more realistic setting where we aim to generate
questions from a KG subgraph and target answers. In addition, most previous
works build on either RNN-based or Transformer-based models to encode a
linearized KG subgraph, which entirely discards the explicit structural
information of a KG subgraph. To address this issue, we propose to apply a
bidirectional Graph2Seq model to encode the KG subgraph. Furthermore, we
enhance our RNN decoder with a node-level copying mechanism that allows node
attributes to be copied directly from the KG subgraph to the output question. Both
automatic and human evaluation results demonstrate that our model achieves new
state-of-the-art scores, outperforming existing methods by a significant margin
on two QG benchmarks. Experimental results also show that our QG model can
consistently benefit the Question Answering (QA) task as a means of data
augmentation.
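To make the two modeling ideas concrete, here is a minimal PyTorch sketch of (a) a bidirectional graph encoder layer that aggregates incoming and outgoing edges separately before fusing both views, and (b) a node-level copy distribution mixed with the vocabulary distribution. All names, the dense-adjacency simplification, and the toy gating function are our own illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalGNNLayer(nn.Module):
    """One message-passing layer that aggregates along outgoing and
    incoming edges with separate weights, then fuses both directions
    (a hypothetical simplification of a bidirectional Graph2Seq layer)."""
    def __init__(self, dim):
        super().__init__()
        self.w_fwd = nn.Linear(dim, dim)   # aggregate along outgoing edges
        self.w_bwd = nn.Linear(dim, dim)   # aggregate along incoming edges
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # h: (num_nodes, dim); adj: (num_nodes, num_nodes) dense 0/1 matrix
        deg_f = adj.sum(-1, keepdim=True).clamp(min=1)
        deg_b = adj.t().sum(-1, keepdim=True).clamp(min=1)
        m_fwd = self.w_fwd(adj @ h) / deg_f        # forward-direction messages
        m_bwd = self.w_bwd(adj.t() @ h) / deg_b    # backward-direction messages
        return F.relu(self.fuse(torch.cat([m_fwd, m_bwd], dim=-1)))

def copy_distribution(dec_state, node_states, vocab_logits, node_to_vocab):
    """Mix a generation distribution with a node-level copy distribution,
    in the spirit of the node-level copying mechanism described above."""
    # attention of the decoder state over the encoded subgraph nodes
    attn = F.softmax(node_states @ dec_state, dim=-1)   # (num_nodes,)
    p_gen = torch.sigmoid(dec_state.sum())              # toy gating scalar
    p_vocab = F.softmax(vocab_logits, dim=-1)
    # scatter node attention into vocab space via node->token ids (torch.long)
    copy = torch.zeros_like(p_vocab).scatter_add(0, node_to_vocab, attn)
    return p_gen * p_vocab + (1 - p_gen) * copy
```

In a full model the copy gate would be computed from the decoder state, context vector, and previous token embedding; the scalar gate here only keeps the sketch self-contained.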
Related papers
- GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning [21.057810495833063]
We introduce GNN-RAG, a novel method for combining the language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style.
In our GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information.
Experiments show that GNN-RAG achieves state-of-the-art performance on two widely used KGQA benchmarks.
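A hedged sketch of how a GNN-as-retriever could feed an LLM prompt in this style; the cosine scorer, the fact verbalization, and all names below are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gnn_rag_prompt(node_emb, question_emb, triples, node_names, k=3):
    """Rank subgraph nodes with GNN-derived embeddings against the question,
    then verbalize facts touching the top nodes into an LLM prompt
    (illustrative only)."""
    scores = F.cosine_similarity(node_emb, question_emb.unsqueeze(0), dim=-1)
    top = set(scores.topk(min(k, scores.numel())).indices.tolist())
    # keep triples whose head or tail is a top-ranked node
    facts = [f"{node_names[h]} {r} {node_names[t]}"
             for h, r, t in triples if h in top or t in top]
    return "Facts:\n" + "\n".join(facts) + "\n\nAnswer using only these facts."
```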
arXiv Detail & Related papers (2024-05-30T15:14:24Z)
- Task-Oriented GNNs Training on Large Knowledge Graphs for Accurate and Efficient Modeling [5.460112864687281]
This paper proposes KG-TOSA, an approach to automate TOSG extraction for task-oriented HGNN training on a large Knowledge Graph (KG).
KG-TOSA helps state-of-the-art HGNN methods reduce training time and memory usage by up to 70% while improving the model performance, e.g., accuracy and inference time.
arXiv Detail & Related papers (2024-03-09T01:17:26Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs paired with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
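One plausible reading of a subgraph-aware self-attention mechanism is an attention mask that lets question tokens attend globally while subgraph nodes attend only along graph edges; the construction below is our guess at the general idea, not ReasoningLM's exact design.

```python
import torch

def subgraph_attention_mask(num_tokens, adj):
    """Build a boolean self-attention mask over [tokens | nodes]: question
    tokens attend everywhere, while nodes attend to the question and to
    their graph neighbors only, imitating GNN message passing."""
    n = adj.size(0)
    size = num_tokens + n
    mask = torch.zeros(size, size, dtype=torch.bool)
    mask[:num_tokens, :] = True                   # tokens see everything
    mask[num_tokens:, :num_tokens] = True         # nodes see the question
    mask[num_tokens:, num_tokens:] = adj.bool() | torch.eye(n, dtype=torch.bool)
    return mask  # True = attention allowed
```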
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering [19.491275771319074]
Commonsense question-answering (QA) methods combine the power of pre-trained Language Models (LMs) with the reasoning provided by Knowledge Graphs (KGs).
A typical approach collects nodes relevant to the QA pair from a KG to form a Working Graph (WG), followed by reasoning using Graph Neural Networks (GNNs).
We propose GrapeQA with two simple improvements to the WG: (i) Prominent Entities for Graph Augmentation identifies relevant text chunks from the QA pair and augments the WG with corresponding latent representations from the LM, and (ii) Context-Aware Node Pruning removes nodes that are less relevant to the QA pair, as sketched below.
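A minimal sketch of the pruning step alone, assuming node relevance can be approximated by cosine similarity to a pooled QA-context embedding (the scoring choice and names are ours, not GrapeQA's):

```python
import torch
import torch.nn.functional as F

def prune_working_graph(node_emb, context_emb, adj, keep_ratio=0.5):
    """Drop working-graph nodes whose similarity to the QA context is
    lowest; return the pruned node features, adjacency, and kept indices."""
    scores = F.cosine_similarity(node_emb, context_emb.unsqueeze(0), dim=-1)
    k = max(1, int(keep_ratio * node_emb.size(0)))
    keep = scores.topk(k).indices
    return node_emb[keep], adj[keep][:, keep], keep
```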
arXiv Detail & Related papers (2023-03-22T05:35:29Z)
- Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
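As an illustration of bidirectional fusion, the sketch below cross-attends text states over node states and vice versa; this is a simplification we wrote for clarity, not DRAGON's actual fusion layer.

```python
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    """Fuse a text encoder's states with KG node states in both
    directions via cross-attention, with residual connections."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.txt_to_kg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.kg_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_h, node_h):
        # text_h: (B, T, dim); node_h: (B, N, dim)
        text_out, _ = self.txt_to_kg(text_h, node_h, node_h)   # text reads KG
        node_out, _ = self.kg_to_txt(node_h, text_h, text_h)   # KG reads text
        return text_h + text_out, node_h + node_out
```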
arXiv Detail & Related papers (2022-10-17T18:02:52Z)
- Dynamic Relevance Graph Network for Knowledge-Aware Question Answering [22.06211725256875]
This work investigates the challenge of learning and reasoning for Commonsense Question Answering given an external source of knowledge.
We propose a novel graph neural network architecture, called Dynamic Relevance Graph Network (DRGN).
DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between the nodes to establish new edges.
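A rough sketch of the edge-creation idea, with cosine similarity standing in for whatever relevance scoring DRGN actually learns:

```python
import torch
import torch.nn.functional as F

def add_relevance_edges(node_emb, adj, threshold=0.8):
    """Compute pairwise relevance between node embeddings and add new
    edges wherever relevance exceeds a threshold (illustrative scorer)."""
    sim = F.cosine_similarity(node_emb.unsqueeze(1), node_emb.unsqueeze(0), dim=-1)
    new_edges = (sim > threshold).float()
    new_edges.fill_diagonal_(0)          # no self-loops from similarity
    return torch.clamp(adj + new_edges, max=1.0)
```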
arXiv Detail & Related papers (2022-09-20T18:52:05Z)
- Graph-augmented Learning to Rank for Querying Large-scale Knowledge Graph [34.774049199809426]
Knowledge graph question answering (i.e., KGQA) based on information retrieval aims to answer a question by retrieving answers from a large-scale knowledge graph.
We first propose to partition the retrieved KSG into several smaller sub-KSGs via a new subgraph partition algorithm.
We then present a graph-augmented learning to rank model to select the top-ranked sub-KSGs from them.
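A toy version of the ranking step, assuming each sub-KSG has already been encoded into a single embedding; the dot-product scorer is our placeholder for the paper's learned ranking model.

```python
import torch

def rank_subgraphs(sub_embs, question_emb, top_k=2):
    """Score each partitioned sub-KSG embedding against the question
    and return the indices of the top-ranked subgraphs."""
    scores = sub_embs @ question_emb                 # (num_subgraphs,)
    return scores.topk(min(top_k, sub_embs.size(0))).indices
```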
arXiv Detail & Related papers (2021-11-20T08:27:37Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- Knowledge-Enhanced Personalized Review Generation with Capsule Graph Neural Network [81.81662828017517]
We propose a knowledge-enhanced personalized review generation (PRG) model based on a capsule graph neural network (Caps-GNN).
Our generation process contains two major steps, namely aspect sequence generation and sentence generation.
The incorporated knowledge graph is able to enhance user preference modeling at both the aspect and word levels.
arXiv Detail & Related papers (2020-10-04T03:54:40Z)
- Semantic Graphs for Generating Deep Questions [98.5161888878238]
We propose a novel framework which first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN), sketched below.
On the HotpotQA deep-question-centric dataset, our model greatly improves performance on questions requiring reasoning over multiple facts, leading to state-of-the-art performance.
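For readers unfamiliar with this encoder family, here is a compact sketch of one attention-gated GGNN step (attention coefficients on edges, GRU state update); it follows the generic Att-GGNN recipe rather than the paper's exact equations.

```python
import torch
import torch.nn as nn

class AttGGNNLayer(nn.Module):
    """One attention-based GGNN step: neighbor messages are weighted by
    learned attention over graph edges, then node states are updated
    with a GRU cell."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.att = nn.Linear(2 * dim, 1)
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        # h: (num_nodes, dim); adj: (num_nodes, num_nodes) dense 0/1 matrix
        n = h.size(0)
        # attention logits for every ordered node pair, masked by the graph
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.att(pair).squeeze(-1).masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(logits, dim=-1).nan_to_num()  # isolated nodes -> 0
        m = alpha @ self.msg(h)          # attention-weighted neighbor messages
        return self.gru(m, h)            # gated (GRU) state update
```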
arXiv Detail & Related papers (2020-04-27T10:52:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.