Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph
- URL: http://arxiv.org/abs/2009.11692v1
- Date: Thu, 24 Sep 2020 13:55:32 GMT
- Title: Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph
- Authors: Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, Minlie Huang
- Abstract summary: We argue that exploiting both the structural and semantic information of the knowledge graph facilitates commonsense-aware text generation.
We propose Generation with Multi-Hop Reasoning Flow (GRF), which enables pre-trained models to perform dynamic multi-hop reasoning on multi-relational paths extracted from an external commonsense knowledge graph.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of generative pre-trained language models on a series of
text generation tasks, they still suffer in cases where reasoning over
underlying commonsense knowledge is required during generation. Existing
approaches that integrate commonsense knowledge into generative pre-trained
language models simply transfer relational knowledge by post-training on
individual knowledge triples while ignoring rich connections within the
knowledge graph. We argue that exploiting both the structural and semantic
information of the knowledge graph facilitates commonsense-aware text
generation. In this paper, we propose Generation with Multi-Hop Reasoning Flow
(GRF), which enables pre-trained models to perform dynamic multi-hop reasoning
on multi-relational paths extracted from an external commonsense knowledge
graph. We empirically show that our model outperforms existing baselines on
three text generation tasks that require reasoning over commonsense knowledge.
We also demonstrate the effectiveness of the dynamic multi-hop reasoning module
through the reasoning paths inferred by the model, which provide rationales for
the generated text.
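As an illustration of the core idea, the following is a minimal sketch of GRF-style multi-hop score propagation over a knowledge subgraph. The fixed decay weight and the toy triples are hypothetical stand-ins for the relevance scores the real model computes from the decoder's hidden state at each step.

```python
from collections import defaultdict

def multi_hop_scores(triples, source_concepts, num_hops=2, decay=0.5):
    """Propagate relevance scores outward over (head, relation, tail) triples."""
    scores = defaultdict(float)
    for concept in source_concepts:
        scores[concept] = 1.0                 # source concepts start fully relevant
    for _ in range(num_hops):
        updates = defaultdict(float)
        for head, rel, tail in triples:
            # Each triple passes a decayed share of its head's score to its tail.
            updates[tail] += decay * scores[head]
        for node, delta in updates.items():
            scores[node] += delta
    return dict(scores)

# Toy ConceptNet-style subgraph around the source concept "accident".
triples = [
    ("accident", "Causes", "injury"),
    ("injury", "RelatedTo", "hospital"),
    ("accident", "HasSubevent", "crash"),
]
print(multi_hop_scores(triples, ["accident"]))
# High-scoring concepts become copy candidates that are mixed with the
# ordinary vocabulary distribution at each decoding step.
```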
Related papers
- Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
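As a concrete illustration of the first objective, here is a minimal sketch of how training examples for commonsense mask infilling could be constructed; the helper and the example sentence are hypothetical, not the authors' pipeline.

```python
def make_infilling_example(sentence, concept_spans, mask_token="<mask>"):
    """Mask out commonsense concept spans; the LM is trained to restore them."""
    masked, targets = sentence, []
    for span in concept_spans:
        masked = masked.replace(span, mask_token, 1)
        targets.append(span)
    return masked, targets

masked, targets = make_infilling_example(
    "He forgot his umbrella, so he got wet in the rain.",
    ["umbrella", "rain"],
)
print(masked)   # "He forgot his <mask>, so he got wet in the <mask>."
print(targets)  # ["umbrella", "rain"] -- the infilling targets
```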
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
- Building Knowledge-Grounded Dialogue Systems with Graph-Based Semantic Modeling [43.0554223015728]
The knowledge-grounded dialogue task aims to generate responses that convey information from given knowledge documents.
We propose a novel graph structure, Grounded Graph, that models the semantic structure of both dialogue and knowledge.
We also propose a Grounded Graph Aware Transformer to enhance knowledge-grounded response generation.
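One plausible reading of "graph aware" attention is that attention between nodes is restricted to the edges of the Grounded Graph; the sketch below builds such an edge-restricted attention mask. This is an assumption for illustration, not necessarily the paper's exact mechanism.

```python
def graph_attention_mask(num_nodes, edges):
    """Boolean mask: mask[i][j] is True if node i may attend to node j."""
    mask = [[i == j for j in range(num_nodes)] for i in range(num_nodes)]
    for i, j in edges:
        mask[i][j] = mask[j][i] = True       # edges treated as symmetric
    return mask

# Nodes 0-1 are dialogue utterances; node 2 is a grounded knowledge sentence.
print(graph_attention_mask(3, [(0, 1), (1, 2)]))
```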
arXiv Detail & Related papers (2022-04-27T03:31:46Z)
- TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
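As an illustration of the selection step, here is a minimal dense-retrieval sketch; the bag-of-letters embedding is a toy stand-in for the learned encoders the model actually uses.

```python
import math

def embed(text):
    """Toy bag-of-letters embedding standing in for a learned dense encoder."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def retrieve(query, entries, top_k=1):
    """Rank knowledge entries by inner product with the query embedding."""
    q = embed(query)
    ranked = sorted(entries, key=lambda e: -sum(a * b for a, b in zip(q, embed(e))))
    return ranked[:top_k]

# Selected entries would then be injected into input encoding and output decoding.
entries = ["rain makes roads slippery", "cats enjoy sleeping in boxes"]
print(retrieve("why are wet roads dangerous", entries))
```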
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
- Generated Knowledge Prompting for Commonsense Reasoning [53.88983683513114]
We propose generating knowledge statements directly from a language model with a generic prompt format.
This approach improves the performance of both off-the-shelf and finetuned language models on four commonsense reasoning tasks.
Notably, we find that a model's predictions can improve when using its own generated knowledge.
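A minimal sketch of the prompting setup follows; the template wording and the demonstration pair are hypothetical paraphrases of the paper's generic prompt format.

```python
def knowledge_prompt(question, demonstrations):
    """Build a few-shot prompt that elicits knowledge statements from a LM."""
    lines = ["Generate some knowledge about the input."]
    for demo_q, demo_k in demonstrations:
        lines.append(f"Input: {demo_q}\nKnowledge: {demo_k}")
    lines.append(f"Input: {question}\nKnowledge:")
    return "\n\n".join(lines)

demos = [("Where do fish live?", "Fish live in water.")]
print(knowledge_prompt("Why do people wear coats in winter?", demos))
# Each knowledge statement sampled from this prompt is then prepended to the
# question before it is passed to the answering model.
```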
arXiv Detail & Related papers (2021-10-15T21:58:03Z)
- KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning [78.81080813406177]
We propose KG-BART, a novel knowledge-graph-augmented pre-trained language generation model.
KG-BART captures the complex relations among concepts via the knowledge graph and produces more logical and natural sentences as output.
arXiv Detail & Related papers (2020-09-26T19:57:49Z)
- Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering [35.40919477319811]
We propose a novel knowledge-aware approach that equips pre-trained language models with a multi-hop relational reasoning module.
It performs multi-hop, multi-relational reasoning over subgraphs extracted from external knowledge graphs.
It unifies path-based reasoning methods and graph neural networks to achieve better interpretability and scalability.
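The sketch below illustrates one hop of relational message passing over an extracted subgraph; it is a generic stand-in for the paper's module (which additionally unifies this with path-based reasoning), and the relation weights are hypothetical.

```python
from collections import defaultdict

def relational_message_pass(node_feats, triples, rel_weights, num_hops=2):
    """Each hop, a node adds relation-weighted features from its in-neighbors."""
    feats = dict(node_feats)
    for _ in range(num_hops):
        incoming = defaultdict(float)
        for head, rel, tail in triples:
            incoming[tail] += rel_weights.get(rel, 0.0) * feats[head]
        feats = {node: value + incoming[node] for node, value in feats.items()}
    return feats

feats = {"doctor": 1.0, "hospital": 0.0, "patient": 0.0}
triples = [("doctor", "AtLocation", "hospital"),
           ("hospital", "RelatedTo", "patient")]
print(relational_message_pass(feats, triples,
                              {"AtLocation": 0.8, "RelatedTo": 0.5}))
```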
arXiv Detail & Related papers (2020-05-01T23:10:26Z)
- A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [98.25464306634758]
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning that combines the generation objective with a discriminative objective to distinguish true stories from fake ones.
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
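As an illustration, the following sketches the combined objective and one common way to build fake stories; the loss weighting and the sentence-shuffling negative sampling are assumptions, not necessarily the authors' exact setup.

```python
import random

def make_fake_story(sentences, rng=random):
    """Build a negative example by shuffling a true story's sentences."""
    fake = list(sentences)
    while fake == list(sentences):
        rng.shuffle(fake)
    return fake

def multi_task_loss(lm_loss, clf_loss, alpha=0.5):
    """Weighted sum of the generation loss and the true/fake classification loss."""
    return lm_loss + alpha * clf_loss

story = ["Tom lost his keys.", "He searched his coat.", "They were in his car."]
print(make_fake_story(story))
print(multi_task_loss(lm_loss=2.3, clf_loss=0.7))
```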
arXiv Detail & Related papers (2020-01-15T05:42:27Z)