Graph-based Multi-hop Reasoning for Long Text Generation
- URL: http://arxiv.org/abs/2009.13282v1
- Date: Mon, 28 Sep 2020 12:47:59 GMT
- Title: Graph-based Multi-hop Reasoning for Long Text Generation
- Authors: Liang Zhao, Jingjing Xu, Junyang Lin, Yichang Zhang, Hongxia Yang, Xu Sun
- Abstract summary: MRG consists of two parts, a graph-based multi-hop reasoning module and a path-aware sentence realization module.
Unlike previous black-box models, MRG explicitly infers the skeleton path, which provides explanatory views to understand how the proposed model works.
- Score: 66.64743847850666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long text generation is an important but challenging task. The main
problem lies in learning sentence-level semantic dependencies, which traditional
generative models often suffer from. To address this problem, we propose a
Multi-hop Reasoning Generation (MRG) approach that incorporates multi-hop
reasoning over a knowledge graph to learn semantic dependencies among
sentences. MRG consists of two parts, a graph-based multi-hop reasoning module
and a path-aware sentence realization module. The reasoning module is
responsible for searching skeleton paths from a knowledge graph to imitate the
imagination process in human writing for semantic transfer. Based on the
inferred paths, the sentence realization module then generates a complete
sentence. Unlike previous black-box models, MRG explicitly infers the skeleton
path, which provides explanatory views to understand how the proposed model
works. We conduct experiments on three representative tasks, including story
generation, review generation, and product description generation. Automatic
and manual evaluation show that our proposed method can generate more
informative and coherent long text than strong baselines, such as pre-trained
models (e.g. GPT-2) and knowledge-enhanced models.
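The abstract's two-stage pipeline (search a skeleton path over a knowledge graph, then realize it as text) can be sketched as follows. The toy graph, relations, and template-based realization below are illustrative assumptions, not MRG's actual data or learned modules:

```python
from collections import deque

# Toy knowledge graph: concept -> list of (relation, neighbor) edges.
# The concepts and relations here are invented for illustration only.
GRAPH = {
    "rain": [("causes", "wet road"), ("related_to", "umbrella")],
    "wet road": [("causes", "accident")],
    "umbrella": [("used_for", "staying dry")],
    "accident": [("leads_to", "traffic jam")],
}

def search_skeleton_path(start, max_hops=3):
    """Breadth-first multi-hop search: return the longest chain of
    (head, relation, tail) triples reachable from `start` within max_hops."""
    best = []
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if len(path) > len(best):
            best = path
        if len(path) < max_hops:
            for rel, nxt in GRAPH.get(node, []):
                queue.append((nxt, path + [(node, rel, nxt)]))
    return best

def realize(path):
    """Template stand-in for the path-aware sentence realization module,
    which in MRG is a learned generator conditioned on the skeleton path."""
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in path)

skeleton = search_skeleton_path("rain")
print(skeleton)
# [('rain', 'causes', 'wet road'), ('wet road', 'causes', 'accident'), ('accident', 'leads_to', 'traffic jam')]
print(realize(skeleton))
# rain causes wet road. wet road causes accident. accident leads to traffic jam.
```

Because the skeleton path is an explicit intermediate object rather than a hidden state, it can be inspected directly, which is the "explanatory view" the abstract refers to.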
Related papers
- TAGExplainer: Narrating Graph Explanations for Text-Attributed Graph Learning Models [14.367754016281934]
This paper presents TAGExplainer, the first method designed to generate natural language explanations for TAG learning.
To address the lack of annotated ground truth explanations in real-world scenarios, we propose first generating pseudo-labels that capture the model's decisions from saliency-based explanations.
The high-quality pseudo-labels are finally utilized to train an end-to-end explanation generator model.
arXiv Detail & Related papers (2024-10-20T03:55:46Z)
- Integrating Large Language Models with Graph-based Reasoning for Conversational Question Answering [58.17090503446995]
We focus on a conversational question answering task which combines the challenges of understanding questions in context and reasoning over evidence gathered from heterogeneous sources like text, knowledge graphs, tables, and infoboxes.
Our method utilizes a graph structured representation to aggregate information about a question and its context.
arXiv Detail & Related papers (2024-06-14T13:28:03Z)
- MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks like WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z)
- Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales [11.068901022944015]
Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks.
We introduce a new framework for applying chain-of-thought inspired structured rationale generation to multi-hop question generation under a very low supervision regime.
arXiv Detail & Related papers (2022-11-15T19:36:06Z)
- TopNet: Learning from Neural Topic Model to Generate Long Stories [43.5564336855688]
Long story generation (LSG) is one of the coveted goals in natural language processing.
We propose TopNet to obtain high-quality skeleton words to complement the short input.
Our proposed framework is highly effective in skeleton word selection and significantly outperforms state-of-the-art models in both automatic evaluation and human evaluation.
arXiv Detail & Related papers (2021-12-14T09:47:53Z)
- Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph [124.45799297285083]
We argue that exploiting both the structural and semantic information of the knowledge graph facilitates commonsense-aware text generation.
We propose Generation with Multi-Hop Reasoning Flow (GRF) that enables pre-trained models with dynamic multi-hop reasoning on multi-relational paths extracted from the external commonsense knowledge graph.
arXiv Detail & Related papers (2020-09-24T13:55:32Z)
- Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models [61.480085460269514]
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
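The ModularQA summary above describes routing sub-questions to a factoid QA model and a symbolic calculator. A minimal sketch of that control flow, with a lookup table and a hard-coded decomposition standing in for the learned neural components, might look like:

```python
# Hypothetical sketch of ModularQA-style decomposition: a multi-hop question
# is split into sub-questions for a factoid QA model plus a symbolic
# calculator. The lookup table, the question, and the hard-coded decomposition
# are illustrative assumptions, not the paper's actual components.
FACTOID_QA = {
    "When was the telephone patented?": "1876",
    "When was the transistor invented?": "1947",
}

def calculator(expr: str) -> int:
    """Symbolic module: evaluate a simple binary arithmetic expression."""
    a, op, b = expr.split()
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y}
    return ops[op](int(a), int(b))

def answer_multihop(question: str) -> int:
    # A real system learns the decomposition; it is hard-coded here for one
    # hypothetical question to make the control flow concrete.
    if question == "How many years after the telephone was the transistor invented?":
        telephone = FACTOID_QA["When was the telephone patented?"]
        transistor = FACTOID_QA["When was the transistor invented?"]
        return calculator(f"{transistor} - {telephone}")
    raise ValueError("no decomposition available for this question")

print(answer_multihop("How many years after the telephone was the transistor invented?"))  # 71
```

The interpretability claim follows from the same pattern as MRG: the intermediate sub-questions and the calculator expression are explicit, inspectable artifacts.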
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
- Learning to Discretely Compose Reasoning Module Networks for Video Captioning [81.81394228898591]
We propose a novel visual reasoning approach for video captioning, named Reasoning Module Networks (RMN).
RMN employs 1) three sophisticated spatio-temporal reasoning modules, and 2) a dynamic and discrete module selector trained by a linguistic loss with a Gumbel approximation.
arXiv Detail & Related papers (2020-07-17T15:27:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.