Knowledge-based Review Generation by Coherence Enhanced Text Planning
- URL: http://arxiv.org/abs/2105.03815v1
- Date: Sun, 9 May 2021 02:12:05 GMT
- Title: Knowledge-based Review Generation by Coherence Enhanced Text Planning
- Authors: Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan and Ji-Rong Wen
- Abstract summary: We propose a novel Coherence Enhanced Text Planning model (CETP) based on knowledge graphs (KGs) to improve both global and local coherence for review generation.
For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.
Experiments on three datasets confirm the effectiveness of our model in improving the content coherence of generated texts.
- Score: 45.473253542837995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a natural language generation task, it is challenging to generate
informative and coherent review text. In order to enhance the informativeness
of the generated text, existing solutions typically learn to copy entities or
triples from knowledge graphs (KGs). However, they lack an overall strategy
for selecting and arranging the incorporated knowledge, which tends to cause
text incoherence.
To address the above issue, we focus on improving entity-centric coherence of
the generated reviews by leveraging the semantic structure of KGs. In this
paper, we propose a novel Coherence Enhanced Text Planning model (CETP) based
on knowledge graphs (KGs) to improve both global and local coherence for review
generation. The proposed model learns a two-level text plan for generating a
document: (1) the document plan is modeled as a sequence of sentence plans in
order, and (2) the sentence plan is modeled as an entity-based subgraph from
the KG. Local coherence can be naturally enforced by KG subgraphs through
intra-sentence correlations between entities. For global coherence, we design a
hierarchical self-attentive architecture with both subgraph- and node-level
attention to enhance the correlations between subgraphs. To our knowledge, we
are the first to utilize a KG-based text planning model to enhance text
coherence for review generation. Extensive experiments on three datasets
confirm the effectiveness of our model in improving the content coherence of
generated texts.
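The two-level plan described above can be made concrete with a small sketch. The following PyTorch code is a minimal illustration of hierarchical self-attention with node-level attention inside each sentence-plan subgraph and subgraph-level attention across the document plan; the module names, shapes, pooling choice, and hyperparameters are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of CETP-style hierarchical self-attention (all names and
# hyperparameters are hypothetical). Node-level attention runs inside each
# sentence-plan subgraph; pooled subgraph vectors then attend to each other
# so correlations between subgraphs (global coherence) are modeled.
import torch
import torch.nn as nn

class HierarchicalPlanEncoder(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Intra-subgraph (node-level) attention over the entities of one sentence plan.
        self.node_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Inter-subgraph (subgraph-level) attention over the sequence of sentence plans.
        self.subgraph_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, node_embs: torch.Tensor) -> torch.Tensor:
        # node_embs: (n_subgraphs, n_nodes, d_model) entity embeddings,
        # one subgraph per planned sentence.
        nodes, _ = self.node_attn(node_embs, node_embs, node_embs)
        # Pool each subgraph into one sentence-plan vector (mean pooling is assumed).
        subgraphs = nodes.mean(dim=1).unsqueeze(0)   # (1, n_subgraphs, d_model)
        plans, _ = self.subgraph_attn(subgraphs, subgraphs, subgraphs)
        return plans.squeeze(0)                      # (n_subgraphs, d_model)

# Usage: a document plan of 5 sentence plans, each a subgraph of 8 entities.
print(HierarchicalPlanEncoder()(torch.randn(5, 8, 256)).shape)  # torch.Size([5, 256])
```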
Related papers
- Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and addresses scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z)
- Modeling Unified Semantic Discourse Structure for High-quality Headline Generation [45.23071138765902]
We propose using a unified semantic discourse structure (S3) to represent document semantics.
The hierarchical composition of sentence, clause, and word intrinsically characterizes the semantic meaning of the overall document.
Our work can be instructive for a broad range of document modeling tasks, more than headline or summarization generation.
arXiv Detail & Related papers (2024-03-23T09:18:53Z)
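Reading this entry's abstract, the S3 hierarchy is essentially a document tree. A minimal sketch of such a structure follows; the class and field names are assumptions inferred from the abstract, not the paper's definition.

```python
# Hypothetical sketch of an S3-style document tree: sentences contain clauses,
# clauses contain words (field names assumed from the abstract).
from dataclasses import dataclass, field

@dataclass
class Clause:
    words: list[str]

@dataclass
class Sentence:
    clauses: list[Clause] = field(default_factory=list)

@dataclass
class Document:
    sentences: list[Sentence] = field(default_factory=list)

doc = Document([Sentence([Clause(["Headlines", "compress", "documents"])])])
```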
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
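The idea of turning a compact triple into a context-rich segment can be illustrated with a simple prompt; the prompt wording and the `query_llm` helper below are hypothetical stand-ins, not the paper's actual pipeline.

```python
# Illustrative sketch of triple contextualization (prompt wording assumed;
# query_llm stands in for any LLM completion client).
def contextualize(triple: tuple[str, str, str], query_llm) -> str:
    head, relation, tail = triple
    prompt = (f"Write one informative paragraph describing the fact "
              f"({head}, {relation}, {tail}), mentioning relevant context.")
    return query_llm(prompt)  # context-rich segment used to train the KGC model

# e.g. contextualize(("Marie Curie", "award_received", "Nobel Prize in Physics"), llm)
```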
- Generating Faithful Text From a Knowledge Graph with Noisy Reference Text [26.6775578332187]
We develop a KG-to-text generation model that can generate faithful natural-language text from a given graph.
Our framework incorporates two core ideas: Firstly, we utilize contrastive learning to enhance the model's ability to differentiate between faithful and hallucinated information in the text.
Secondly, we empower the decoder to control the level of hallucination in the generated text by employing a controllable text generation technique.
arXiv Detail & Related papers (2023-08-12T07:12:45Z)
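The contrastive idea in this entry can be sketched as an InfoNCE-style loss that pulls the graph embedding toward the faithful text and away from hallucinated negatives; the embeddings, temperature, and loss form here are placeholders, not the paper's exact formulation.

```python
# Sketch of a contrastive objective for faithfulness: the faithful text is the
# positive, hallucinated variants are negatives (tau value is assumed).
import torch
import torch.nn.functional as F

def contrastive_loss(graph_emb, faithful_emb, hallucinated_embs, tau: float = 0.1):
    pos = F.cosine_similarity(graph_emb, faithful_emb, dim=-1) / tau           # scalar
    negs = F.cosine_similarity(graph_emb.unsqueeze(0), hallucinated_embs, dim=-1) / tau
    logits = torch.cat([pos.unsqueeze(0), negs])                               # (1 + k,)
    # Cross-entropy with label 0: the positive must outscore all negatives.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# e.g. contrastive_loss(torch.randn(64), torch.randn(64), torch.randn(5, 64))
```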
- Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots [58.404516361586325]
Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data.
This paper proposes a novel approach, Memorize and Generate (called AMG), inspired by the text generation process of humans.
arXiv Detail & Related papers (2022-03-01T20:37:20Z)
- JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs [44.06715423776722]
We propose a graph-text joint representation learning model called JointGT.
During encoding, we devise a structure-aware semantic aggregation module which is plugged into each Transformer layer.
We show that JointGT obtains new state-of-the-art performance on various KG-to-text datasets.
arXiv Detail & Related papers (2021-06-19T14:10:10Z)
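One plausible reading of a "structure-aware semantic aggregation module" is attention masked by the KG adjacency, so each entity aggregates only from its graph neighbours; the sketch below follows that assumption and is not JointGT's actual module.

```python
# Hypothetical structure-aware aggregation step (one per Transformer layer):
# scaled dot-product attention restricted to KG neighbours via an adjacency mask.
import torch

def structure_aware_aggregate(h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """h: (n, d) node states; adj: (n, n) 0/1 adjacency with self-loops."""
    scores = h @ h.T / h.shape[-1] ** 0.5                 # (n, n) attention scores
    scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only graph edges
    return torch.softmax(scores, dim=-1) @ h              # neighbour-weighted update
```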
- Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models [42.38563175680914]
This paper studies how to automatically generate natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation.
arXiv Detail & Related papers (2021-06-03T06:48:00Z)
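A common few-shot recipe for this setting, though not necessarily this paper's exact one, is to linearize the KG triples into a textual prompt and let a pretrained LM verbalize them:

```python
# Sketch of KG linearization for a pretrained LM (delimiter format assumed).
def linearize(triples: list[tuple[str, str, str]]) -> str:
    facts = " ; ".join(f"{h} | {r} | {t}" for h, r, t in triples)
    return f"Describe the following facts in fluent English: {facts}"

# linearize([("CETP", "proposed_by", "Li et al."), ("CETP", "improves", "coherence")])
```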
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, instead resort to the text of graph triples and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
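As a quick reminder of the embedding side of the contrast drawn in this entry: TransE scores a triple (h, r, t) by how well h + r approximates t. The one-liner below is the standard formulation, not code from the paper.

```python
# TransE in one line: a triple (h, r, t) is plausible when h + r is close to t,
# so the score is the negative translation distance.
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    return -float(np.linalg.norm(h + r - t))  # higher = more plausible
```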
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
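The core of an entity masking scheme, as opposed to random subword masking, is to mask whole entity mentions. A minimal sketch under the assumption that KG-linked entity spans are already available (the span format and helper are hypothetical):

```python
# Sketch of entity masking: mask whole entity mentions rather than random
# subwords; entity_spans are assumed to come from KG entity linking.
def mask_entities(tokens: list[str], entity_spans: list[tuple[int, int]],
                  mask_token: str = "[MASK]") -> list[str]:
    masked = list(tokens)
    for start, end in entity_spans:          # spans are [start, end) token indices
        for i in range(start, end):
            masked[i] = mask_token
    return masked

# mask_entities(["Marie", "Curie", "won", "the", "Nobel", "Prize"], [(0, 2), (4, 6)])
# -> ['[MASK]', '[MASK]', 'won', 'the', '[MASK]', '[MASK]']
```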