Informative Text Generation from Knowledge Triples
- URL: http://arxiv.org/abs/2209.12733v1
- Date: Mon, 26 Sep 2022 14:35:57 GMT
- Title: Informative Text Generation from Knowledge Triples
- Authors: Zihao Fu, Yijiang River Dong, Lidong Bing, Wai Lam
- Abstract summary: We propose a novel memory-augmented generator that employs a memory network to memorize useful knowledge learned during training.
We derive a dataset from WebNLG for our new setting and conduct extensive experiments to investigate the effectiveness of our model.
- Score: 56.939571343797304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of the encoder-decoder architecture,
researchers are able to study text generation tasks with broader types of
data. Among them, KB-to-text aims at converting a set of knowledge triples
into human-readable sentences. In the original setting, the task assumes that
the input triples and the text are exactly aligned in terms of the
knowledge/information they convey. In this paper, we extend this setting and
explore how to enable the trained model to generate more informative text,
namely, text containing more information about the triple entities than is
conveyed by the input triples. To solve this problem, we propose a novel
memory-augmented generator that employs a memory network to memorize useful
knowledge learned during training and utilizes such information, together
with the input triples, to generate text in the operational or testing phase.
We derive a dataset from WebNLG for our new setting and conduct extensive
experiments to investigate the effectiveness of our model as well as to
uncover the intrinsic characteristics of the setting.
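The abstract describes the architecture only at a high level. A minimal sketch of the core idea, learnable memory slots read by attention over the encoded input triples, with the read vector fed to the decoder, might look like the following; the module choices, dimensions, and GRU encoder/decoder are illustrative assumptions, not the authors' implementation.

```python
# A minimal, illustrative sketch of a memory-augmented generator:
# learnable memory slots are read via attention using the encoded
# input triples as the query, and the read vector augments the
# decoder input. All sizes and module choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedGenerator(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_slots=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        # Persistent memory: keys select slots, values carry knowledge
        # distilled from training; both are learned end to end.
        self.mem_keys = nn.Parameter(torch.randn(n_slots, d_model))
        self.mem_vals = nn.Parameter(torch.randn(n_slots, d_model))
        self.decoder = nn.GRU(d_model * 2, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def read_memory(self, query):
        # Soft attention over memory slots (query: [batch, d_model]).
        scores = query @ self.mem_keys.t() / self.mem_keys.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)          # [batch, n_slots]
        return weights @ self.mem_vals               # [batch, d_model]

    def forward(self, triple_tokens, prev_tokens):
        # triple_tokens: linearized input triples, [batch, src_len]
        enc, _ = self.encoder(self.embed(triple_tokens))
        query = enc.mean(dim=1)                      # pooled triple encoding
        mem = self.read_memory(query)                # knowledge read-out
        tgt = self.embed(prev_tokens)                # [batch, tgt_len, d]
        mem_tiled = mem.unsqueeze(1).expand(-1, tgt.size(1), -1)
        dec, _ = self.decoder(torch.cat([tgt, mem_tiled], dim=-1))
        return self.out(dec)                         # next-token logits

model = MemoryAugmentedGenerator()
logits = model(torch.randint(0, 10000, (2, 12)), torch.randint(0, 10000, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 10000])
```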
Related papers
- Stylized Data-to-Text Generation: A Case Study in the E-Commerce Domain [53.22419717434372]
We propose a new task, namely stylized data-to-text generation, whose aim is to generate coherent text according to a specific style.
This task is non-trivial due to three challenges: planning the logic of the generated text, handling unstructured style references, and coping with biased training samples.
We propose a novel stylized data-to-text generation model, named StyleD2T, comprising three components: logic planning-enhanced data embedding, mask-based style embedding, and unbiased stylized text generation.
arXiv Detail & Related papers (2023-05-05T03:02:41Z)
- Few-Shot Table-to-Text Generation with Prompt Planning and Knowledge Memorization [41.20314472839442]
We propose a new framework, PromptMize, which targets table-to-text generation under few-shot settings.
The design of our framework consists of two aspects: a prompt planner and a knowledge adapter.
Our model achieves remarkable generation quality as judged by both human and automatic evaluations.
arXiv Detail & Related papers (2023-02-09T03:04:11Z)
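The summary above names a prompt planner and a knowledge adapter without further detail. As a loose illustration only, a hand-written planner that linearizes a table and prepends few-shot demonstrations might look like this; PromptMize's planner is learned, and the template and function names here are assumptions.

```python
# Generic sketch of what a "prompt planner" for few-shot table-to-text
# could produce: a linearized table preceded by a few demonstrations.
# The template is an assumption, not PromptMize's actual format.
def linearize(table):
    return " ; ".join(f"{k} : {v}" for k, v in table.items())

def build_prompt(demos, table):
    lines = []
    for demo_table, demo_text in demos:  # few-shot demonstrations
        lines.append(f"Table: {linearize(demo_table)}\nText: {demo_text}")
    lines.append(f"Table: {linearize(table)}\nText:")
    return "\n\n".join(lines)

demos = [({"name": "Alice", "role": "engineer"}, "Alice works as an engineer.")]
print(build_prompt(demos, {"name": "Bob", "role": "chef"}))
```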
- TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
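The retrieval-then-injection pipeline in the TegTok summary can be sketched generically: embed the input and candidate knowledge entries, score by inner product, and keep the top-k. The toy hashing encoder below stands in for a trained dual encoder and does not reflect TegTok's actual models.

```python
# Rough sketch of dense retrieval as described in the TegTok summary:
# embed the input and all knowledge entries, score by inner product,
# and keep the top-k entries to inject into the generator.
import torch

def embed(texts, dim=64):
    # Toy deterministic bag-of-hashed-words embedding; a real system
    # would use a trained dense encoder here (an assumption).
    vecs = torch.zeros(len(texts), dim)
    for i, t in enumerate(texts):
        for w in t.lower().split():
            vecs[i, hash(w) % dim] += 1.0
    return torch.nn.functional.normalize(vecs, dim=-1)

def retrieve(query, knowledge, k=2):
    scores = embed([query]) @ embed(knowledge).t()   # [1, n_entries]
    top = scores.topk(k, dim=-1).indices[0]
    return [knowledge[i] for i in top.tolist()]

knowledge = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Mount Fuji is in Japan.",
]
# The selected entries would be concatenated to the encoder input and
# also made available to the decoder, per the summary above.
print(retrieve("Eiffel Tower Paris", knowledge))
```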
- Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG [0.913755431537592]
We propose a novel decoding method that extracts interpretable information from encoder-decoder models' cross-attention.
We show on three datasets that it dramatically reduces semantic errors in the generated outputs.
arXiv Detail & Related papers (2021-09-15T01:42:51Z)
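The decoding method above reads interpretable signals out of cross-attention. A minimal sketch of extracting such signals with a generic attention layer follows; the coverage heuristic is our simplification, not the paper's actual criterion.

```python
# Illustrative sketch of reading cross-attention during decoding to
# track which input slots a generated token attends to.
import torch
import torch.nn as nn

d_model, n_heads = 32, 4
cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

src = torch.randn(1, 5, d_model)   # encoded input slots (e.g., triples)
tgt = torch.randn(1, 7, d_model)   # decoder states for generated tokens

# need_weights=True returns attention averaged over heads:
# weights has shape [batch, tgt_len, src_len].
_, weights = cross_attn(tgt, src, src, need_weights=True)

# A simple interpretability signal: which input slot does each output
# token attend to most, and is any slot never strongly attended to?
argmax_slot = weights.argmax(dim=-1)      # [1, tgt_len]
coverage = weights.max(dim=1).values      # best attention per source slot
uncovered = (coverage < 0.2).nonzero()    # slots possibly omitted in output
print(argmax_slot, uncovered)
```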
- Constructing Flow Graphs from Procedural Cybersecurity Texts [16.09313316086535]
We build a large annotated procedural text dataset (CTFW) in the cybersecurity domain (3,154 documents).
We propose to identify relevant information from such texts and generate information flows between sentences.
Our experiments show that Graph Convolution Network with BERT sentence embeddings outperforms BERT in all three domains.
arXiv Detail & Related papers (2021-05-29T19:06:35Z)
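The CTFW result pairs BERT sentence embeddings with a Graph Convolutional Network. A bare-bones sketch of that combination follows, with random vectors standing in for BERT features and a random adjacency for the candidate sentence graph; the layer is a standard GCN, not the paper's exact model.

```python
# Sketch of the classifier family described above: sentence embeddings
# (BERT in the paper; random vectors here as a stand-in) as node
# features for a graph convolution over a sentence-adjacency graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops:
        # H' = D^{-1/2} (A + I) D^{-1/2} H W
        a = adj + torch.eye(adj.size(0))
        d = a.sum(-1).rsqrt()
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        return F.relu(self.lin(a_norm @ x))

n_sents, bert_dim = 6, 768
node_feats = torch.randn(n_sents, bert_dim)         # stand-in for BERT
adj = (torch.rand(n_sents, n_sents) > 0.5).float()  # candidate sentence pairs
adj = ((adj + adj.t()) > 0).float()                 # make it symmetric

gcn = GCNLayer(bert_dim, 128)
h = gcn(node_feats, adj)
# Pairwise scores on GCN outputs would then decide whether an
# information-flow edge links two sentences.
print(h.shape)  # torch.Size([6, 128])
```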
- A Survey of Knowledge-Enhanced Text Generation [81.24633231919137]
The goal of text generation is to make machines express themselves in human language.
Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text.
Since the input text alone often provides limited knowledge, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
arXiv Detail & Related papers (2020-10-09T06:46:46Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge could be more than enough, since the output description may only cover the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
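The entity masking scheme in the last entry is described in a single sentence; a toy illustration of the general idea, masking whole entity spans rather than random tokens, could look like the following. The span-matching helper and the provided entity list are hypothetical stand-ins for knowledge-graph-driven entity linking.

```python
# Toy sketch of an entity masking scheme: whole entity spans are
# replaced by [MASK] so the model must recover entities from context.
# This illustrates the general idea, not the paper's exact scheme.
def mask_entities(tokens, entities, mask_token="[MASK]"):
    masked = list(tokens)
    for ent in entities:                     # each entity is a token span
        span = ent.split()
        for start in range(len(masked) - len(span) + 1):
            if masked[start:start + len(span)] == span:
                masked[start:start + len(span)] = [mask_token] * len(span)
    return masked

tokens = "Barack Obama was born in Honolulu".split()
print(mask_entities(tokens, ["Barack Obama", "Honolulu"]))
# ['[MASK]', '[MASK]', 'was', 'born', 'in', '[MASK]']
```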
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.