A Survey of Knowledge-Enhanced Text Generation
- URL: http://arxiv.org/abs/2010.04389v4
- Date: Sat, 22 Jan 2022 05:27:38 GMT
- Title: A Survey of Knowledge-Enhanced Text Generation
- Authors: Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng
Ji, Meng Jiang
- Abstract summary: The goal of text generation is to make machines express in human language.
Various neural encoder-decoder models have been proposed to achieve this goal by learning to map input text to output text.
However, the input text alone often provides limited knowledge, so researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
- Score: 81.24633231919137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of text generation is to enable machines to express themselves
in human language. It is one of the most important yet challenging tasks in
natural language processing (NLP). Since 2014, various neural encoder-decoder
models, pioneered by Seq2Seq, have been proposed to achieve this goal by
learning to map input text to output text. However, the input text alone often
provides limited knowledge for generating the desired output, so the performance
of text generation is still far from satisfactory in many real-world scenarios.
To address this issue, researchers have considered incorporating various forms
of knowledge beyond the input text into the generation models. This research
direction is known as knowledge-enhanced text generation. In this survey, we
present a comprehensive review of the research on knowledge-enhanced text
generation over the past five years. The main content includes two parts: (i)
general methods and architectures for integrating knowledge into text
generation; (ii) specific techniques and applications according to different
forms of knowledge data. This survey is intended for a broad audience of
researchers and practitioners in academia and industry.
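The core pattern the abstract describes, augmenting a model's input with externally retrieved knowledge before generation, can be illustrated with a minimal sketch. The checkpoint name, the toy knowledge list, and the overlap-based retrieve() helper below are illustrative assumptions, not anything prescribed by the survey.

```python
# Minimal sketch of knowledge-enhanced text generation: retrieve an external
# knowledge entry for the input, prepend it to the source text, and let a
# standard encoder-decoder model generate. The knowledge base, the retriever,
# and the model choice are illustrative assumptions, not the survey's method.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Toy knowledge source (hypothetical); real systems draw on knowledge graphs,
# retrieved passages, keywords, or other task-specific resources.
KNOWLEDGE = [
    "Marie Curie won Nobel Prizes in both physics and chemistry.",
    "The Eiffel Tower is located in Paris, France.",
]

def retrieve(query: str, entries: list[str]) -> str:
    """Pick the entry with the largest word overlap with the query."""
    q = set(query.lower().split())
    return max(entries, key=lambda e: len(q & set(e.lower().split())))

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

question = "Which prizes did Marie Curie win?"
knowledge = retrieve(question, KNOWLEDGE)

# Knowledge-enhanced input: the retrieved entry is simply prepended, so the
# decoder can condition on information the question alone does not contain.
prompt = f"Answer using the context. Context: {knowledge} Question: {question}"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Prepending retrieved text to the encoder input is only the simplest integration point; the survey reviews a range of architectures for fusing knowledge into the encoding and decoding stages.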
Related papers
- Automatic and Human-AI Interactive Text Generation [27.05024520190722]
This tutorial aims to provide an overview of state-of-the-art natural language generation research.
Text-to-text generation tasks are more constrained in terms of semantic consistency and targeted language styles.
arXiv Detail & Related papers (2023-10-05T20:26:15Z)
- TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages, respectively (a toy sketch of the dense-retrieval step appears after this list).
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
- A Survey on Retrieval-Augmented Text Generation [53.04991859796971]
Retrieval-augmented text generation has remarkable advantages and has achieved state-of-the-art performance on many NLP tasks.
The survey first highlights the generic paradigm of retrieval-augmented generation and then reviews notable approaches for different tasks.
arXiv Detail & Related papers (2022-02-02T16:18:41Z)
- A Survey of Pretrained Language Models Based Text Generation [97.64625999380425]
Text generation aims to produce plausible and readable text in human language from input data.
Deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs).
Grounding text generation in PLMs is seen as a promising direction in both academia and industry.
arXiv Detail & Related papers (2022-01-14T01:44:58Z)
- Pretrained Language Models for Text Generation: A Survey [46.03096493973206]
We present an overview of the major advances achieved in pretrained language models (PLMs) for text generation.
We discuss how to adapt existing PLMs to model different input data and satisfy special properties in the generated text.
arXiv Detail & Related papers (2021-05-21T12:27:44Z)
- Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the essential techniques with respect to learning paradigms, pretraining, modeling approaches, and decoding, together with the key challenges outstanding in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
- Text Recognition in the Wild: A Survey [33.22076515689926]
This literature review attempts to present the entire picture of the field of scene text recognition.
It provides a comprehensive reference for people entering this field and could help inspire future research.
arXiv Detail & Related papers (2020-05-07T13:57:04Z)
- Towards information-rich, logical text generation with knowledge-enhanced neural models [15.931791215286879]
Text generation systems have made promising progress driven by deep learning techniques and have been widely applied in everyday life.
However, existing end-to-end neural models tend to generate uninformative and generic text because they cannot ground the input context in background knowledge.
This survey gives a comprehensive review of knowledge-enhanced text generation systems, summarizes research progress toward solving these challenges, and proposes some open issues and research directions.
arXiv Detail & Related papers (2020-03-02T12:41:02Z)
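As a companion to the TegTok entry above, the following sketch shows the kind of dense-retrieval step it describes: embed the query and the candidate knowledge entries in a shared vector space, then keep the top-k entries by inner product. The encoder checkpoint and the toy entries are illustrative assumptions, not the TegTok implementation.

```python
# Sketch of dense retrieval for knowledge selection (illustrative, not the
# TegTok implementation): encode the query and the knowledge entries into a
# shared vector space and keep the top-k entries by inner-product score.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical choice

knowledge_entries = [
    "Task-specific knowledge: facts mined from the task's training data.",
    "Open-world knowledge: encyclopedia passages such as Wikipedia.",
    "An unrelated entry that should receive a low score.",
]

def dense_retrieve(query: str, entries: list[str], k: int = 2) -> list[str]:
    """Return the k entries whose embeddings score highest against the query."""
    q_vec = encoder.encode([query])[0]   # shape (d,)
    e_vecs = encoder.encode(entries)     # shape (n, d)
    scores = e_vecs @ q_vec              # shape (n,)
    top = np.argsort(-scores)[:k]
    return [entries[i] for i in top]

selected = dense_retrieve("Where does open-world knowledge come from?",
                          knowledge_entries)
# The selected entries would then be injected into the input encoding and
# output decoding stages of the generator, as the TegTok summary describes.
print(selected)
```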