A Survey of Natural Language Generation
- URL: http://arxiv.org/abs/2112.11739v1
- Date: Wed, 22 Dec 2021 09:08:00 GMT
- Title: A Survey of Natural Language Generation
- Authors: Chenhe Dong, Yinghui Li, Haifan Gong, Miaoxin Chen, Junxin Li, Ying Shen, Min Yang
- Abstract summary: This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades.
It focuses on data-to-text generation and text-to-text generation deep learning methods, as well as new applications of NLG technology.
- Score: 30.134226859027642
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper offers a comprehensive review of the research on Natural Language
Generation (NLG) over the past two decades, especially in relation to
data-to-text generation and text-to-text generation deep learning methods, as
well as new applications of NLG technology. This survey aims to (a) give the
latest synthesis of deep learning research on the NLG core tasks, as well as
the architectures adopted in the field; (b) detail meticulously and
comprehensively various NLG tasks and datasets, and draw attention to the
challenges in NLG evaluation, focusing on different evaluation methods and
their relationships; (c) highlight some future emphasis and relatively recent
research issues that arise due to the increasing synergy between NLG and other
artificial intelligence areas, such as computer vision, text and computational
creativity.
Related papers
- Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions [7.064953237013352]
We focus on research works that address text generation for visualizations.
To characterize the NLG problem and the design space of proposed solutions, we pose five Wh-questions.
We categorize the solutions used in the surveyed papers based on these five Wh-questions.
arXiv Detail & Related papers (2024-09-29T15:53:18Z)
- Leveraging Large Language Models for NLG Evaluation: Advances and Challenges [57.88520765782177]
Large Language Models (LLMs) have opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance.
We propose a coherent taxonomy for organizing existing LLM-based evaluation metrics, offering a structured framework to understand and compare these methods.
By discussing unresolved challenges, including bias, robustness, domain-specificity, and unified evaluation, this paper seeks to offer insights to researchers and advocate for fairer and more advanced NLG evaluation techniques.
arXiv Detail & Related papers (2024-01-13T15:59:09Z)
- A Comprehensive Survey of Natural Language Generation Advances from the Perspective of Digital Deception [1.557442325082254]
We provide an overview of the field of natural language generation (NLG).
We outline a proposed high-level taxonomy of the central concepts that constitute NLG.
We discuss the broader challenges of NLG, including the risks of bias that are often exhibited by existing text generation systems.
arXiv Detail & Related papers (2022-08-11T11:27:38Z)
- Innovations in Neural Data-to-text Generation: A Survey [10.225452376884233]
This survey offers a consolidated view into the neural DTG paradigm with a structured examination of the approaches, benchmark datasets, and evaluation protocols.
We highlight promising avenues for DTG research that not only focus on the design of linguistically capable systems but also systems that exhibit fairness and accountability.
arXiv Detail & Related papers (2022-07-25T23:21:48Z) - Faithfulness in Natural Language Generation: A Systematic Survey of
Analysis, Evaluation and Optimization Methods [48.47413103662829]
Natural Language Generation (NLG) has made great progress in recent years due to the development of deep learning techniques such as pre-trained language models.
However, the faithfulness problem, namely that generated text often contains unfaithful or non-factual information, has become the biggest challenge.
arXiv Detail & Related papers (2022-03-10T08:28:32Z) - Recent Advances in Neural Text Generation: A Task-Agnostic Survey [20.932460734129585]
This paper offers a comprehensive and task-agnostic survey of the recent advancements in neural text generation.
We categorize these advancements into four key areas: data construction, neural frameworks, training and inference strategies, and evaluation metrics.
We explore the future directions for the advancement of neural text generation, which encompass the utilization of neural pipelines and the incorporation of background knowledge.
arXiv Detail & Related papers (2022-03-06T20:47:49Z) - A Survey on Retrieval-Augmented Text Generation [53.04991859796971]
Retrieval-augmented text generation has remarkable advantages and has achieved state-of-the-art performance in many NLP tasks.
It first presents the generic paradigm of retrieval-augmented generation, and then reviews notable approaches according to different tasks.
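The generic retrieve-then-generate paradigm mentioned above can be sketched minimally as follows. This is an illustrative toy, not the method of any surveyed paper: the corpus, the word-overlap scoring, and the prompt format are all assumptions standing in for real components such as a BM25 or dense retriever and a neural generator.

```python
# Toy sketch of retrieval-augmented generation:
# 1) retrieve evidence relevant to the query, 2) condition generation on it.

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query; a stand-in for a
    real retriever (e.g. BM25 or a dense encoder)."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, evidence):
    """Stand-in for the generation step: a real system would decode an
    answer from a language model conditioned on this prompt."""
    return "\n".join(evidence) + "\nQ: " + query + "\nA:"

corpus = [
    "Retrieval-augmented generation conditions output on retrieved text.",
    "Neural machine translation maps source sentences to target sentences.",
    "Dialogue systems generate responses conditioned on conversation history.",
]
query = "what is retrieval-augmented generation"
evidence = retrieve(query, corpus)
prompt = build_prompt(query, evidence)
```

The key design point the survey highlights is that the generator is conditioned on retrieved evidence rather than on the input alone, which is what the concatenated prompt illustrates here.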
arXiv Detail & Related papers (2022-02-02T16:18:41Z)
- Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches, relating task impacts across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the prevalent techniques with respect to learning paradigms, pretraining, modeling approaches, and decoding, together with the key outstanding challenges in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
- A Survey of Knowledge-Enhanced Text Generation [81.24633231919137]
The goal of text generation is to make machines express in human language.
Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text.
To address the limited knowledge available in the input text alone, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
arXiv Detail & Related papers (2020-10-09T06:46:46Z)
- Evaluation of Text Generation: A Survey [107.62760642328455]
The paper surveys evaluation methods of natural language generation systems that have been developed in the last few years.
We group NLG evaluation methods into three categories: (1) human-centric evaluation metrics, (2) automatic metrics that require no training, and (3) machine-learned metrics.
arXiv Detail & Related papers (2020-06-26T04:52:48Z)
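To make the three-way grouping above concrete, category (2), automatic metrics that require no training, can be illustrated with clipped unigram precision, the simplest ingredient of n-gram overlap metrics such as BLEU. This is a toy sketch for illustration only, not the definition used in any surveyed paper (in particular, it omits BLEU's higher-order n-grams and brevity penalty).

```python
# Clipped unigram precision: the fraction of candidate tokens that also
# appear in the reference, with each token's count clipped by its count
# in the reference so repeated words cannot inflate the score.
from collections import Counter

def unigram_precision(candidate, reference):
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    matched = sum(min(count, ref_counts[word])
                  for word, count in Counter(cand).items())
    return matched / len(cand)

score = unigram_precision("the cat sat on the mat",
                          "the cat is on the mat")
# 5 of the 6 candidate tokens match the reference -> 5/6
```

Human-centric metrics (category 1) would instead collect annotator judgments, and machine-learned metrics (category 3) would train a model to predict quality scores.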
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.