Recent Advances in Neural Text Generation: A Task-Agnostic Survey
- URL: http://arxiv.org/abs/2203.03047v3
- Date: Mon, 12 Jun 2023 09:48:11 GMT
- Title: Recent Advances in Neural Text Generation: A Task-Agnostic Survey
- Authors: Chen Tang, Frank Guerin and Chenghua Lin
- Abstract summary: This paper offers a comprehensive and task-agnostic survey of the recent advancements in neural text generation.
We categorize these advancements into four key areas: data construction, neural frameworks, training and inference strategies, and evaluation metrics.
We explore the future directions for the advancement of neural text generation, which encompass the utilization of neural pipelines and the incorporation of background knowledge.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, considerable research has been dedicated to the application
of neural models in the field of natural language generation (NLG). The primary
objective is to generate text that is both linguistically natural and
human-like, while also exerting control over the generation process. This paper
offers a comprehensive and task-agnostic survey of the recent advancements in
neural text generation. These advancements have been facilitated through a
multitude of developments, which we categorize into four key areas: data
construction, neural frameworks, training and inference strategies, and
evaluation metrics. By examining these different aspects, we aim to provide a
holistic overview of the progress made in the field. Furthermore, we explore
the future directions for the advancement of neural text generation, which
encompass the utilization of neural pipelines and the incorporation of
background knowledge. These avenues present promising opportunities to further
enhance the capabilities of NLG systems. Overall, this survey serves to
consolidate the current state of the art in neural text generation and
highlights potential avenues for future research and development in this
dynamic field.
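One of the survey's four key areas, training and inference strategies, centers on how a model's next-token distribution is turned into text. The following minimal sketch contrasts greedy search with temperature sampling over a toy distribution; the vocabulary and logit values are hypothetical illustrations, not taken from any surveyed system.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(logits, vocab):
    # Greedy search: always pick the single most probable token.
    return vocab[logits.index(max(logits))]

def sample_decode(logits, vocab, temperature=1.0, rng=None):
    # Temperature sampling: a higher temperature flattens the
    # distribution, trading determinism for diversity.
    rng = rng or random.Random()
    probs = softmax([x / temperature for x in logits])
    r, cum = rng.random(), 0.0
    for tok, p in zip(vocab, probs):
        cum += p
        if r < cum:
            return tok
    return vocab[-1]

# Hypothetical next-token scores for a toy vocabulary.
vocab = ["the", "a", "cat", "sat"]
logits = [2.0, 1.0, 0.5, 0.1]
print(greedy_decode(logits, vocab))            # -> the
print(sample_decode(logits, vocab, temperature=1.5))  # varies run to run
```

Greedy search is deterministic for a fixed model and prompt, while sampling-based strategies are one common way to exert the kind of control over the generation process discussed above.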
Related papers
- A Survey on Neural Question Generation: Methods, Applications, and Prospects [56.97451350691765]
The survey begins with an overview of NQG's background, encompassing the task's problem formulation.
It then methodically classifies NQG approaches into three predominant categories: structured NQG, unstructured NQG, and hybrid NQG.
The survey culminates with a forward-looking perspective on the trajectory of NQG, identifying emergent research trends and prospective developmental paths.
arXiv Detail & Related papers (2024-02-28T11:57:12Z)
- Towards Data- and Knowledge-Driven Artificial Intelligence: A Survey on Neuro-Symbolic Computing [73.0977635031713]
Neural-symbolic computing (NeSy) has been an active research area of Artificial Intelligence (AI) for many years.
NeSy promises to reconcile the reasoning and interpretability advantages of symbolic representations with the robust learning of neural networks.
arXiv Detail & Related papers (2022-10-28T04:38:10Z)
- Innovations in Neural Data-to-text Generation: A Survey [10.225452376884233]
This survey offers a consolidated view into the neural DTG paradigm with a structured examination of the approaches, benchmark datasets, and evaluation protocols.
We highlight promising avenues for DTG research that not only focus on the design of linguistically capable systems but also systems that exhibit fairness and accountability.
arXiv Detail & Related papers (2022-07-25T23:21:48Z)
- Survey of Hallucination in Natural Language Generation [69.9926849848132]
Natural Language Generation (NLG) has improved dramatically in recent years thanks to the development of sequence-to-sequence deep learning technologies.
Deep learning based generation is prone to hallucinate unintended text, which degrades the system performance.
This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
arXiv Detail & Related papers (2022-02-08T03:55:01Z)
- A Survey of Natural Language Generation [30.134226859027642]
This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades.
It focuses on deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology.
arXiv Detail & Related papers (2021-12-22T09:08:00Z)
- Deep Learning for Text Style Transfer: A Survey [71.8870854396927]
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text.
We present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017.
We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data.
arXiv Detail & Related papers (2020-11-01T04:04:43Z)
- Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches, tracing how task requirements shape design choices across generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the imperative techniques with respect to learning paradigms, pretraining, modeling approaches, decoding and the key challenges outstanding in the field in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
- A Survey of Knowledge-Enhanced Text Generation [81.24633231919137]
The goal of text generation is to make machines express in human language.
Various neural encoder-decoder models have been proposed to achieve this goal by learning to map input text to output text. However, the input text alone often provides too little knowledge to generate the desired output. To address this issue, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
arXiv Detail & Related papers (2020-10-09T06:46:46Z)
- Neural Language Generation: Formulation, Methods, and Evaluation [13.62873478165553]
Recent advances in neural network-based generative modeling have reignited the hopes in having computer systems capable of seamlessly conversing with humans.
High-capacity deep learning models trained on large-scale datasets demonstrate unparalleled abilities to learn patterns in the data, even in the absence of explicit supervision signals.
There is no standard way to assess the quality of text produced by these generative models, which constitutes a serious bottleneck towards the progress of the field.
arXiv Detail & Related papers (2020-07-31T00:08:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.