Neural Language Generation: Formulation, Methods, and Evaluation
- URL: http://arxiv.org/abs/2007.15780v1
- Date: Fri, 31 Jul 2020 00:08:28 GMT
- Title: Neural Language Generation: Formulation, Methods, and Evaluation
- Authors: Cristina Garbacea, Qiaozhu Mei
- Abstract summary: Recent advances in neural network-based generative modeling have reignited hopes of building computer systems capable of seamlessly conversing with humans.
High-capacity deep learning models trained on large-scale datasets demonstrate unparalleled abilities to learn patterns in data even in the absence of explicit supervision signals.
There is no standard way to assess the quality of text produced by these generative models, which constitutes a serious bottleneck for the progress of the field.
- Score: 13.62873478165553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in neural network-based generative modeling have reignited hopes of building computer systems capable of seamlessly conversing with humans and understanding natural language. Neural architectures have been employed to generate text with varying degrees of success, in a multitude of contexts and tasks that fulfil diverse user needs. Notably, high-capacity deep learning models trained on large-scale datasets demonstrate unparalleled abilities to learn patterns in data even in the absence of explicit supervision signals, opening up a plethora of new possibilities for producing realistic and coherent text. While the field of natural language generation is evolving rapidly, many open challenges remain. In this survey we formally define and categorize the problem of natural language generation. We review particular application tasks that are instantiations of these general formulations, in which generating natural language is of practical importance. Next, we provide a comprehensive outline of methods and neural architectures employed for generating diverse texts. Nevertheless, there is no standard way to assess the quality of text produced by these generative models, which constitutes a serious bottleneck for the progress of the field. To this end, we also review current approaches to evaluating natural language generation systems. We hope this survey provides an informative overview of formulations, methods, and assessments of neural natural language generation.
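Since the abstract singles out evaluation as a major bottleneck, the following is a minimal, illustrative sketch of two simple automatic metrics commonly discussed in NLG evaluation: clipped n-gram precision against a reference (the core of BLEU-style scores) and distinct-n diversity. The function names and toy sentences are assumptions made for illustration; they are not taken from the survey, and practical evaluations rely on more robust implementations and, ideally, human judgments.

```python
# Illustrative sketch of two simple automatic NLG metrics: clipped n-gram
# precision against a reference and distinct-n diversity. Names and toy
# examples are hypothetical; real evaluations use more robust tooling
# (e.g. BLEU with smoothing and a brevity penalty, or learned metrics).
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(candidate, reference, n=2):
    """Clipped n-gram precision of a candidate against a single reference."""
    cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

def distinct_n(candidates, n=2):
    """Fraction of unique n-grams across generated outputs (diversity)."""
    all_grams = [g for c in candidates for g in ngrams(c, n)]
    return len(set(all_grams)) / len(all_grams) if all_grams else 0.0

if __name__ == "__main__":
    hyp = "the cat sat on the mat".split()
    ref = "the cat was sitting on the mat".split()
    print("bigram precision:", round(ngram_precision(hyp, ref, n=2), 3))
    print("distinct-2:", round(distinct_n([hyp, ref], n=2), 3))
```

Surface-overlap metrics like these are easy to compute but are known to correlate only loosely with human judgments of quality, which is one reason the survey treats evaluation as an open problem.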
Related papers
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Controllable Text Generation for Open-Domain Creativity and Fairness [36.744208990024575]
I introduce our recent work on controllable text generation to enhance the creativity and fairness of language generation models.
We explore hierarchical generation and constrained decoding, with applications to creative language generation including story, poetry, and figurative language (a minimal constrained-decoding sketch follows after this entry).
arXiv Detail & Related papers (2022-09-24T22:40:01Z)
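As a concrete illustration of the constrained decoding mentioned in the entry above, here is a minimal sketch of greedy decoding under a negative lexical constraint (a banned-word list). The `next_token_logprobs` function is a hypothetical stand-in for a language model, and the toy distribution is invented for illustration; this is not the method of either paper.

```python
# Minimal sketch of lexically constrained greedy decoding: at each step the
# highest-scoring token that does not violate a negative constraint (a set
# of banned words) is chosen. `next_token_logprobs` is a hypothetical
# stand-in for a language model returning {token: log-probability}.

def next_token_logprobs(prefix):
    # Toy, hand-written distribution for illustration only
    # (a real LM conditions on the full prefix).
    if prefix and prefix[-1] in {"horrible", "decent"}:
        return {"movie": -0.3, "<eos>": -1.5}
    if prefix and prefix[-1] == "movie":
        return {"<eos>": -0.1}
    return {"horrible": -0.4, "decent": -1.1, "<eos>": -3.0}

def constrained_greedy_decode(prompt, banned_words, max_len=10):
    """Greedy decoding that never emits a banned token."""
    output = list(prompt)
    for _ in range(max_len):
        logprobs = next_token_logprobs(output)
        # Filter out tokens that would violate the negative constraint.
        allowed = {t: lp for t, lp in logprobs.items() if t not in banned_words}
        if not allowed:
            break
        token = max(allowed, key=allowed.get)
        if token == "<eos>":
            break
        output.append(token)
    return output

print(constrained_greedy_decode(["this", "is"], banned_words={"horrible"}))
```

Positive constraints (tokens or phrases that must appear) need more machinery, for example constrained beam search that tracks which constraints each hypothesis has already satisfied.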
- Why is constrained neural language generation particularly challenging? [13.62873478165553]
We present an extensive survey on the emerging topic of constrained neural language generation.
We distinguish between conditions and constraints, present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation.
Our aim is to highlight recent progress and trends in this emerging field, pointing out the most promising directions and current limitations toward advancing the state of the art in constrained neural language generation research.
arXiv Detail & Related papers (2022-06-11T02:07:33Z)
- Recent Advances in Neural Text Generation: A Task-Agnostic Survey [20.932460734129585]
This paper offers a comprehensive and task-agnostic survey of the recent advancements in neural text generation.
We categorize these advancements into four key areas: data construction, neural frameworks, training and inference strategies, and evaluation metrics.
We explore the future directions for the advancement of neural text generation, which encompass the utilization of neural pipelines and the incorporation of background knowledge.
arXiv Detail & Related papers (2022-03-06T20:47:49Z)
- Deep Latent-Variable Models for Text Generation [7.119436003155924]
Deep neural network-based end-to-end architectures have been widely adopted.
The end-to-end approach conflates all sub-modules, which used to be designed with complex handcrafted rules, into a holistic encoder-decoder architecture.
This dissertation presents how deep latent-variable models can improve over the standard encoder-decoder model for text generation (a minimal sketch of the latent-variable mechanism follows after this entry).
arXiv Detail & Related papers (2022-03-03T23:06:39Z)
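To make the latent-variable idea in the entry above concrete, the sketch below shows the core mechanism a VAE-style generator adds on top of a plain encoder-decoder: a reparameterized sample of a latent code z from an encoder-produced Gaussian, plus the analytic KL penalty added to the training loss. The vector size, names, and random values are illustrative assumptions, not the dissertation's actual model.

```python
# Sketch of the latent-variable mechanism a VAE-style text generator adds on
# top of a plain encoder-decoder: sample z with the reparameterization trick
# and compute the analytic KL(q(z|x) || N(0, I)) term of the training loss.
# Shapes, names, and values are illustrative only.
import numpy as np

def sample_latent(mu, logvar, rng):
    """Reparameterized sample z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL divergence between N(mu, diag(exp(logvar))) and N(0, I)."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

rng = np.random.default_rng(0)
# Pretend these came from an encoder that read the source text.
mu, logvar = rng.standard_normal(16), rng.standard_normal(16) * 0.1
z = sample_latent(mu, logvar, rng)           # fed to the decoder as extra input
loss_kl = kl_to_standard_normal(mu, logvar)  # added to the reconstruction loss
print(z.shape, round(float(loss_kl), 3))
```

At generation time, sampling different z values from the prior gives the decoder a handle for producing diverse outputs from the same input, which is the main benefit such models claim over a plain encoder-decoder.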
- Typical Decoding for Natural Language Generation [76.69397802617064]
We study why high-probability texts can be dull or repetitive.
We show that typical sampling offers competitive performance in terms of quality (a minimal sketch of the sampling rule follows after this entry).
arXiv Detail & Related papers (2022-02-01T18:58:45Z)
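The following is a minimal sketch of the locally typical sampling rule as I read it from the entry above: keep the tokens whose surprisal is closest to the entropy of the next-token distribution, just enough of them to cover a target probability mass tau, then renormalize and sample. The toy distribution and the tau value are illustrative assumptions.

```python
# Minimal sketch of locally typical sampling: restrict sampling to tokens
# whose surprisal (-log p) is closest to the entropy of the next-token
# distribution, keeping just enough tokens to cover a target mass tau,
# then renormalize and sample. Toy distribution and tau are illustrative.
import math
import random

def typical_sample(probs, tau=0.9, rng=random):
    """probs: dict mapping token -> probability for the next-token distribution."""
    entropy = -sum(p * math.log(p) for p in probs.values() if p > 0)
    # Rank tokens by how far their surprisal is from the entropy.
    ranked = sorted(probs, key=lambda t: abs(-math.log(probs[t]) - entropy))
    kept, mass = [], 0.0
    for token in ranked:
        kept.append(token)
        mass += probs[token]
        if mass >= tau:
            break
    # Renormalize over the kept set and sample.
    total = sum(probs[t] for t in kept)
    return rng.choices(kept, weights=[probs[t] / total for t in kept], k=1)[0]

next_token_probs = {"the": 0.45, "a": 0.25, "and": 0.15, "cat": 0.10, "zebra": 0.05}
print(typical_sample(next_token_probs, tau=0.9))
```

In contrast, greedy or beam search concentrates on the highest-probability tokens, which is precisely the behaviour the paper links to dull, repetitive text.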
- Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z)
- Deep Learning for Text Style Transfer: A Survey [71.8870854396927]
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text.
We present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017.
We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data.
arXiv Detail & Related papers (2020-11-01T04:04:43Z)
- A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios [30.391291221959545]
Deep neural networks and huge language models are becoming omnipresent in natural language applications.
As they are known for requiring large amounts of training data, there is a growing body of work to improve the performance in low-resource settings.
Motivated by the recent fundamental changes towards neural models and the popular pre-train and fine-tune paradigm, we survey promising approaches for low-resource natural language processing.
arXiv Detail & Related papers (2020-10-23T11:22:01Z)
- Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches that carry across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the key techniques with respect to learning paradigms, pretraining, modeling approaches, and decoding, along with the main outstanding challenges in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
- A Survey of Knowledge-Enhanced Text Generation [81.24633231919137]
The goal of text generation is to make machines express themselves in human language.
Various neural encoder-decoder models have been proposed to achieve this goal by learning to map input text to output text.
However, the input text alone often provides limited knowledge to generate the desired output, so researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
arXiv Detail & Related papers (2020-10-09T06:46:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.