Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey
- URL: http://arxiv.org/abs/2010.07279v2
- Date: Fri, 26 Mar 2021 00:31:52 GMT
- Title: Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey
- Authors: Khyathi Raghavi Chandu and Alan W Black
- Abstract summary: This paper surveys the components of modeling approaches that relay task impacts across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the imperative techniques with respect to learning paradigms, pretraining, modeling approaches, and decoding, along with the key challenges outstanding in each.
- Score: 54.34370423151014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural text generation has metamorphosed into several critical natural
language applications, ranging from text completion to free-form narrative
generation. To make progress in text generation research, it is critical to
absorb the existing body of work and position ourselves in this massively
growing field. Specifically, this paper surveys the fundamental components of
modeling approaches that relay task-agnostic impacts across various generation
tasks such as storytelling, summarization, and translation. In this context, we
present an abstraction of the imperative techniques with respect to learning
paradigms, pretraining, modeling approaches, and decoding, together with the
key challenges outstanding in each of them. Thereby, we deliver a one-stop
destination for researchers in the field, offering a perspective on where to
situate their work and how it impacts other closely related generation tasks.
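Among the components enumerated above, decoding is the easiest to make
concrete in isolation. The following is an illustrative sketch only, not code
from the paper: it contrasts greedy decoding with temperature-scaled top-k
sampling, and the toy vocabulary and `toy_logits` scorer are hypothetical
stand-ins for a trained model's next-token logits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-word vocabulary; a real model has tens of thousands.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def toy_logits(prefix):
    # Stand-in for a trained LM: a real model would condition on
    # `prefix`; here we fabricate logits to keep the sketch runnable.
    return np.array([2.0, 1.5, 0.3, 0.2, -1.0])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def greedy_step(prefix):
    # Greedy decoding: always emit the single most likely token.
    return VOCAB[int(np.argmax(toy_logits(prefix)))]

def top_k_step(prefix, k=3, temperature=0.8):
    # Top-k sampling: renormalize over the k most likely tokens;
    # temperature < 1 sharpens, > 1 flattens the distribution.
    logits = toy_logits(prefix) / temperature
    top = np.argsort(logits)[-k:]
    return VOCAB[int(rng.choice(top, p=softmax(logits[top])))]

prefix = ["the"]
print("greedy:", greedy_step(prefix))  # deterministic
print("top-k :", top_k_step(prefix))   # stochastic
```

Greedy decoding is deterministic and tends toward bland, repetitive output,
while sampling-based decoders trade some fidelity for diversity, a trade-off
commonly weighed when choosing a decoder.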
Related papers
- Recent Trends in Linear Text Segmentation: a Survey [10.740243165055743]
Linear text segmentation has recently attracted considerable interest as a result of the surge of text, video, and audio available on the web.
We provide an extensive overview of current advances in linear text segmentation, describing the state of the art in terms of resources and approaches for the task.
arXiv Detail & Related papers (2024-11-25T17:48:59Z)
- Visual Text Meets Low-level Vision: A Comprehensive Survey on Visual Text Processing [4.057550183467041]
The field of visual text processing has experienced a surge in research, driven by the advent of fundamental generative models.
We present a comprehensive, multi-perspective analysis of recent advancements in this field.
arXiv Detail & Related papers (2024-02-05T15:13:20Z)
- Automatic and Human-AI Interactive Text Generation [27.05024520190722]
This tutorial aims to provide an overview of the state-of-the-art natural language generation research.
Text-to-text generation tasks are more constrained in terms of semantic consistency and targeted language styles.
arXiv Detail & Related papers (2023-10-05T20:26:15Z)
- A survey on text generation using generative adversarial networks [0.0]
This work presents a thorough review concerning recent studies and text generation advancements using Generative Adversarial Networks.
The use of adversarial learning for text generation is promising, as it provides alternative ways to generate so-called "natural" language.
arXiv Detail & Related papers (2022-12-20T17:54:08Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- A Survey on Retrieval-Augmented Text Generation [53.04991859796971]
Retrieval-augmented text generation has remarkable advantages and has achieved state-of-the-art performance in many NLP tasks.
It first highlights the generic paradigm of retrieval-augmented generation and then reviews notable approaches task by task (a minimal retrieve-then-generate sketch follows this list).
arXiv Detail & Related papers (2022-02-02T16:18:41Z)
- Pretrained Language Models for Text Generation: A Survey [46.03096493973206]
We present an overview of the major advances achieved in the topic of pretrained language models (PLMs) for text generation.
We discuss how to adapt existing PLMs to model different input data and satisfy special properties in the generated text.
arXiv Detail & Related papers (2021-05-21T12:27:44Z)
- Deep Learning for Text Style Transfer: A Survey [71.8870854396927]
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text.
We present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017.
We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data.
arXiv Detail & Related papers (2020-11-01T04:04:43Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task and then fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
This paper explores the landscape of transfer learning for NLP through a unified framework that converts every text-based language problem into a text-to-text format (a usage sketch follows this list).
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
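As a companion to the retrieval-augmented text generation entry above, here
is a minimal retrieve-then-generate sketch. It is an assumption-laden
illustration, not the surveyed systems' method: the bag-of-words retriever
and the template-based `generate` are hypothetical placeholders for a dense
retriever and a conditional language model.

```python
from collections import Counter
import math

# Tiny in-memory corpus standing in for an external knowledge store.
CORPUS = [
    "The Eiffel Tower is in Paris.",
    "Mount Fuji is the highest peak in Japan.",
    "The Nile is the longest river in Africa.",
]

def bow(text):
    # Crude bag-of-words vector; real systems use dense embeddings.
    return Counter(w.strip(".?!,") for w in text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Step 1 of the paradigm: fetch the k passages most similar to the query.
    q = bow(query)
    return sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def generate(query, passages):
    # Step 2: condition generation on the query plus retrieved evidence.
    # A real system feeds both into a seq2seq model; a template keeps
    # this sketch runnable without model weights.
    return f"Q: {query} | Evidence: {' '.join(passages)}"

query = "Where is the Eiffel Tower?"
print(generate(query, retrieve(query)))
```

The two-step structure (retrieve, then condition the generator on the
retrieved evidence) is the generic paradigm the survey highlights; everything
else here is replaceable detail.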
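The unified text-to-text framing from the transfer-learning entry above
reduces every task to mapping one string to another by prepending a task
prefix. Below is a minimal usage sketch, assuming the Hugging Face
`transformers` library (with PyTorch) and the public `t5-small` checkpoint
are installed; the prefixes follow the conventions described in the T5 paper.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Every task becomes "task prefix + input text -> output text".
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Neural text generation has grown into many critical "
    "applications, from text completion to narrative generation.",
]

for text in examples:
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because inputs and outputs are plain strings, one pretrained model and one
training objective can serve translation, summarization, classification, and
more, which is the framework's central point.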
This list is automatically generated from the titles and abstracts of the papers on this site.