DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text Generation
- URL: http://arxiv.org/abs/2106.00791v1
- Date: Tue, 1 Jun 2021 20:56:10 GMT
- Title: DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text Generation
- Authors: Xinyu Hua, Ashwin Sreevatsa, and Lu Wang
- Abstract summary: We study the task of long-form opinion text generation, which faces at least two distinct challenges.
Existing neural generation models fall short of coherence, thus requiring efficient content planning.
We propose DYPLOC, a generation framework that conducts dynamic planning of content while generating the output based on a novel design of mixed language models.
- Score: 10.477090501569284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the task of long-form opinion text generation, which faces at least
two distinct challenges. First, existing neural generation models fall short of
coherence, thus requiring efficient content planning. Second, diverse types of
information are needed to guide the generator to cover both subjective and
objective content. To this end, we propose DYPLOC, a generation framework that
conducts dynamic planning of content while generating the output based on a
novel design of mixed language models. To enrich the generation with diverse
content, we further propose to use large pre-trained models to predict relevant
concepts and to generate claims. We experiment with two challenging tasks on
newly collected datasets: (1) argument generation with Reddit ChangeMyView, and
(2) writing articles using New York Times' Opinion section. Automatic
evaluation shows that our model significantly outperforms competitive
comparisons. Human judges further confirm that our generations are more
coherent with richer content.
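The "mixed language models" in the abstract combine per-content-item token distributions into a single decoding distribution, weighted by each item's predicted relevance. A minimal NumPy sketch of that mixture step; the function name, weights, and toy distributions are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mixed_lm_next_token(item_probs, item_weights):
    """One mixture decoding step: each content item contributes its own
    next-token distribution; the output marginalizes over the items.
    item_probs: (num_items, vocab_size) per-item token distributions
    item_weights: (num_items,) plan-predicted relevance weights (sum to 1)
    """
    item_probs = np.asarray(item_probs, dtype=float)
    item_weights = np.asarray(item_weights, dtype=float)
    # Marginalize over content items: p(w) = sum_c p(c) * p(w | c)
    mixed = item_weights @ item_probs
    return mixed / mixed.sum()  # renormalize against rounding drift

# Toy example: 2 content items, vocabulary of 3 tokens
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.3, 0.6]]
weights = [0.5, 0.5]
mixed = mixed_lm_next_token(probs, weights)  # mixture: [0.4, 0.25, 0.35]
```

Because the weights can be re-predicted at every step, the plan is dynamic: as generation proceeds, different content items can dominate the mixture.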
Related papers
- Tell2Design: A Dataset for Language-Guided Floor Plan Generation [21.686370988228614]
We consider the task of generating designs directly from natural language descriptions.
Designs must satisfy different constraints that are not present in generating artistic images.
arXiv Detail & Related papers (2023-11-27T15:49:29Z)
- MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective [22.69509556890676]
We propose a novel multi-task training strategy for coherent text generation grounded on the cognitive theory of writing.
We extensively evaluate our model on three open-ended generation tasks including story generation, news article writing and argument generation.
arXiv Detail & Related papers (2022-10-26T11:55:41Z)
- On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization [89.94078728495423]
We show that recent advances in each modality, CLIP image representations and scaling of language models, do not consistently improve multimodal self-rationalization of tasks with multimodal inputs.
Our findings call for a backbone modelling approach that can be built on to advance text generation from images and text beyond image captioning.
arXiv Detail & Related papers (2022-05-24T00:52:40Z)
- PLANET: Dynamic Content Planning in Autoregressive Transformers for Long-form Text Generation [47.97523895218194]
We propose a novel generation framework leveraging autoregressive self-attention mechanism to conduct content planning and surface realization dynamically.
Our framework enriches the Transformer decoder with latent representations to maintain sentence-level semantic plans grounded by bag-of-words.
arXiv Detail & Related papers (2022-03-17T05:52:35Z)
- Data-to-text Generation with Variational Sequential Planning [74.3955521225497]
We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input.
We propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way.
We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation.
arXiv Detail & Related papers (2022-02-28T13:17:59Z)
- Improving Generation and Evaluation of Visual Stories via Semantic Consistency [72.00815192668193]
Given a series of natural language captions, an agent must generate a sequence of images that correspond to the captions.
Prior work has introduced recurrent generative models which outperform text-to-image synthesis models on this task.
We present a number of improvements to prior modeling approaches, including the addition of a dual learning framework.
arXiv Detail & Related papers (2021-05-20T20:42:42Z)
- Data-to-text Generation with Macro Planning [61.265321323312286]
We propose a neural model with a macro planning stage followed by a generation stage reminiscent of traditional methods.
Our approach outperforms competitive baselines in terms of automatic and human evaluation.
arXiv Detail & Related papers (2021-02-04T16:32:57Z)
- Outline to Story: Fine-grained Controllable Story Generation from Cascaded Events [39.577220559911055]
We propose a new task named "Outline to Story" (O2S) as a test bed for fine-grained controllable generation of long text.
We then create datasets for future benchmarks, built by state-of-the-art keyword extraction techniques.
arXiv Detail & Related papers (2021-01-04T08:16:21Z)
- XingGAN for Person Image Generation [149.54517767056382]
We propose a novel Generative Adversarial Network (XingGAN) for person image generation tasks.
XingGAN consists of two generation branches that model the person's appearance and shape information.
We show that the proposed XingGAN advances the state-of-the-art performance in terms of objective quantitative scores and subjective visual realness.
arXiv Detail & Related papers (2020-07-17T23:40:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.