Controllable Text Simplification with Explicit Paraphrasing
- URL: http://arxiv.org/abs/2010.11004v3
- Date: Wed, 14 Apr 2021 23:57:07 GMT
- Title: Controllable Text Simplification with Explicit Paraphrasing
- Authors: Mounica Maddela, Fernando Alva-Manchego, Wei Xu
- Abstract summary: Text Simplification improves the readability of sentences through several rewriting transformations, such as lexical paraphrasing, deletion, and splitting.
Current simplification systems are predominantly sequence-to-sequence models that are trained end-to-end to perform all these operations simultaneously.
We propose a novel hybrid approach that leverages linguistically-motivated rules for splitting and deletion, and couples them with a neural paraphrasing model to produce varied rewriting styles.
- Score: 88.02804405275785
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Text Simplification improves the readability of sentences through several
rewriting transformations, such as lexical paraphrasing, deletion, and
splitting. Current simplification systems are predominantly
sequence-to-sequence models that are trained end-to-end to perform all these
operations simultaneously. However, such systems limit themselves to mostly
deleting words and cannot easily adapt to the requirements of different target
audiences. In this paper, we propose a novel hybrid approach that leverages
linguistically-motivated rules for splitting and deletion, and couples them
with a neural paraphrasing model to produce varied rewriting styles. We
introduce a new data augmentation method to improve the paraphrasing capability
of our model. Through automatic and manual evaluations, we show that our
proposed model establishes a new state-of-the-art for the task, paraphrasing
more often than the existing systems, and can control the degree of each
simplification operation applied to the input texts.
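To make the abstract's pipeline concrete, here is a minimal, self-contained Python sketch of the hybrid idea: a deterministic rule performs splitting, a stubbed component stands in for the neural paraphraser, and explicit arguments control which operations are applied. The toy splitting rule and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the hybrid pipeline described in the abstract:
# deterministic rules handle splitting/deletion, and a separate neural
# paraphraser (stubbed here) handles lexical rewriting. The rule below
# is a toy, not the paper's actual linguistically-motivated rule set.
import re

def rule_based_split(sentence: str) -> list[str]:
    """Split on a coordinating conjunction between two clauses (toy rule)."""
    match = re.search(r",\s+(and|but|because)\s+", sentence)
    if not match:
        return [sentence]
    left = sentence[: match.start()].strip()
    right = sentence[match.end():].strip()
    return [left + ".", right[0].upper() + right[1:]]

def paraphrase(clause: str, degree: float) -> str:
    """Stub for a neural paraphraser; `degree` would control how much of
    the clause is rewritten (e.g., via a control token at training time)."""
    return clause  # a real system would return a lexically simplified clause

def simplify(sentence: str, split: bool = True, paraphrase_degree: float = 0.5) -> str:
    clauses = rule_based_split(sentence) if split else [sentence]
    return " ".join(paraphrase(c, paraphrase_degree) for c in clauses)

print(simplify("The committee approved the proposal, because it reduced costs."))
# -> "The committee approved the proposal. It reduced costs."
```

The stub only marks where the trained paraphrasing model plugs in; the point is that splitting and paraphrasing are separate, individually controllable steps.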
Related papers
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata into continuous prompts.
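As a rough illustration of producing continuous prompts from task instructions, here is a hedged PyTorch sketch: a small trainable module maps an instruction embedding to a sequence of prompt vectors that are prepended to the frozen PLM's input embeddings. The module name, dimensions, and architecture are placeholders, not the actual PRopS design.

```python
# Hedged sketch of instruction-to-prompt mapping: a small trainable network
# produces `prompt_len` continuous vectors from an instruction embedding and
# prepends them to the (frozen) PLM's input embeddings.
import torch
import torch.nn as nn

class InstructionToPrompt(nn.Module):
    def __init__(self, instr_dim: int = 768, plm_dim: int = 768, prompt_len: int = 10):
        super().__init__()
        self.prompt_len = prompt_len
        self.plm_dim = plm_dim
        # Only this projection is trained; the PLM itself stays frozen.
        self.proj = nn.Sequential(
            nn.Linear(instr_dim, plm_dim * prompt_len),
            nn.Tanh(),
        )

    def forward(self, instr_embedding: torch.Tensor, input_embeds: torch.Tensor):
        # instr_embedding: (batch, instr_dim); input_embeds: (batch, seq, plm_dim)
        prompts = self.proj(instr_embedding).view(-1, self.prompt_len, self.plm_dim)
        return torch.cat([prompts, input_embeds], dim=1)

module = InstructionToPrompt()
out = module(torch.randn(2, 768), torch.randn(2, 20, 768))
print(out.shape)  # torch.Size([2, 30, 768])
```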
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
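A hedged sketch of the span-replacement idea: mask one entity at a time in the summary and let a model, stubbed here, re-predict it from the source. The helper names and single-masking loop are illustrative, not the Span-Fact models themselves.

```python
# Illustrative sketch of iterative entity correction: mask one entity at a
# time in the summary and have a model (stubbed) re-fill it from the source.
MASK = "<mask>"

def fill_span(source: str, masked_summary: str) -> str:
    """Stub for a QA-style span selector that picks the answer from `source`."""
    return "Lisbon"  # a real model would extract the correct span

def correct_entities(source: str, summary: str, entities: list[str]) -> str:
    for ent in entities:
        masked = summary.replace(ent, MASK, 1)   # single-masking strategy
        summary = masked.replace(MASK, fill_span(source, masked), 1)
    return summary

source = "The summit was held in Lisbon in March."
summary = "The summit took place in Madrid in March."
print(correct_entities(source, summary, ["Madrid"]))
# -> "The summit took place in Lisbon in March."
```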
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
- Neural Syntactic Preordering for Controlled Paraphrase Generation [57.5316011554622]
Our work uses syntactic transformations to softly "reorder" the source sentence and guide our neural paraphrasing model.
First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model.
Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order.
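The position-embedding trick can be illustrated with a small PyTorch sketch: a proposed rearrangement of source indices is converted into position ids that replace the default 0..n-1 positions. The example sentence and permutation are hypothetical.

```python
# Toy sketch of encoding a proposed source reordering as position ids:
# the encoder still sees the original tokens, but with positions that
# reflect the target-side order proposed by a separate model.
import torch

tokens = ["yesterday", "she", "visited", "the", "museum"]
# Hypothetical rearrangement: "she visited the museum yesterday"
order = [1, 2, 3, 4, 0]  # order[i] = source index appearing at position i

position_ids = torch.empty(len(tokens), dtype=torch.long)
for new_pos, src_idx in enumerate(order):
    position_ids[src_idx] = new_pos

print(position_ids.tolist())  # [4, 0, 1, 2, 3]
# These ids would replace the default 0..n-1 positions when embedding the
# tokens, nudging the paraphraser to attend to source words in this order.
```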
arXiv Detail & Related papers (2020-05-05T09:02:25Z)
- Improving Adversarial Text Generation by Modeling the Distant Future [155.83051741029732]
We consider a text planning scheme and present a model-based imitation-learning approach that addresses the difficulty of maintaining long-range coherence in adversarial text generation.
We propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization.
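One plausible, heavily simplified way to realize such intermediate rewards is sketched below: a guider, stubbed here, predicts a feature vector for the upcoming text, and the per-step reward is the similarity between that prediction and the features of what was actually generated. This is an assumption about the mechanism, not the paper's exact formulation.

```python
# Loose sketch of intermediate rewards from a guider: the guider predicts
# features of the distant future from the current prefix, and the reward is
# the similarity to the realized continuation. All components are stand-ins.
import torch
import torch.nn.functional as F

def guider(prefix_feats: torch.Tensor) -> torch.Tensor:
    """Stub guider: predicts features of the future from the prefix."""
    return prefix_feats  # a real guider is a trained network

def intermediate_reward(prefix_feats: torch.Tensor, future_feats: torch.Tensor) -> float:
    pred = guider(prefix_feats)
    return F.cosine_similarity(pred, future_feats, dim=-1).item()

print(intermediate_reward(torch.randn(128), torch.randn(128)))
```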
arXiv Detail & Related papers (2020-05-04T05:45:13Z)
- ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations [97.27005783856285]
This paper introduces ASSET, a new dataset for assessing sentence simplification in English.
We show that simplifications in ASSET are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task.
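For readers who want to inspect the data, the snippet below loads ASSET, assuming it is mirrored on the Hugging Face Hub under the dataset id "asset" with a "simplification" config and "original"/"simplifications" fields; the id and field names are an assumption, not taken from the abstract.

```python
# Quick look at ASSET, assuming it is available on the Hugging Face Hub
# under the id "asset" (dataset id and field names are an assumption).
from datasets import load_dataset

asset = load_dataset("asset", "simplification", split="validation")
example = asset[0]
print(example["original"])
# Each source sentence comes with multiple crowdsourced references that
# cover several rewriting transformations, not just deletion:
for ref in example["simplifications"][:3]:
    print("-", ref)
```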
arXiv Detail & Related papers (2020-05-01T16:44:54Z)
- Syntax-driven Iterative Expansion Language Models for Controllable Text Generation [2.578242050187029]
We propose a new paradigm for introducing a syntactic inductive bias into neural text generation.
Our experiments show that this paradigm is effective at text generation, with quality between LSTMs and Transformers, and comparable diversity.
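The iterative-expansion idea can be illustrated with a toy grammar: start from a syntactic template and repeatedly expand nonterminals until only words remain. The grammar and random expansion choices below are fabricated for illustration; the paper replaces them with learned models.

```python
# Toy illustration of iterative expansion decoding: repeatedly rewrite
# placeholder nonterminals until the sequence contains only words.
import random

EXPANSIONS = {
    "S": ["NP VP"],
    "NP": ["the NN"],
    "VP": ["VB NP", "VB"],
    "NN": ["cat", "dog"],
    "VB": ["chased", "slept"],
}

def expand(sequence: list[str]) -> list[str]:
    out = []
    for symbol in sequence:
        out.extend(random.choice(EXPANSIONS[symbol]).split()
                   if symbol in EXPANSIONS else [symbol])
    return out

seq = ["S"]
while any(s in EXPANSIONS for s in seq):  # a neural model would score expansions
    seq = expand(seq)
print(" ".join(seq))  # e.g. "the cat chased the dog"
```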
arXiv Detail & Related papers (2020-04-05T14:29:40Z)