Towards Document-Level Paraphrase Generation with Sentence Rewriting and
Reordering
- URL: http://arxiv.org/abs/2109.07095v1
- Date: Wed, 15 Sep 2021 05:53:40 GMT
- Title: Towards Document-Level Paraphrase Generation with Sentence Rewriting and
Reordering
- Authors: Zhe Lin, Yitao Cai and Xiaojun Wan
- Abstract summary: We propose CoRPG (Coherence Relationship guided Paraphrase Generation) for document-level paraphrase generation.
We use a graph GRU to encode the coherence relationship graph and obtain a coherence-aware representation for each sentence.
Our model can generate document paraphrases with greater diversity and better semantic preservation.
- Score: 88.08581016329398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Paraphrase generation is an important task in natural language processing.
Previous works focus on sentence-level paraphrase generation, while ignoring
document-level paraphrase generation, which is a more challenging and valuable
task. In this paper, we explore the task of document-level paraphrase
generation for the first time and focus on inter-sentence diversity by
considering sentence rewriting and reordering. We propose CoRPG (Coherence
Relationship guided Paraphrase Generation), which leverages a graph GRU to encode
the coherence relationship graph and obtain a coherence-aware representation for
each sentence; these representations are then used to rearrange the multiple
(possibly modified) input sentences. We create a pseudo document-level paraphrase dataset
for training CoRPG. Automatic evaluation results show CoRPG outperforms several
strong baseline models on the BERTScore and diversity scores. Human evaluation
also shows that our model can generate document paraphrases with greater diversity
and better semantic preservation.
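The abstract describes, at a high level, how a graph GRU turns sentence representations and a coherence relationship graph into coherence-aware representations. Below is a minimal, illustrative sketch of one way such a layer could look; the class name, dimensions, and mean-over-neighbours message passing are assumptions for illustration, not CoRPG's actual implementation.

```python
# Hypothetical sketch: coherence-aware sentence encoding with a graph GRU.
# Assumes sentence vectors and a coherence adjacency matrix already exist.
import torch
import torch.nn as nn

class GraphGRULayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.cell = nn.GRUCell(input_size=dim, hidden_size=dim)

    def forward(self, sent_reprs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # sent_reprs: (num_sentences, dim) vectors from a sentence encoder
        # adj: (num_sentences, num_sentences) coherence relationship graph
        degree = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        messages = (adj @ sent_reprs) / degree  # mean over coherence neighbours
        # The GRU cell fuses the aggregated neighbour message with each
        # sentence's own vector, yielding a coherence-aware representation.
        return self.cell(messages, sent_reprs)

# Toy usage: 4 sentences, 256-dimensional representations, a symmetric graph.
sents = torch.randn(4, 256)
graph = torch.tensor([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=torch.float)
coherence_aware = GraphGRULayer(256)(sents, graph)
print(coherence_aware.shape)  # torch.Size([4, 256])
```

The coherence-aware vectors could then condition a decoder that rewrites and reorders the input sentences, which is the role the abstract assigns to them.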
Related papers
- Retrieval is Accurate Generation [99.24267226311157]
We introduce a novel method that selects context-aware phrases from a collection of supporting documents.
Our model achieves the best performance and the lowest latency among several retrieval-augmented baselines.
arXiv Detail & Related papers (2024-02-27T14:16:19Z) - ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR
Back-Translation [59.91139600152296]
ParaAMR is a large-scale syntactically diverse paraphrase dataset created by abstract meaning representation back-translation.
We show that ParaAMR can be used to improve three NLP tasks: learning sentence embeddings, syntactically controlled paraphrase generation, and data augmentation for few-shot learning.
arXiv Detail & Related papers (2023-05-26T02:27:33Z) - Pushing Paraphrase Away from Original Sentence: A Multi-Round Paraphrase
Generation Approach [97.38622477085188]
We propose BTmPG (Back-Translation guided multi-round Paraphrase Generation) to improve the diversity of paraphrases.
We evaluate BTmPG on two benchmark datasets.
arXiv Detail & Related papers (2021-09-04T13:12:01Z) - Unsupervised Deep Keyphrase Generation [14.544869226959612]
Keyphrase generation aims to summarize long documents with a collection of salient phrases.
Deep neural models have demonstrated remarkable success in this task and can even predict keyphrases that are absent from the document.
We present a novel method for keyphrase generation, AutoKeyGen, without the supervision of any human annotation.
arXiv Detail & Related papers (2021-04-18T05:53:19Z) - Pre-training via Paraphrasing [96.79972492585112]
We introduce MARGE, a pre-trained sequence-to-sequence model learned with an unsupervised multi-lingual paraphrasing objective.
We show it is possible to jointly learn to do retrieval and reconstruction, given only a random initialization.
For example, with no additional task-specific training we achieve BLEU scores of up to 35.8 for document translation.
arXiv Detail & Related papers (2020-06-26T14:43:43Z) - Unsupervised Paraphrase Generation using Pre-trained Language Models [0.0]
OpenAI's GPT-2 is notable for its capability to generate fluent, well-formulated, grammatically consistent text.
We leverage this generation capability of GPT-2 to generate paraphrases without any supervision from labelled data.
Our experiments show that paraphrases generated with our model are of good quality, are diverse, and improve downstream task performance when used for data augmentation (a minimal prompting sketch appears after this list).
arXiv Detail & Related papers (2020-06-09T19:40:19Z) - Neural Syntactic Preordering for Controlled Paraphrase Generation [57.5316011554622]
Our work uses syntactic transformations to softly "reorder" the source sentence and guide our neural paraphrasing model.
First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model.
Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order.
arXiv Detail & Related papers (2020-05-05T09:02:25Z)
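As a rough illustration of the last entry's position-embedding idea, the sketch below indexes the position table with a proposed rearrangement instead of the usual left-to-right positions, so the encoder sees the source words as if they sat in the reordered positions. The permutation, vocabulary, and dimensions are toy assumptions, not the paper's implementation.

```python
# Hypothetical sketch: encoding a proposed rearrangement as position embeddings.
import torch
import torch.nn as nn

vocab_size, dim, max_len = 1000, 64, 128
tok_emb = nn.Embedding(vocab_size, dim)
pos_emb = nn.Embedding(max_len, dim)

# Source sentence as token ids, plus a proposed rearrangement: the token at
# source index i should be treated as if it occupied position order[i].
src = torch.tensor([[12, 7, 256, 33, 90]])    # (batch=1, src_len=5)
order = torch.tensor([[2, 0, 1, 4, 3]])       # toy syntactic rearrangement

# Index the position table with the reordered positions rather than 0..n-1,
# signalling the desired attention order to the downstream encoder-decoder.
src_repr = tok_emb(src) + pos_emb(order)
print(src_repr.shape)  # torch.Size([1, 5, 64])
```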
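For the unsupervised GPT-2 paraphrasing entry above, the following is a minimal prompting sketch using the Hugging Face Transformers API; the prompt format and sampling settings are assumptions for demonstration and do not reproduce that paper's exact procedure.

```python
# Hypothetical sketch: prompting GPT-2 for a paraphrase with nucleus sampling.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

sentence = "Paraphrase generation is an important task in natural language processing."
prompt = f"Original: {sentence}\nParaphrase:"   # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus sampling encourages lexical diversity in the generated continuation.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```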