Non-Autoregressive Sentence Ordering
- URL: http://arxiv.org/abs/2310.12640v1
- Date: Thu, 19 Oct 2023 10:57:51 GMT
- Title: Non-Autoregressive Sentence Ordering
- Authors: Yi Bin, Wenhao Shi, Bin Ji, Jipeng Zhang, Yujuan Ding, Yang Yang
- Abstract summary: We propose a novel Non-Autoregressive Ordering Network, dubbed NAON, which explores bilateral dependencies between sentences and predicts the sentence for each position in parallel.
We conduct extensive experiments on several commonly used datasets, and the results show that our method outperforms all the autoregressive approaches.
- Score: 22.45972496989434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing sentence ordering approaches generally employ encoder-decoder
frameworks with a pointer network to recover coherence by recurrently
predicting each sentence step by step. Such an autoregressive manner only
leverages unilateral dependencies during decoding and cannot fully explore the
semantic dependency between sentences for ordering. To overcome these
limitations, in this paper, we propose a novel Non-Autoregressive Ordering
Network, dubbed NAON, which explores bilateral dependencies between
sentences and predicts the sentence for each position in parallel. We claim
that the non-autoregressive manner is not only applicable but particularly
well suited to the sentence ordering task because of two distinctive
characteristics of the task: 1) each generation target has a deterministic length, and 2) the
sentences and positions should match exclusively. Furthermore, to address the
repetition issue of the naive non-autoregressive Transformer, we introduce an
exclusive loss that constrains positions and sentences to match one-to-one.
To verify the effectiveness of the proposed model, we conduct extensive
experiments on several commonly used datasets, and the results show
that our method outperforms all the autoregressive approaches and yields
competitive performance compared with the state of the art. The code is
available at:
https://github.com/steven640pixel/nonautoregressive-sentence-ordering
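The exclusive loss is the abstract's answer to the repetition failure of a naive non-autoregressive Transformer. The abstract does not spell out the formulation, so the following is only a minimal sketch of one plausible reading, in PyTorch: score every (position, sentence) pair, then apply cross-entropy along both directions of the score matrix so that each position picks one sentence and each sentence is claimed by one position. All names here (exclusive_loss, scores, gold_order) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def exclusive_loss(scores: torch.Tensor, gold_order: torch.Tensor) -> torch.Tensor:
    """Bidirectional cross-entropy over a position-sentence score matrix.

    scores: (n, n) float matrix; scores[p, s] rates sentence s for position p.
    gold_order: (n,) long tensor; gold_order[p] is the gold sentence at position p.
    """
    # Position -> sentence: each position must select its gold sentence.
    loss_pos = F.cross_entropy(scores, gold_order)
    # Sentence -> position: each sentence must be selected by exactly one
    # position; the column-wise softmax penalizes a sentence that scores
    # high at several positions, which is what causes repetition.
    inv_order = torch.argsort(gold_order)  # inv_order[s] = gold position of sentence s
    loss_sent = F.cross_entropy(scores.t(), inv_order)
    return loss_pos + loss_sent
```

At inference, a strictly one-to-one ordering can be decoded from the same score matrix with a bipartite assignment solver such as scipy.optimize.linear_sum_assignment, rather than an independent argmax per position.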
Related papers
- Bipartite Graph Pre-training for Unsupervised Extractive Summarization
with Graph Convolutional Auto-Encoders [24.13261636386226]
We argue that utilizing pre-trained embeddings derived from a process specifically designed to optimize cohesive and distinctive sentence representations helps rank significant sentences.
We propose a novel graph pre-training auto-encoder to obtain sentence embeddings by explicitly modelling intra-sentential distinctive features and inter-sentential cohesive features.
arXiv Detail & Related papers (2023-10-29T12:27:18Z)
- RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments is conducted on both semantic textual similarity (STS) and transfer (TR) tasks; a toy sketch of the combined objective follows this entry.
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
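To make the RankCSE summary above concrete, here is a minimal sketch of how ranking distillation might be combined with contrastive learning. It assumes SimCSE-style cosine-similarity matrices between two views of a batch and a frozen teacher encoder; rankcse_style_loss and all temperatures are hypothetical names and values, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def rankcse_style_loss(student_sims, teacher_sims, tau_s=0.05, tau_t=0.05, alpha=1.0):
    """Contrastive loss plus a listwise ranking-distillation term.

    student_sims, teacher_sims: (B, B) cosine-similarity matrices between
    two augmented views of a batch of sentences (row i vs. all columns).
    """
    B = student_sims.size(0)
    labels = torch.arange(B, device=student_sims.device)
    # InfoNCE: each sentence's positive is its own second view (the diagonal).
    contrastive = F.cross_entropy(student_sims / tau_s, labels)
    # ListNet-style distillation: match the student's soft ranking over the
    # batch to the teacher's via KL between temperature-scaled softmaxes.
    p_teacher = F.softmax(teacher_sims / tau_t, dim=-1)
    log_p_student = F.log_softmax(student_sims / tau_s, dim=-1)
    distill = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return contrastive + alpha * distill
```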
- Enhancing Coherence of Extractive Summarization with Multitask Learning [40.349019691412465]
This study proposes a multitask learning architecture for extractive summarization with coherence boosting.
The architecture contains an extractive summarizer and a coherence discriminator module; a minimal sketch of the joint objective follows this entry.
Experiments show that our proposed method significantly improves the proportion of consecutive sentences in the extracted summaries.
arXiv Detail & Related papers (2023-05-22T09:20:58Z)
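As a rough illustration of the multitask setup described above, the sketch below combines an extraction loss with a coherence-discrimination loss on candidate sentence pairs. The function and tensor names are assumptions; the actual model presumably shares an encoder between the two heads.

```python
import torch.nn.functional as F

def multitask_loss(sent_logits, extract_labels, pair_logits, coherent_labels, lam=0.5):
    """Joint objective: sentence extraction + coherence discrimination.

    sent_logits: (N,) per-sentence extraction scores.
    extract_labels: (N,) 0/1 float oracle extraction labels.
    pair_logits: (M,) scores for candidate adjacent sentence pairs.
    coherent_labels: (M,) 1.0 if the pair is consecutive in the source, else 0.0.
    """
    loss_extract = F.binary_cross_entropy_with_logits(sent_logits, extract_labels)
    # The discriminator rewards summaries whose adjacent sentences were also
    # adjacent (hence coherent) in the source document.
    loss_coherence = F.binary_cross_entropy_with_logits(pair_logits, coherent_labels)
    return loss_extract + lam * loss_coherence
```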
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm that further explores the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers [107.12125265675483]
Unsupervised extractive document summarization aims to select important sentences from a document without using labeled summaries during training.
Existing methods are mostly graph-based with sentences as nodes and edge weights measured by sentence similarities.
We find that transformer attentions can be used to rank sentences for unsupervised extractive summarization; a rough sketch of this idea follows this entry.
arXiv Detail & Related papers (2020-10-16T08:44:09Z)
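A bare-bones version of attention-based sentence ranking, assuming a sentence-level attention matrix is already available (the paper obtains it from a pre-trained hierarchical transformer; the extraction of that matrix is omitted here and the names are illustrative):

```python
import torch

def rank_sentences_by_attention(attn: torch.Tensor, k: int = 3):
    """Rank sentences by the attention mass they receive from other sentences.

    attn: (n, n) sentence-level attention matrix, rows softmax-normalized;
    attn[i, j] is how much sentence i attends to sentence j.
    """
    n = attn.size(0)
    # Zero the diagonal so a sentence cannot vote for itself.
    received = (attn * (1 - torch.eye(n, device=attn.device))).sum(dim=0)
    # The k sentences receiving the most attention form the summary.
    return torch.topk(received, k=min(k, n)).indices.tolist()
```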
- Neural Syntactic Preordering for Controlled Paraphrase Generation [57.5316011554622]
Our work uses syntactic transformations to softly "reorder" the source sentence and guide our neural paraphrasing model.
First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model.
Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order (a toy version of this step is sketched below).
arXiv Detail & Related papers (2020-05-05T09:02:25Z)
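The position-embedding step in the preordering entry above can be illustrated with a small helper: given a proposed rearrangement of source tokens, relabel the position ids the encoder sees so they reflect the desired output order. This is a hypothetical reconstruction of the idea, not the authors' implementation.

```python
def reordered_position_ids(rearrangement):
    """Turn a proposed source-token rearrangement into position ids.

    rearrangement: a permutation of source indices, e.g. [2, 0, 1] means the
    token originally at index 2 should come first in the paraphrase. Returns
    position ids aligned with the ORIGINAL token order, so the model's
    position embeddings encode the desired output order.
    """
    pos_ids = [0] * len(rearrangement)
    for new_slot, src_idx in enumerate(rearrangement):
        pos_ids[src_idx] = new_slot
    return pos_ids

assert reordered_position_ids([2, 0, 1]) == [1, 2, 0]
```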
- Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading [96.48553941812366]
Lip-reading aims to infer the speech content from the lip movement sequence.
The traditional learning process of seq2seq models suffers from two problems.
We propose a novel pseudo-convolutional policy gradient (PCPG) based method to address these two problems.
arXiv Detail & Related papers (2020-03-09T09:12:26Z)
- Fact-aware Sentence Split and Rephrase with Permutation Invariant Training [93.66323661321113]
Sentence Split and Rephrase aims to break down a complex sentence into several simple sentences with its meaning preserved.
Previous studies tend to address the issue by seq2seq learning from parallel sentence pairs.
We introduce Permutation Training to verify the effects of order variance in seq2seq learning for this task (a minimal version is sketched below).
arXiv Detail & Related papers (2020-01-16T07:30:19Z)
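Permutation Training exploits the fact that the order of the generated simple sentences is arbitrary: the model should not be penalized for producing correct splits in a different order than the reference. A minimal sketch (hypothetical names; the paper's loss_fn would be a token-level cross-entropy):

```python
from itertools import permutations

def permutation_invariant_loss(loss_fn, predictions, references):
    """Min-over-permutations loss for targets whose order is arbitrary.

    loss_fn: callable scoring one (prediction, reference) pair; lower is better.
    predictions, references: equal-length lists of simple sentences.
    Enumerating permutations is feasible for the small split counts
    typical of Split-and-Rephrase (usually 2-3 sentences).
    """
    best = None
    for perm in permutations(references):
        # Score the predictions against this ordering of the references.
        total = sum(loss_fn(p, r) for p, r in zip(predictions, perm))
        if best is None or total < best:
            best = total
    return best
```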
- Revisiting Paraphrase Question Generator using Pairwise Discriminator [25.449902612898594]
We propose a novel method for obtaining sentence-level embeddings.
The proposed method results in semantic embeddings and outperforms the state of the art on the paraphrase generation and sentiment analysis tasks.
arXiv Detail & Related papers (2019-12-31T02:46:29Z)