Neural Sentence Ordering Based on Constraint Graphs
- URL: http://arxiv.org/abs/2101.11178v2
- Date: Thu, 28 Jan 2021 14:22:46 GMT
- Title: Neural Sentence Ordering Based on Constraint Graphs
- Authors: Yutao Zhu, Kun Zhou, Jian-Yun Nie, Shengchao Liu, Zhicheng Dou
- Abstract summary: Sentence ordering aims at arranging a list of sentences in the correct order.
We devise a new approach based on multi-granular orders between sentences.
These orders form multiple constraint graphs, which are then encoded by Graph Isomorphism Networks and fused into sentence representations.
- Score: 32.14555157902546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sentence ordering aims at arranging a list of sentences in the correct order.
Based on the observation that sentence order at different distances may rely on
different types of information, we devise a new approach based on
multi-granular orders between sentences. These orders form multiple constraint
graphs, which are then encoded by Graph Isomorphism Networks and fused into
sentence representations. Finally, sentence order is determined using the
order-enhanced sentence representations. Our experiments on five benchmark
datasets show that our method outperforms all the existing baselines
significantly, achieving a new state-of-the-art performance. The results
demonstrate the advantage of considering multiple types of order information
and using graph neural networks to integrate sentence content and order
information for the task. Our code is available at
https://github.com/DaoD/ConstraintGraph4NSO.
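As a rough illustration of the pipeline described in the abstract, the sketch below encodes one constraint graph per order granularity with a GIN layer and fuses the results into order-enhanced sentence representations. It is a minimal sketch assuming PyTorch Geometric's `GINConv`; the class name, the single-layer depth, and the concatenation-based fusion are illustrative choices, not the authors' exact architecture (see the linked repository for that).

```python
# A minimal sketch (not the authors' code): one GIN layer per constraint
# graph, with a linear layer fusing the per-graph node states.
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv  # assumes PyTorch Geometric is installed

class ConstraintGraphEncoder(nn.Module):
    def __init__(self, dim: int, num_granularities: int):
        super().__init__()
        # One GIN layer per order granularity.
        self.gins = nn.ModuleList([
            GINConv(nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim)))
            for _ in range(num_granularities)
        ])
        self.fuse = nn.Linear(dim * num_granularities, dim)

    def forward(self, sent_emb, edge_indices):
        # sent_emb: [num_sentences, dim] sentence embeddings (e.g., from BERT).
        # edge_indices: one [2, num_edges] tensor per granularity; an edge
        # (i, j) encodes a "sentence i precedes sentence j" constraint.
        states = [gin(sent_emb, ei) for gin, ei in zip(self.gins, edge_indices)]
        return self.fuse(torch.cat(states, dim=-1))

# Toy usage: 4 sentences, 2 granularities of order constraints.
emb = torch.randn(4, 128)
g1 = torch.tensor([[0, 1, 2], [1, 2, 3]])  # adjacent-order constraints
g2 = torch.tensor([[0, 0, 1], [2, 3, 3]])  # longer-distance constraints
enc = ConstraintGraphEncoder(128, num_granularities=2)
out = enc(emb, [g1, g2])  # [4, 128] order-enhanced representations
```

Determining the final order from these representations is a separate step in the paper and is not shown here.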
Related papers
- Pruned Graph Neural Network for Short Story Ordering [0.7087237546722617]
Organizing sentences into an order that maximizes coherence is known as sentence ordering.
We propose a new method for constructing sentence-entity graphs of short stories to create the edges between sentences.
We also observe that replacing pronouns with the entities they refer to effectively encodes sentences in sentence-entity graphs.
arXiv Detail & Related papers (2022-03-13T22:25:17Z)
- Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching [58.39970828272366]
Subgraph matching algorithms enumerate all embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques for the first time to generate high-quality matching orders for subgraph matching algorithms.
arXiv Detail & Related papers (2022-01-25T00:10:03Z)
- Discovering Non-monotonic Autoregressive Orderings with Variational Inference [67.27561153666211]
We develop an unsupervised parallelizable learner that discovers high-quality generation orders purely from training data.
We implement the encoder as a Transformer with non-causal attention that outputs permutations in one forward pass.
Empirical results in language modeling tasks demonstrate that our method is context-aware and discovers orderings that are competitive with or even better than fixed orders.
arXiv Detail & Related papers (2021-10-27T16:08:09Z)
- Improving Graph-based Sentence Ordering with Iteratively Predicted Pairwise Orderings [38.91604447717656]
We propose a novel sentence ordering framework which introduces two classifiers to make better use of pairwise orderings for graph-based sentence ordering.
Our model achieves state-of-the-art performance when equipped with BERT and FHDecoder.
arXiv Detail & Related papers (2021-10-13T02:18:16Z)
- STaCK: Sentence Ordering with Temporal Commonsense Knowledge [34.64198104134244]
Sentence order prediction is the task of finding the correct order of sentences in a randomly ordered document.
We introduce STaCK -- a framework based on graph neural networks and temporal commonsense knowledge.
We report results on five different datasets, and empirically show that the proposed method is naturally suitable for order prediction.
arXiv Detail & Related papers (2021-09-06T05:29:48Z)
- Using BERT Encoding and Sentence-Level Language Model for Sentence Ordering [0.9134244356393667]
We propose an algorithm for sentence ordering in a corpus of short stories.
Our proposed method uses a language model based on Universal Transformers (UT) that captures sentences' dependencies by employing an attention mechanism.
The proposed model includes three components: Sentence, Language Model, and Sentence Arrangement with Brute Force Search (a brute-force arrangement sketch appears after this list).
arXiv Detail & Related papers (2021-08-24T23:03:36Z)
- InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence Insertion Problem? [66.70154236519186]
Sentence insertion is a delicate but fundamental NLP problem.
Current approaches in sentence ordering, text coherence, and question answering (QA) are not well suited to solving it.
We propose InsertGNN, a model that represents the problem as a graph and adopts a Graph Neural Network (GNN) to learn the connection between sentences.
arXiv Detail & Related papers (2021-03-28T06:50:31Z)
- BERT4SO: Neural Sentence Ordering by Fine-tuning BERT [26.050527288844005]
Recent work frames sentence ordering as a ranking problem and applies deep neural networks to it.
We propose a new method, named BERT4SO, by fine-tuning BERT for sentence ordering.
arXiv Detail & Related papers (2021-03-25T03:32:32Z)
- Graph-to-Sequence Neural Machine Translation [79.0617920270817]
We propose a graph-based NMT model built on self-attention networks (SANs), called Graph-Transformer.
Subgraphs are put into different groups according to their orders, and each group of subgraphs reflects a different level of dependency between words.
Our method effectively boosts the Transformer, with an improvement of 1.1 BLEU points on the WMT14 English-German dataset and 1.0 BLEU points on the IWSLT14 German-English dataset.
arXiv Detail & Related papers (2020-09-16T06:28:58Z)
- Topological Sort for Sentence Ordering [133.05105352571715]
We propose a new framing of this task as a constraint solving problem and introduce a new technique to solve it (a topological-sort sketch of this framing appears after this list).
The results on both automatic and human metrics across four different datasets show that this new technique is better at capturing coherence in documents.
arXiv Detail & Related papers (2020-05-01T15:07:59Z)
- Fact-aware Sentence Split and Rephrase with Permutation Invariant Training [93.66323661321113]
Sentence Split and Rephrase aims to break down a complex sentence into several simple sentences with its meaning preserved.
Previous studies tend to address the issue by seq2seq learning from parallel sentence pairs.
We introduce Permutation Training to verify the effects of order variance in seq2seq learning for this task (a permutation-invariant loss sketch also follows this list).
arXiv Detail & Related papers (2020-01-16T07:30:19Z)
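Three of the related entries above describe mechanisms compact enough to sketch. First, for the Using BERT Encoding and Sentence-Level Language Model entry, a minimal sketch of Sentence Arrangement with Brute Force Search: every permutation is scored by a coherence function and the best one is kept. The `coherence` callable is a hypothetical stand-in for the paper's UT-based language model score.

```python
# A minimal sketch of brute-force sentence arrangement; `coherence` is a
# hypothetical stand-in for a language-model coherence score.
from itertools import permutations

def arrange_brute_force(sentences, coherence):
    """Return the permutation of `sentences` with the highest coherence score."""
    return list(max(permutations(sentences), key=coherence))
```

Brute force is factorial in the number of sentences, so this only suits the short stories the paper targets.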
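Second, for the Topological Sort for Sentence Ordering entry, a minimal sketch of the constraint-solving framing: a hypothetical pairwise classifier `precedes(a, b)` votes on the relative order of every sentence pair, and Kahn's algorithm linearizes the resulting constraint graph. Both the classifier interface and the cycle fallback are illustrative stand-ins, not the paper's exact method.

```python
# A minimal sketch, not the paper's implementation: pairwise votes build a
# constraint graph, and Kahn's algorithm topologically sorts it.
from collections import deque
from itertools import combinations

def order_sentences(sentences, precedes):
    """precedes(a, b) -> float in [0, 1]: belief that a comes before b."""
    n = len(sentences)
    succ = {i: set() for i in range(n)}
    indeg = [0] * n
    for i, j in combinations(range(n), 2):
        a, b = (i, j) if precedes(sentences[i], sentences[j]) >= 0.5 else (j, i)
        succ[a].add(b)
        indeg[b] += 1
    # Kahn's algorithm over the constraint graph.
    queue = deque(sorted(i for i in range(n) if indeg[i] == 0))
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(succ[u]):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Contradictory votes can create cycles; append any leftover sentences
    # by remaining in-degree as a crude fallback.
    order += sorted(set(range(n)) - set(order), key=lambda k: indeg[k])
    return [sentences[k] for k in order]
```

The classifier is queried O(n^2) times for n sentences; the sort itself is linear in the number of constraints.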
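Finally, for the Fact-aware Sentence Split and Rephrase entry, a sketch of what a permutation-invariant loss can look like: the model is scored against every ordering of the target simple sentences and trained on the least-penalizing one, so a correct split is not punished merely for emitting the sentences in a different order. The decoder interface `logits_fn` is a hypothetical stand-in; the paper's actual objective may differ.

```python
# A minimal sketch of a permutation-invariant seq2seq loss; logits_fn is a
# hypothetical stand-in for a teacher-forced decoder, not the paper's model.
from itertools import permutations
import torch
import torch.nn.functional as F

def permutation_invariant_loss(logits_fn, target_sentences):
    # target_sentences: list of 1-D token-id tensors, one per simple sentence.
    losses = []
    for perm in permutations(target_sentences):
        tokens = torch.cat(perm)        # one candidate ordering of the targets
        logits = logits_fn(tokens)      # [seq_len, vocab_size]
        losses.append(F.cross_entropy(logits, tokens))
    return torch.stack(losses).min()    # train on the best-matching ordering
```

The cost is factorial in the number of target sentences, which is tolerable for the two-to-three-sentence splits typical of this task.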
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.