Improving Graph-based Sentence Ordering with Iteratively Predicted
Pairwise Orderings
- URL: http://arxiv.org/abs/2110.06446v1
- Date: Wed, 13 Oct 2021 02:18:16 GMT
- Title: Improving Graph-based Sentence Ordering with Iteratively Predicted
Pairwise Orderings
- Authors: Shaopeng Lai, Ante Wang, Fandong Meng, Jie Zhou, Yubin Ge, Jiali Zeng,
Junfeng Yao, Degen Huang and Jinsong Su
- Abstract summary: We propose a novel sentence ordering framework which introduces two classifiers to make better use of pairwise orderings for graph-based sentence ordering.
Our model achieves state-of-the-art performance when equipped with BERT and FHDecoder.
- Score: 38.91604447717656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dominant sentence ordering models can be classified into pairwise ordering
models and set-to-sequence models. However, there have been few attempts to combine
these two types of models, which intuitively possess complementary advantages.
In this paper, we propose a novel sentence ordering framework which introduces
two classifiers to make better use of pairwise orderings for graph-based
sentence ordering. Specifically, given an initial sentence-entity graph, we first
introduce a graph-based classifier to predict pairwise orderings between linked
sentences. Then, in an iterative manner, based on the graph updated by
previously predicted high-confidence pairwise orderings, another classifier is
used to predict the remaining uncertain pairwise orderings. Finally, we adapt a
GRN-based sentence ordering model on the basis of the final graph. Experiments on
five commonly-used datasets demonstrate the effectiveness and generality of our
model. Particularly, when equipped with BERT and FHDecoder, our model achieves
state-of-the-art performance.
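As a rough illustration of the iterative scheme described in the abstract, the sketch below alternates between accepting high-confidence pairwise predictions and re-scoring the remaining pairs on the updated graph. The two classifier callables, the confidence threshold, and the simple win-count decoder at the end are hypothetical stand-ins, not the authors' graph-based classifiers or their GRN-based decoder.

```python
# Minimal sketch of the iterative pairwise-ordering idea described above.
# The classifiers, threshold, and final decoding step are placeholders.
from typing import Callable, Dict, List, Set, Tuple

Pair = Tuple[int, int]                            # (i, j): sentence i precedes sentence j
Scorer = Callable[[int, int, Set[Pair]], float]   # P(i precedes j | current graph)

def iterative_pairwise_ordering(
    linked_pairs: List[Pair],      # sentence pairs linked in the sentence-entity graph
    graph_classifier: Scorer,      # first, graph-based pairwise classifier
    refine_classifier: Scorer,     # second classifier for the remaining uncertain pairs
    threshold: float = 0.9,        # confidence threshold for accepting an ordering
    max_iters: int = 3,
) -> List[int]:
    decided: Set[Pair] = set()
    pending = list(linked_pairs)

    # Iteratively accept high-confidence pairwise orderings and update the graph.
    for _ in range(max_iters):
        still_pending = []
        for i, j in pending:
            p = graph_classifier(i, j, decided)
            if p >= threshold:
                decided.add((i, j))
            elif 1.0 - p >= threshold:
                decided.add((j, i))
            else:
                still_pending.append((i, j))
        if not still_pending:
            break
        pending = still_pending

    # Resolve the remaining uncertain pairs with the second classifier.
    for i, j in pending:
        decided.add((i, j) if refine_classifier(i, j, decided) >= 0.5 else (j, i))

    # Stand-in for the GRN-based decoder: rank sentences by how many predicted
    # "precedes" relations each one wins (a simple Copeland-style ordering).
    sentences = sorted({s for pair in decided for s in pair})
    wins = {s: sum(1 for (a, _) in decided if a == s) for s in sentences}
    return sorted(sentences, key=lambda s: -wins[s])
```

In the paper the accepted pairwise orderings update a sentence-entity graph that a GRN then encodes; here the graph is reduced to a plain set of ordered pairs only to keep the control flow visible.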
Related papers
- Ensemble Predicate Decoding for Unbiased Scene Graph Generation [40.01591739856469]
Scene Graph Generation (SGG) aims to generate a comprehensive graphical representation that captures semantic information of a given scenario.
The model's performance in predicting more fine-grained predicates is hindered by a significant predicate bias.
This paper proposes Ensemble Predicate Decoding (EPD), which employs multiple decoders to attain unbiased scene graph generation.
arXiv Detail & Related papers (2024-08-26T11:24:13Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second it complements the edge features conditioned on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching [58.39970828272366]
Subgraph matching algorithms enumerate all embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques for the first time to generate high-quality matching orders for subgraph matching algorithms.
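For context on why the matching order matters, here is a generic backtracking enumerator (not the paper's RL/GNN model) in which the query-vertex order is an explicit argument; a poorly chosen order defers the adjacency checks that would otherwise prune the search tree early.

```python
# Generic backtracking subgraph-matching enumerator; `order` is the query-vertex
# matching order whose choice largely determines how early branches are pruned.
from typing import Dict, List, Set

Graph = Dict[int, Set[int]]  # adjacency sets

def enumerate_embeddings(q: Graph, g: Graph, order: List[int]) -> List[Dict[int, int]]:
    results: List[Dict[int, int]] = []

    def backtrack(depth: int, mapping: Dict[int, int]) -> None:
        if depth == len(order):
            results.append(dict(mapping))
            return
        u = order[depth]
        used = set(mapping.values())
        for v in g:                                  # candidate data vertices for u
            if v in used:
                continue
            # every already-mapped neighbour of u must be adjacent to v in g
            if all(mapping[w] in g[v] for w in q[u] if w in mapping):
                mapping[u] = v
                backtrack(depth + 1, mapping)
                del mapping[u]

    backtrack(0, {})
    return results

# Example: matching a triangle query into a 4-clique; changing `order` changes
# how early non-extendable partial mappings are discarded.
query = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
data = {i: {j for j in range(4) if j != i} for i in range(4)}
print(len(enumerate_embeddings(query, data, order=[0, 1, 2])))  # 24 embeddings
```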
arXiv Detail & Related papers (2022-01-25T00:10:03Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation [18.03898476141173]
A graph generative model defines a distribution over graphs.
We derive the exact joint probability over the graph and the node ordering of the sequential process.
We train graph generative models by maximizing this bound, without using the ad-hoc node orderings of previous methods.
arXiv Detail & Related papers (2021-06-11T06:37:52Z)
- Stochastic Iterative Graph Matching [11.128153575173213]
We propose a new model, Iterative Graph MAtching, to address the graph matching problem.
Our model defines a distribution of matchings for a graph pair so the model can explore a wide range of possible matchings.
We conduct extensive experiments across synthetic graph datasets as well as biochemistry and computer vision applications.
arXiv Detail & Related papers (2021-06-04T02:05:35Z)
- Predicting Sequences of Traversed Nodes in Graphs using Network Models with Multiple Higher Orders [1.0499611180329802]
We develop a technique to fit such multi-order models to empirical sequential data and to select the optimal maximum order.
We evaluate our model based on six empirical data sets containing sequences from website navigation as well as public transport systems.
We further demonstrate the accuracy of our method during out-of-sample sequence prediction and validate that our method can scale to data sets with millions of sequences.
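As a toy illustration of the idea in the entry above, the sketch below fits fixed-order Markov models of node sequences and picks the maximum order by held-out average log-likelihood; the paper's multi-order models and its order-selection procedure are more sophisticated, and the function names, smoothing, and selection criterion here are assumptions of this sketch.

```python
import math
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

def fit_kth_order(seqs: List[List[str]], k: int) -> Dict[Tuple[str, ...], Counter]:
    """Count transitions from each length-k context to the next node."""
    counts: Dict[Tuple[str, ...], Counter] = defaultdict(Counter)
    for seq in seqs:
        for t in range(k, len(seq)):
            counts[tuple(seq[t - k:t])][seq[t]] += 1
    return counts

def avg_log_likelihood(model, seqs, k, vocab_size, alpha=1.0):
    """Average per-prediction log-likelihood with additive smoothing."""
    ll, n = 0.0, 0
    for seq in seqs:
        for t in range(k, len(seq)):
            c = model.get(tuple(seq[t - k:t]), Counter())
            ll += math.log((c[seq[t]] + alpha) / (sum(c.values()) + alpha * vocab_size))
            n += 1
    return ll / n if n else float("-inf")

def select_max_order(train, heldout, max_k=3):
    vocab = {node for seq in train + heldout for node in seq}
    scores = {
        k: avg_log_likelihood(fit_kth_order(train, k), heldout, k, len(vocab))
        for k in range(1, max_k + 1)
    }
    return max(scores, key=scores.get)

# Example with short navigation-like sequences (purely synthetic data).
train = [["home", "list", "item", "list", "item"], ["home", "list", "item", "home"]]
heldout = [["home", "list", "item", "list"]]
print(select_max_order(train, heldout))
```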
arXiv Detail & Related papers (2020-07-13T20:08:14Z)
- Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words".
Our approach significantly outperforms an encoder-only model in a data-poor regime.
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
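The last entry describes scoring relevance via the probability a seq2seq model assigns to label words. A minimal sketch with Hugging Face Transformers follows; the prompt template, the "true"/"false" label words, and the use of an off-the-shelf t5-base checkpoint (which would still need relevance fine-tuning to be useful) are assumptions of this illustration rather than details taken from the listed abstract.

```python
# Minimal sketch: score a (query, document) pair by the probability the model
# assigns to the label word "true" vs "false" as its first generated token.
# Prompt format and label words are assumptions; t5-base is untuned for ranking.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

def relevance_score(query: str, document: str) -> float:
    prompt = f"Query: {query} Document: {document} Relevant:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    # Feed only the decoder start token and read the logits of the next token.
    start = torch.full((1, 1), model.config.decoder_start_token_id)
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
    true_id = tokenizer("true", add_special_tokens=False).input_ids[0]
    false_id = tokenizer("false", add_special_tokens=False).input_ids[0]
    # Probability of "true" renormalised over the two label words.
    return torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()

print(relevance_score("graph sentence ordering", "We order sentences with a graph model."))
```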
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.