Pruned Graph Neural Network for Short Story Ordering
- URL: http://arxiv.org/abs/2203.06778v1
- Date: Sun, 13 Mar 2022 22:25:17 GMT
- Title: Pruned Graph Neural Network for Short Story Ordering
- Authors: Melika Golestani, Zeinab Borhanifard, Farnaz Tahmasebian, and Heshaam Faili
- Abstract summary: Organizing sentences into an order that maximizes coherence is known as sentence ordering.
We propose a new method for constructing sentence-entity graphs of short stories to create the edges between sentences.
We also observe that replacing pronouns with their referring entities effectively encodes sentences in sentence-entity graphs.
- Score: 0.7087237546722617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text coherence is a fundamental problem in natural language generation and
understanding. Organizing sentences into an order that maximizes coherence is
known as sentence ordering. This paper proposes a new graph neural
network-based approach to encoding a set of sentences and learning orderings
of short stories. We introduce a method for constructing sentence-entity
graphs of short stories that creates the edges between sentences and reduces
noise in the graph by replacing pronouns with their referring entities. We
further improve sentence ordering by introducing an aggregation method based
on majority voting over state-of-the-art methods and our proposed one. Our approach
employs a BERT-based model to learn semantic representations of the sentences.
The results demonstrate that the proposed method significantly outperforms
existing baselines on a corpus of short stories, achieving new state-of-the-art
performance in terms of Perfect Match Ratio (PMR) and Kendall's Tau (Tau)
metrics. More precisely, our method improves PMR and Tau by more than
5% and 4.3%, respectively. These outcomes highlight the benefit of forming the
edges between sentences based on their cosine similarity. We also observe that
replacing pronouns with their referring entities effectively encodes sentences
in sentence-entity graphs.
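As a concrete illustration of the two mechanisms the abstract emphasizes, the following minimal Python sketch builds cosine-similarity edges from BERT-based sentence embeddings and scores a predicted order with PMR and Kendall's Tau. The encoder checkpoint, the 0.5 threshold, and the helper names are assumptions for illustration, not details taken from the paper.

```python
# Sketch: (1) connect sentences whose BERT embeddings are cosine-similar,
# (2) evaluate predicted orderings with PMR and Kendall's Tau.
import itertools

import numpy as np
from scipy.stats import kendalltau
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("bert-base-nli-mean-tokens")  # assumed checkpoint


def cosine_similarity_edges(sentences, threshold=0.5):
    """Return (i, j) pairs whose embedding cosine similarity >= threshold."""
    emb = encoder.encode(sentences)                       # shape (n, d)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return [(i, j)
            for i, j in itertools.combinations(range(len(sentences)), 2)
            if float(emb[i] @ emb[j]) >= threshold]


def pmr(predicted, gold):
    """Perfect Match Ratio: share of stories ordered entirely correctly."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)


def mean_kendall_tau(predicted, gold):
    """Average Kendall's Tau between predicted and gold sentence positions."""
    return float(np.mean([kendalltau(p, g)[0] for p, g in zip(predicted, gold)]))
```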
Related papers
- DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning [59.4644086610381]
We propose a novel denoising objective that approaches the problem from another perspective, i.e., the intra-sentence perspective.
By introducing both discrete and continuous noise, we generate noisy sentences and then train our model to restore them to their original form.
Our empirical evaluations demonstrate that this approach delivers competitive results on both semantic textual similarity (STS) and a wide range of transfer tasks.
arXiv Detail & Related papers (2024-01-24T17:48:45Z)
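To make the "discrete noise" idea above concrete, here is one hedged reading in Python: corrupt a sentence by random token deletion and swaps, so that a denoising model could be trained to restore the original. This illustrates the one-line summary, not DenoSent's actual corruption recipe.

```python
# Corrupt a sentence with discrete noise (deletion + swaps); a denoiser
# would be trained to map the corrupted text back to the original.
import random


def add_discrete_noise(sentence, p_delete=0.1, n_swaps=1, rng=None):
    rng = rng or random.Random(0)
    tokens = [t for t in sentence.split() if rng.random() > p_delete]
    for _ in range(n_swaps):
        if len(tokens) > 1:
            i, j = rng.sample(range(len(tokens)), 2)  # swap two random tokens
            tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)


print(add_discrete_noise("the cat sat on the mat"))
```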
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm that further explores the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Hierarchical Heterogeneous Graph Representation Learning for Short Text Classification [60.233529926965836]
We propose a new method called SHINE, based on graph neural networks (GNNs), for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
arXiv Detail & Related papers (2021-10-30T05:33:05Z)
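The label-propagation step the SHINE summary mentions can be illustrated generically: given a document-similarity graph, iteratively spread the labels of annotated documents to their neighbors. The row-normalized propagation below is a textbook version of the technique, not SHINE's hierarchical heterogeneous graph.

```python
# Generic label propagation over a document-similarity graph.
import numpy as np


def propagate_labels(adj, labels, mask, n_iters=10):
    """adj: (n, n) similarity matrix; labels: (n, k) one-hot rows for labeled
    docs (zeros elsewhere); mask: boolean array marking the labeled docs."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    trans = adj / deg                  # row-normalized transition matrix
    y = labels.astype(float).copy()
    for _ in range(n_iters):
        y = trans @ y                  # spread label mass to neighbors
        y[mask] = labels[mask]         # clamp the known labels
    return y.argmax(axis=1)            # predicted class per document
```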
- Sentence Structure and Word Relationship Modeling for Emphasis Selection [33.71757542373714]
Emphasis Selection is a newly proposed task which focuses on choosing words for emphasis in short sentences.
Traditional methods only consider the sequence information of a sentence while ignoring the rich sentence structure and word relationship information.
In this paper, we propose a new framework that considers sentence structure via a sentence structure graph and word relationship via a word similarity graph.
arXiv Detail & Related papers (2021-08-29T04:43:25Z)
- Using BERT Encoding and Sentence-Level Language Model for Sentence Ordering [0.9134244356393667]
We propose an algorithm for sentence ordering in a corpus of short stories.
Our proposed method uses a language model based on Universal Transformers (UT) that captures sentences' dependencies by employing an attention mechanism.
The proposed model includes three components: a sentence encoder, a language model, and sentence arrangement with brute-force search.
arXiv Detail & Related papers (2021-08-24T23:03:36Z)
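The brute-force arrangement component named above lends itself to a short sketch: enumerate every permutation of the sentences and keep the one a coherence scorer prefers. `score_order` is a stand-in for the paper's Universal-Transformer language model, which this sketch does not implement.

```python
# Exhaustive sentence arrangement: pick the permutation with the best
# coherence score. Only feasible for short stories (n! candidates).
import itertools


def best_order(sentences, score_order):
    """score_order: callable mapping a list of sentences to a coherence score."""
    best, best_score = None, float("-inf")
    for perm in itertools.permutations(range(len(sentences))):
        candidate = [sentences[i] for i in perm]
        s = score_order(candidate)
        if s > best_score:
            best, best_score = perm, s
    return best
```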
- Three Sentences Are All You Need: Local Path Enhanced Document Relation Extraction [54.95848026576076]
We present an embarrassingly simple but effective method to select evidence sentences for document-level relation extraction (RE).
We have released our code at https://github.com/AndrewZhe/Three-Sentences-Are-All-You-Need.
arXiv Detail & Related papers (2021-06-03T12:29:40Z)
- InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence Insertion Problem? [66.70154236519186]
Sentence insertion is a delicate but fundamental NLP problem.
Current approaches to sentence ordering, text coherence, and question answering (QA) are neither suitable for nor effective at solving it.
We propose InsertGNN, a model that represents the problem as a graph and adopts a graph neural network (GNN) to learn the connections between sentences.
arXiv Detail & Related papers (2021-03-28T06:50:31Z)
- BERT4SO: Neural Sentence Ordering by Fine-tuning BERT [26.050527288844005]
Recent work frames sentence ordering as a ranking problem and applies deep neural networks to it.
We propose a new method, named BERT4SO, that fine-tunes BERT for sentence ordering.
arXiv Detail & Related papers (2021-03-25T03:32:32Z)
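As a hedged illustration of the ranking framing behind BERT4SO, the sketch below scores sentence pairs with a BERT sequence classifier (label 1 read as "A precedes B") and orders sentences by their total "precedes" mass. It is a generic pairwise-ranking baseline, not BERT4SO's actual architecture, and the classification head would need fine-tuning before the scores mean anything.

```python
# Pairwise ordering with a BERT sequence classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # label 1 = "A precedes B" (assumed)


def precedes_prob(a, b):
    inputs = tok(a, b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()


def order_by_pairwise_wins(sentences):
    # Sum each sentence's "precedes" probability against all others,
    # then sort with the highest mass first.
    wins = [sum(precedes_prob(s, t) for t in sentences if t is not s)
            for s in sentences]
    return [s for _, s in sorted(zip(wins, sentences), reverse=True)]
```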
- Neural Sentence Ordering Based on Constraint Graphs [32.14555157902546]
Sentence ordering aims at arranging a list of sentences in the correct order.
We devise a new approach based on multi-granular orders between sentences.
These orders form multiple constraint graphs, which are then encoded by Graph Isomorphism Networks and fused into sentence representations.
arXiv Detail & Related papers (2021-01-27T02:53:10Z)
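The constraint-graph construction above can be illustrated with the standard library: treat each predicted pairwise order as a directed edge and read an ordering off the graph with a topological sort. The paper goes further and encodes such graphs with Graph Isomorphism Networks; this sketch covers only the graph-building step.

```python
# Build a constraint digraph from pairwise orders and topologically sort it.
import graphlib  # Python 3.9+ standard library


def order_from_constraints(n, constraints):
    """constraints: iterable of (i, j) meaning sentence i precedes sentence j.
    Raises graphlib.CycleError if the predicted constraints contradict."""
    ts = graphlib.TopologicalSorter({k: set() for k in range(n)})
    for i, j in constraints:
        ts.add(j, i)  # j depends on i, so i comes first
    return list(ts.static_order())


print(order_from_constraints(4, [(0, 1), (1, 3), (0, 2), (2, 3)]))
```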
- Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers [107.12125265675483]
Unsupervised extractive document summarization aims to select important sentences from a document without using labeled summaries during training.
Existing methods are mostly graph-based with sentences as nodes and edge weights measured by sentence similarities.
We find that transformer attentions can be used to rank sentences for unsupervised extractive summarization.
arXiv Detail & Related papers (2020-10-16T08:44:09Z)
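A rough sketch of ranking sentences by the attention they receive, as the summary above describes: run the concatenated document through BERT, average the last layer's heads, and sum the attention flowing into each sentence's token span. The model choice and the aggregation are assumptions for this sketch, not the paper's pre-trained hierarchical transformer.

```python
# Rank sentences by incoming attention mass in the last BERT layer.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)


def rank_sentences_by_attention(sentences):
    enc = tok(" ".join(sentences), return_tensors="pt", truncation=True)
    with torch.no_grad():
        attn = model(**enc).attentions[-1]     # (1, heads, seq, seq)
    received = attn.mean(dim=1)[0].sum(dim=0)  # attention into each token
    # Map token scores back to sentences via each sentence's token span
    # (approximate: assumes no truncation and contiguous spans after [CLS]).
    scores, offset = [], 1
    for s in sentences:
        n_tok = len(tok.tokenize(s))
        scores.append(received[offset:offset + n_tok].sum().item())
        offset += n_tok
    return sorted(range(len(sentences)), key=lambda i: -scores[i])
```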
- Inducing Alignment Structure with Gated Graph Attention Networks for Sentence Matching [24.02847802702168]
This paper proposes a graph-based approach for sentence matching.
We represent a sentence pair as a graph with several carefully designed strategies.
We then employ a novel gated graph attention network to encode the constructed graph for sentence matching.
arXiv Detail & Related papers (2020-10-15T11:25:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.