Enriched Attention for Robust Relation Extraction
- URL: http://arxiv.org/abs/2104.10899v1
- Date: Thu, 22 Apr 2021 07:17:19 GMT
- Title: Enriched Attention for Robust Relation Extraction
- Authors: Heike Adel, Jannik Strötgen
- Abstract summary: Relation extraction models do not scale well to long sentences with multiple entities and relations.
Attention allows the model to focus on parts of the input sentence that are relevant to relation extraction.
Our model outperforms prior work using comparable setups on two popular benchmarks.
- Score: 10.925904231385207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of relation extraction models has increased considerably with
the rise of neural networks. However, a key issue of neural relation extraction
is robustness: the models do not scale well to long sentences with multiple
entities and relations. In this work, we address this problem with an enriched
attention mechanism. Attention allows the model to focus on parts of the input
sentence that are relevant to relation extraction. We propose to enrich the
attention function with features modeling knowledge about the relation
arguments and the shortest dependency path between them. Thus, for different
relation arguments, the model can pay attention to different parts of the
sentence. Our model outperforms prior work using comparable setups on two
popular benchmarks, and our analysis confirms that it indeed scales to long
sentences with many entities.
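The paper releases no code on this page; as a minimal illustrative sketch (all variable names, feature encodings, and dimensions here are hypothetical, not the authors' implementation), the core idea of enriching an additive attention score with relation-argument features and shortest-dependency-path features might look like:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def enriched_attention(H, f_args, f_sdp, Wh, Wa, Ws, v):
    """Additive attention whose score function also sees per-token features.

    H      -- (T, d)  token hidden states
    f_args -- (T, da) features encoding knowledge about the relation
                      arguments (e.g. position relative to each argument)
    f_sdp  -- (T, ds) features flagging tokens on the shortest dependency
                      path between the arguments
    The score for token t is v . tanh(Wh h_t + Wa f_args_t + Ws f_sdp_t),
    so different argument pairs yield different attention distributions
    over the same sentence.
    Returns the attention weights and the weighted sentence representation.
    """
    scores = np.tanh(H @ Wh.T + f_args @ Wa.T + f_sdp @ Ws.T) @ v  # (T,)
    alpha = softmax(scores)                                        # (T,)
    return alpha, alpha @ H                                        # (T,), (d,)

# Toy usage with random parameters (shapes only, no trained weights):
rng = np.random.default_rng(0)
T, d, da, ds, k = 5, 8, 4, 2, 6
alpha, context = enriched_attention(
    rng.normal(size=(T, d)),
    rng.normal(size=(T, da)),
    rng.normal(size=(T, ds)),
    rng.normal(size=(k, d)),
    rng.normal(size=(k, da)),
    rng.normal(size=(k, ds)),
    rng.normal(size=k),
)
```

Because the argument and dependency-path features enter the score additively, swapping in a different argument pair changes `f_args` and `f_sdp` and hence shifts the attention weights, which is how the model can attend to different parts of a long sentence for different relations.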
Related papers
- Entity-Aware Self-Attention and Contextualized GCN for Enhanced Relation Extraction in Long Sentences [5.453850739960517]
We propose a novel model, Entity-aware Self-attention Contextualized GCN (ESC-GCN), which efficiently incorporates the syntactic structure of input sentences and the semantic context of sequences.
Our model achieves encouraging performance as compared to existing dependency-based and sequence-based models.
arXiv Detail & Related papers (2024-09-15T10:50:51Z)
- Sparse Relational Reasoning with Object-Centric Representations [78.83747601814669]
We investigate the composability of soft-rules learned by relational neural architectures when operating over object-centric representations.
We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations.
arXiv Detail & Related papers (2022-07-15T14:57:33Z)
- Temporal Relevance Analysis for Video Action Models [70.39411261685963]
We first propose a new approach to quantify the temporal relationships between frames captured by CNN-based action models.
We then conduct comprehensive experiments and in-depth analysis to provide a better understanding of how temporal modeling is affected.
arXiv Detail & Related papers (2022-04-25T19:06:48Z)
- Temporal Relation Extraction with a Graph-Based Deep Biaffine Attention Model [0.0]
We propose a novel temporal information extraction model based on deep biaffine attention.
We experimentally demonstrate that our model achieves state-of-the-art performance in temporal relation extraction.
arXiv Detail & Related papers (2022-01-16T19:40:08Z)
- A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction [5.059120569845976]
We present a Trigger-Sense Memory Flow Framework (TriMF) for joint entity and relation extraction.
We build a memory module to remember category representations learned in entity recognition and relation extraction tasks.
We also design a multi-level memory flow attention mechanism to enhance the bi-directional interaction between entity recognition and relation extraction.
arXiv Detail & Related papers (2021-01-25T16:24:04Z)
- RH-Net: Improving Neural Relation Extraction via Reinforcement Learning and Hierarchical Relational Searching [2.1828601975620257]
We propose a novel framework named RH-Net, which utilizes reinforcement learning and a hierarchical relational searching module to improve relation extraction.
We then propose the hierarchical relational searching module to share the semantics from correlative instances between data-rich and data-poor classes.
arXiv Detail & Related papers (2020-10-27T12:50:27Z)
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
- High-order Semantic Role Labeling [86.29371274587146]
This paper introduces a high-order graph structure for the neural semantic role labeling model.
It enables the model to explicitly consider not only the isolated predicate-argument pairs but also the interaction between the predicate-argument pairs.
Experimental results on 7 languages of the CoNLL-2009 benchmark show that the high-order structural learning techniques are beneficial to strong-performing SRL models.
arXiv Detail & Related papers (2020-10-09T15:33:54Z)
- Improving Long-Tail Relation Extraction with Collaborating Relation-Augmented Attention [63.26288066935098]
We propose a novel neural network, Collaborating Relation-augmented Attention (CoRA), to handle both the wrong labeling and long-tail relations.
In the experiments on the popular benchmark dataset NYT, the proposed CoRA improves the prior state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2020-10-08T05:34:43Z)
- Learning Causal Models Online [103.87959747047158]
Predictive models can rely on spurious correlations in the data for making predictions.
One solution for achieving strong generalization is to incorporate causal structures in the models.
We propose an online algorithm that continually detects and removes spurious features.
arXiv Detail & Related papers (2020-06-12T20:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.