Improving Long-Tail Relation Extraction with Collaborating
Relation-Augmented Attention
- URL: http://arxiv.org/abs/2010.03773v2
- Date: Mon, 2 Nov 2020 02:38:44 GMT
- Title: Improving Long-Tail Relation Extraction with Collaborating
Relation-Augmented Attention
- Authors: Yang Li, Tao Shen, Guodong Long, Jing Jiang, Tianyi Zhou, Chengqi
Zhang
- Abstract summary: We propose a novel neural network, Collaborating Relation-augmented Attention (CoRA), to handle both the wrong-labeling problem and long-tail relations.
In experiments on the popular benchmark dataset NYT, the proposed CoRA improves the prior state-of-the-art performance by a large margin.
- Score: 63.26288066935098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wrong-labeling problem and long-tail relations are two main challenges
caused by distant supervision in relation extraction. Recent works alleviate
wrong labeling with selective attention via multi-instance learning, but cannot
handle long-tail relations well even when relation hierarchies are introduced
to share knowledge. In this work, we propose a novel neural network,
Collaborating Relation-augmented Attention (CoRA), to handle both wrong
labeling and long-tail relations. Specifically, we first propose a
relation-augmented attention network as the base model. It operates on a
sentence bag with sentence-to-relation attention to minimize the effect of
wrong labeling. Then, building on this base model, we introduce collaborating
relation features shared among relations in the hierarchy to promote the
relation-augmenting process and balance the training data for long-tail
relations. Besides the main training objective of predicting the relation of a
sentence bag, an auxiliary objective guides the relation-augmenting process
toward a more accurate bag-level representation. In experiments on the popular
NYT benchmark, the proposed CoRA improves on the prior state of the art by a
large margin in terms of Precision@N, AUC, and Hits@K. Further analyses verify
its superior capability in handling long-tail relations compared with
competing methods.
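To make the core mechanism concrete, below is a minimal, hypothetical PyTorch sketch of sentence-to-relation attention over a sentence bag. The module name, dimensions, and the single-query simplification are illustrative assumptions, not the authors' released CoRA implementation, which additionally shares relation features across hierarchy levels and trains with an auxiliary objective.

```python
# Minimal sketch of sentence-to-relation attention over a sentence bag.
# All names and sizes are illustrative assumptions, not CoRA's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceToRelationAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # One learned query vector per relation label.
        self.relation_embed = nn.Embedding(num_relations, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_relations)

    def forward(self, sent_reprs: torch.Tensor, rel_id: torch.Tensor):
        # sent_reprs: (bag_size, hidden_dim) encoded sentences of one bag.
        # rel_id: the bag's distantly supervised relation label.
        query = self.relation_embed(rel_id)   # (hidden_dim,)
        scores = sent_reprs @ query           # (bag_size,) relevance scores
        alpha = F.softmax(scores, dim=0)      # down-weights likely wrong labels
        bag_repr = alpha @ sent_reprs         # (hidden_dim,) bag representation
        return self.classifier(bag_repr), alpha

# Usage: classify a bag of five encoded sentences labeled with relation 2.
model = SentenceToRelationAttention(hidden_dim=256, num_relations=53)
bag = torch.randn(5, 256)
logits, alpha = model(bag, torch.tensor(2))
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([2]))
```

At inference time the bag label is unknown, so selective-attention methods typically score the bag against every relation query and take the highest-scoring prediction; the sketch keeps the training-time view for brevity.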
Related papers
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose the Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- Document-level Relation Extraction with Relation Correlations [15.997345900917058]
Document-level relation extraction faces two overlooked challenges: the long-tail problem and the multi-label problem.
We analyze the co-occurrence correlation of relations and introduce it into the DocRE task for the first time.
arXiv Detail & Related papers (2022-12-20T11:17:52Z)
- Relation-dependent Contrastive Learning with Cluster Sampling for Inductive Relation Prediction [30.404149577013595]
We introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction.
The GNN-based encoder is optimized by contrastive learning, which ensures satisfactory performance on long-tail relations.
Experimental results suggest that ReCoLe outperforms state-of-the-art methods on commonly used inductive datasets.
arXiv Detail & Related papers (2022-11-22T13:30:49Z)
- Improving Long Tailed Document-Level Relation Extraction via Easy Relation Augmentation and Contrastive Learning [66.83982926437547]
We argue that mitigating the long-tailed distribution problem is crucial for DocRE in real-world scenarios.
Motivated by this, we propose an Easy Relation Augmentation (ERA) method for improving DocRE.
arXiv Detail & Related papers (2022-05-21T06:15:11Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Distantly-Supervised Long-Tailed Relation Extraction Using Constraint Graphs [16.671606030727975]
In this paper, we introduce constraint graphs to model the dependencies between relation labels.
We also propose a novel constraint graph-based relation extraction framework (CGRE) to handle the two challenges simultaneously.
CGRE employs graph convolutional networks (GCNs) to propagate information from data-rich relation nodes to data-poor relation nodes; a minimal sketch of this propagation step appears after this list.
arXiv Detail & Related papers (2021-05-24T12:02:32Z)
- Distantly Supervised Relation Extraction via Recursive Hierarchy-Interactive Attention and Entity-Order Perception [3.8651116146455533]
In a sentence, the appearance order of the two entities contributes to the understanding of its semantics.
We introduce a novel training objective, called Entity-Order Perception (EOP), to make the sentence encoder retain more entity appearance-order information.
Our approach achieves state-of-the-art performance in terms of precision-recall (P-R) curves, AUC, Top-N precision, and other evaluation metrics.
arXiv Detail & Related papers (2021-05-18T00:45:25Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
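As referenced in the CGRE entry above, below is a minimal, hypothetical sketch of one GCN propagation step over a relation graph, showing how representations can flow from data-rich to data-poor relation nodes. The adjacency, normalization, and sizes are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of one graph-convolution step over a relation graph,
# illustrating propagation from data-rich to data-poor relation nodes.
# Graph shape, normalization, and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RelationGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, rel_feats: torch.Tensor, adj: torch.Tensor):
        # rel_feats: (num_relations, in_dim), one feature vector per relation.
        # adj: (num_relations, num_relations) adjacency with self-loops.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = (adj / deg) @ rel_feats        # mean-aggregate neighbor features
        return torch.relu(self.linear(msg))

# A head relation (node 0) connected to a tail relation (node 1): after one
# step, node 1's representation mixes in node 0's better-trained features.
layer = RelationGraphConv(in_dim=64, out_dim=64)
adj = torch.tensor([[1.0, 1.0], [1.0, 1.0]])
feats = torch.randn(2, 64)
updated = layer(feats, adj)
```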
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.