Generalized Relation Learning with Semantic Correlation Awareness for
Link Prediction
- URL: http://arxiv.org/abs/2012.11957v2
- Date: Sun, 18 Apr 2021 08:57:36 GMT
- Title: Generalized Relation Learning with Semantic Correlation Awareness for
Link Prediction
- Authors: Yao Zhang, Xu Zhang, Jun Wang, Hongru Liang, Wenqiang Lei, Zhe Sun,
Adam Jatowt, Zhenglu Yang
- Abstract summary: We propose a unified Generalized Relation Learning framework to address the above two problems.
After training with GRL, the closeness of semantically similar relations in vector space and the discrimination of dissimilar relations are improved.
- Score: 29.23338194883254
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Developing link prediction models to automatically complete knowledge graphs
has recently been the focus of significant research interest. The current
methods for the link prediction task have two natural problems: 1) the relation
distributions in KGs are usually unbalanced, and 2) there are many unseen
relations that occur in practical situations. These two problems limit the
training effectiveness and practical applications of the existing link
prediction models. We advocate a holistic understanding of KGs and we propose
in this work a unified Generalized Relation Learning framework GRL to address
the above two problems, which can be plugged into existing link prediction
models. GRL conducts a generalized relation learning, which is aware of
semantic correlations between relations that serve as a bridge to connect
semantically similar relations. After training with GRL, the closeness of
semantically similar relations in vector space and the discrimination of
dissimilar relations are improved. We perform comprehensive experiments on six
benchmarks to demonstrate the superior capability of GRL in the link prediction
task. In particular, GRL is found to enhance the existing link prediction
models, making them insensitive to unbalanced relation distributions and capable
of learning unseen relations.
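The abstract's central claim is that after training with GRL, semantically similar relations sit close together in the embedding space while dissimilar ones are pushed apart. A minimal sketch of how one might check that property with cosine similarity is shown below; the relation names and 3-d vectors are hypothetical toy values, not the paper's actual embeddings or method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical relation embeddings after training:
# "capital_of" and "largest_city_of" are semantically similar,
# while "born_in" is unrelated to both.
relations = {
    "capital_of":      [0.9, 0.1, 0.0],
    "largest_city_of": [0.8, 0.2, 0.1],
    "born_in":         [0.0, 0.1, 0.9],
}

sim_close = cosine(relations["capital_of"], relations["largest_city_of"])
sim_far = cosine(relations["capital_of"], relations["born_in"])

# Similar relations should score higher than dissimilar ones.
assert sim_close > sim_far
```

Such a similarity check is what "closeness in vector space" reduces to operationally; GRL's contribution is the training procedure that produces embeddings with this property, not the similarity measure itself.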
Related papers
- Multiple Relations Classification using Imbalanced Predictions
Adaptation [0.0]
The relation classification task assigns the proper semantic relation to a pair of subject and object entities.
Current relation classification models employ additional procedures to identify multiple relations in a single sentence.
We propose a multiple relations classification model that tackles these issues through a customized output architecture and by exploiting additional input features.
arXiv Detail & Related papers (2023-09-24T18:36:22Z) - Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z) - Understanding Augmentation-based Self-Supervised Representation Learning
via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z) - Relation-dependent Contrastive Learning with Cluster Sampling for
Inductive Relation Prediction [30.404149577013595]
We introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction.
The GNN-based encoder is optimized by contrastive learning, which ensures satisfactory performance on long-tail relations.
Experimental results suggest that ReCoLe outperforms state-of-the-art methods on commonly used inductive datasets.
arXiv Detail & Related papers (2022-11-22T13:30:49Z) - Link Prediction on N-ary Relational Data Based on Relatedness Evaluation [61.61555159755858]
We propose a method called NaLP to conduct link prediction on n-ary relational data.
We represent each n-ary relational fact as a set of its role and role-value pairs.
Experimental results validate the effectiveness and merits of the proposed methods.
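NaLP's key representational move is to decompose an n-ary fact into the set of its role and role-value pairs rather than forcing it into binary triples. A minimal sketch of that decomposition, using a made-up 3-role fact (the role names and values here are illustrative, not from the NaLP datasets):

```python
# A hypothetical n-ary fact: Einstein received the Nobel Prize
# in Physics in 1921, expressed with three roles.
fact = {
    "award_receiver": "Albert_Einstein",
    "award":          "Nobel_Prize_in_Physics",
    "point_in_time":  "1921",
}

# NaLP-style representation: the fact becomes the set of its
# (role, role-value) pairs, which the model then scores jointly.
role_value_pairs = set(fact.items())

assert ("award", "Nobel_Prize_in_Physics") in role_value_pairs
assert len(role_value_pairs) == 3
```

The advantage of this representation is that facts with any arity map into the same set-of-pairs form, so a single scoring function can evaluate binary and n-ary facts uniformly.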
arXiv Detail & Related papers (2021-04-21T09:06:54Z) - Topology-Aware Correlations Between Relations for Inductive Link
Prediction in Knowledge Graphs [41.38172189254483]
TACT is inspired by the observation that the semantic correlation between two relations is highly correlated with their topological structure in knowledge graphs.
We categorize all relation pairs into several topological patterns and then propose a Relational Correlation Network (RCN) to learn the importance of the different patterns for inductive link prediction.
Experiments demonstrate that TACT can effectively model semantic correlations between relations, and significantly outperforms existing state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2021-03-05T13:00:10Z) - One-shot Learning for Temporal Knowledge Graphs [49.41854171118697]
We propose a one-shot learning framework for link prediction in temporal knowledge graphs.
Our proposed method employs a self-attention mechanism to effectively encode temporal interactions between entities.
Our experiments show that the proposed algorithm outperforms the state of the art baselines for two well-studied benchmarks.
arXiv Detail & Related papers (2020-10-23T03:24:44Z) - Learning to Decouple Relations: Few-Shot Relation Classification with
Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z) - Improving Long-Tail Relation Extraction with Collaborating
Relation-Augmented Attention [63.26288066935098]
We propose a novel neural network, Collaborating Relation-augmented Attention (CoRA), to handle both the wrong labeling and long-tail relations.
In the experiments on the popular benchmark dataset NYT, the proposed CoRA improves the prior state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2020-10-08T05:34:43Z) - Learning Relation Ties with a Force-Directed Graph in Distant Supervised
Relation Extraction [39.73191604776768]
Relation ties, defined as the correlation and mutual exclusion between different relations, are critical for distant supervised relation extraction.
Existing approaches model this property by greedily learning local dependencies.
We propose a novel force-directed graph based relation extraction model to comprehensively learn relation ties.
arXiv Detail & Related papers (2020-04-21T14:41:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.