Relation-dependent Contrastive Learning with Cluster Sampling for
Inductive Relation Prediction
- URL: http://arxiv.org/abs/2211.12266v1
- Date: Tue, 22 Nov 2022 13:30:49 GMT
- Title: Relation-dependent Contrastive Learning with Cluster Sampling for
Inductive Relation Prediction
- Authors: Jianfeng Wu, Sijie Mai, Haifeng Hu
- Abstract summary: We introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction.
The GNN-based encoder is optimized by contrastive learning, which ensures satisfactory performance on long-tail relations.
Experimental results suggest that ReCoLe outperforms state-of-the-art methods on commonly used inductive datasets.
- Score: 30.404149577013595
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Relation prediction is a knowledge graph completion task that aims to
predict missing relationships between entities. Subgraph-based models for
inductive relation prediction, which predict relations for unseen entities from
the subgraph extracted around the candidate triplet, have recently received
increasing attention. However, they are not fully inductive because they cannot
predict unseen relations. Moreover, they pay insufficient attention to the role
of relations, relying solely on the model to learn parameterized relation
embeddings, which leads to inaccurate predictions on long-tail relations. In
this paper, we introduce Relation-dependent Contrastive Learning (ReCoLe) for
inductive relation prediction, which adapts contrastive learning with a novel
sampling method based on a clustering algorithm to enhance the role of
relations and improve generalization to unseen relations. Instead of directly
learning embeddings for relations, ReCoLe allocates a pre-trained GNN-based
encoder to each relation to strengthen the influence of the relation. The
GNN-based encoder is optimized by contrastive learning, which ensures
satisfactory performance on long-tail relations. In addition, the cluster
sampling method equips ReCoLe to handle both unseen relations and unseen
entities. Experimental results suggest that ReCoLe outperforms state-of-the-art
methods on commonly used inductive datasets.
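To make the recipe concrete, below is a minimal sketch (not the authors' code) of the two ingredients the abstract describes: a relation-specific encoder trained with an InfoNCE-style contrastive loss, where the positive shares the anchor's cluster and the negatives come from other clusters. The encoder architecture, the loss form, and the hard-coded cluster assignments are illustrative assumptions; ReCoLe itself uses a GNN over extracted subgraphs and a clustering algorithm to produce the clusters.

```python
# Minimal sketch of relation-dependent contrastive learning with
# cluster-based sampling (illustrative; all names are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubgraphEncoder(nn.Module):
    """Stand-in for the GNN-based encoder; ReCoLe would allocate one
    such encoder per relation. Here it is a plain MLP over a pooled
    subgraph feature vector."""
    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so dot products act as cosine similarities.
        return F.normalize(self.net(x), dim=-1)

def info_nce(anchor, positive, negatives, tau: float = 0.1):
    """Contrastive loss: pull the anchor toward a positive drawn from
    its own cluster, push it away from other-cluster negatives."""
    pos = (anchor * positive).sum(-1, keepdim=True) / tau    # (1, 1)
    neg = anchor @ negatives.t() / tau                       # (1, K)
    logits = torch.cat([pos, neg], dim=-1)
    target = torch.zeros(anchor.size(0), dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits, target)

# Toy usage; a clustering algorithm (e.g. k-means over embeddings)
# would supply these cluster ids in place of the hard-coded ones.
encoder = SubgraphEncoder(in_dim=32)
feats = torch.randn(8, 32)                        # pooled features of 8 subgraphs
cluster = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])  # assumed cluster assignments
z = encoder(feats)
anchor, positive = z[0:1], z[1:2]                 # both from cluster 0
negatives = z[cluster != 0]                       # sampled from other clusters
info_nce(anchor, positive, negatives).backward()
```

Because each relation gets its own encoder trained this way, long-tail relations are not forced to share a single parameterized embedding table.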
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
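For the entry above, here is a hedged sketch of joint graph structure learning: edge probabilities are free parameters, relaxed into a soft adjacency, and trained end-to-end with the downstream loss, so the learned probabilities double as a crude uncertainty estimate over the latent structure. The sigmoid relaxation, one-hop propagation, and regression head are all illustrative assumptions, not the paper's method.

```python
# Hedged sketch of joint graph structure learning (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 6, 4
x = torch.randn(n, d)      # node features
y = torch.randn(n, 1)      # downstream regression targets

edge_logits = nn.Parameter(torch.zeros(n, n))  # latent structure to learn
w = nn.Parameter(torch.randn(d, 1))            # downstream predictor
opt = torch.optim.Adam([edge_logits, w], lr=0.05)

for _ in range(200):
    A = torch.sigmoid(edge_logits)   # edge probabilities = soft adjacency;
                                     # they also express structural uncertainty
    pred = (A @ x) @ w               # one round of message passing + linear head
    loss = ((pred - y) ** 2).mean()  # structure and predictor trained jointly
    opt.zero_grad()
    loss.backward()
    opt.step()
```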
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- Document-level Relation Extraction with Relation Correlations [15.997345900917058]
Document-level relation extraction faces two overlooked challenges: the long-tail problem and the multi-label problem.
We analyze the co-occurrence correlation of relations and introduce it into the DocRE task for the first time.
arXiv Detail & Related papers (2022-12-20T11:17:52Z)
- A Probit Tensor Factorization Model For Relational Learning [31.613211987639296]
We propose a binary tensor factorization model with a probit link, which inherits the computational efficiency of the classic tensor factorization model.
Our proposed probit tensor factorization (PTF) model shows advantages in both prediction accuracy and interpretability.
arXiv Detail & Related papers (2021-11-06T19:23:07Z)
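The PTF summary above compresses to one formula; the sketch below spells it out under assumed notation: the probability of a binary triple is the standard normal CDF (the probit link) applied to a CP/DistMult-style trilinear score. The embedding shapes and scoring function are my assumptions, not necessarily the paper's exact parameterization.

```python
# Assumed-notation sketch of a probit tensor factorization score.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d = 8                         # embedding dimension (assumption)
E = rng.normal(size=(5, d))   # embeddings for 5 entities
R = rng.normal(size=(3, d))   # embeddings for 3 relations

def prob_triple(i: int, j: int, k: int) -> float:
    """P(y_ijk = 1) = Phi(<e_i, r_j, e_k>): a trilinear (CP/DistMult-style)
    score pushed through the standard normal CDF, i.e. the probit link."""
    score = float(np.sum(E[i] * R[j] * E[k]))
    return norm.cdf(score)

print(prob_triple(0, 1, 2))   # P(entity 0 relates to entity 2 via relation 1)
```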
- Link Prediction on N-ary Relational Data Based on Relatedness Evaluation [61.61555159755858]
We propose a method called NaLP to conduct link prediction on n-ary relational data.
We represent each n-ary relational fact as a set of its role and role-value pairs.
Experimental results validate the effectiveness and merits of the proposed methods.
arXiv Detail & Related papers (2021-04-21T09:06:54Z)
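The core representational move in NaLP is easiest to see in code: an n-ary fact becomes a set of role/role-value pairs rather than a fixed (head, relation, tail) triple, and link prediction scores candidate values for one missing role. The sketch below is illustrative; the example fact, role names, and the omitted scoring model are assumptions.

```python
# Illustrative sketch of NaLP's role/role-value representation.
from typing import Dict, Iterable, Iterator

# A ternary fact, stored as role -> role-value pairs instead of a triple.
fact: Dict[str, str] = {
    "award_winner": "Albert_Einstein",
    "award": "Nobel_Prize_in_Physics",
    "point_in_time": "1921",
}

def candidate_facts(fact: Dict[str, str], role: str,
                    candidates: Iterable[str]) -> Iterator[Dict[str, str]]:
    """Link prediction on n-ary data: hold the other role-value pairs
    fixed and enumerate candidate values for one missing role; a learned
    relatedness model (omitted here) would score each candidate."""
    for value in candidates:
        yield {**fact, role: value}

for f in candidate_facts(fact, "point_in_time", ["1921", "1922"]):
    print(f)
```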
- Topology-Aware Correlations Between Relations for Inductive Link Prediction in Knowledge Graphs [41.38172189254483]
TACT is inspired by the observation that the semantic correlation between two relations is highly correlated with their topological structure in knowledge graphs.
We categorize all relation pairs into several topological patterns and then propose a Relational Correlation Network (RCN) to learn the importance of the different patterns for inductive link prediction.
Experiments demonstrate that TACT can effectively model semantic correlations between relations, and significantly outperforms existing state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2021-03-05T13:00:10Z)
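To illustrate the "topological patterns" from the TACT entry above: a pair of relations can be categorized by how their triples share entities. The sketch below enumerates the pattern names as I understand TACT's taxonomy (head-to-tail, tail-to-tail, head-to-head, tail-to-head, parallel, loop, plus not-connected); treat both the names and the dispatch order as illustrative.

```python
# Hedged sketch of categorizing a relation pair by topological pattern,
# with each relation represented by one example triple (h, t).
def topological_pattern(h1: str, t1: str, h2: str, t2: str) -> str:
    if (h1, t1) == (t2, h2):
        return "loop"           # r2 runs back along r1
    if (h1, t1) == (h2, t2):
        return "parallel"       # identical endpoints
    if h1 == h2:
        return "head-to-head"
    if h1 == t2:
        return "head-to-tail"
    if t1 == h2:
        return "tail-to-head"
    if t1 == t2:
        return "tail-to-tail"
    return "not-connected"      # no shared entity

print(topological_pattern("a", "b", "b", "c"))  # -> tail-to-head
```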
- Generalized Relation Learning with Semantic Correlation Awareness for Link Prediction [29.23338194883254]
We propose a unified Generalized Relation Learning (GRL) framework to address the above two problems.
After training with GRL, the closeness of semantically similar relations in vector space and the discrimination of dissimilar relations are improved.
arXiv Detail & Related papers (2020-12-22T12:22:03Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
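One simple way to picture a "relation prototype as an implicit factor between entities", as in the entry above, is a translational reading: the prototype is the centroid of tail-minus-head offset vectors over entity pairs observed for a relation, and new pairs are scored by distance to it. This formulation is my illustrative assumption and may differ from the paper's actual construction.

```python
# Translational-style sketch of a relation prototype (assumed formulation).
import numpy as np

rng = np.random.default_rng(0)
# (head, tail) embedding pairs observed for one relation.
pairs = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(5)]

# Prototype: centroid of the tail-minus-head offsets.
prototype = np.mean([t - h for h, t in pairs], axis=0)

def distance_to_prototype(h: np.ndarray, t: np.ndarray) -> float:
    """Smaller distance -> the pair (h, t) better fits the relation
    summarized by the prototype."""
    return float(np.linalg.norm((t - h) - prototype))

h_new, t_new = rng.normal(size=8), rng.normal(size=8)
print(distance_to_prototype(h_new, t_new))
```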
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Improving Long-Tail Relation Extraction with Collaborating Relation-Augmented Attention [63.26288066935098]
We propose a novel neural network, Collaborating Relation-augmented Attention (CoRA), to handle both the wrong-labeling problem and long-tail relations.
In the experiments on the popular benchmark dataset NYT, the proposed CoRA improves the prior state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2020-10-08T05:34:43Z)