Link Prediction on N-ary Relational Data Based on Relatedness Evaluation
- URL: http://arxiv.org/abs/2104.10424v1
- Date: Wed, 21 Apr 2021 09:06:54 GMT
- Title: Link Prediction on N-ary Relational Data Based on Relatedness Evaluation
- Authors: Saiping Guan, Xiaolong Jin, Jiafeng Guo, Yuanzhuo Wang, Xueqi Cheng
- Abstract summary: We propose a method called NaLP to conduct link prediction on n-ary relational data.
We represent each n-ary relational fact as a set of its role and role-value pairs.
Experimental results validate the effectiveness and merits of the proposed methods.
- Score: 61.61555159755858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the overwhelming popularity of Knowledge Graphs (KGs), researchers have
long devoted attention to link prediction for filling in missing facts.
However, they mainly focus on link prediction on binary relational data, where
facts are usually represented as triples in the form of (head entity, relation,
tail entity). In practice, n-ary relational facts are also ubiquitous. When
encountering such facts, existing studies usually decompose them into triples
by introducing a multitude of auxiliary virtual entities and additional
triples. These conversions complicate link prediction on n-ary relational data
and have even been shown to cause loss of structural information. To overcome
these problems, in this paper, we
represent each n-ary relational fact as a set of its role and role-value pairs.
We then propose a method called NaLP to conduct link prediction on n-ary
relational data, which explicitly models the relatedness of all the role and
role-value pairs in an n-ary relational fact. We further extend NaLP by
introducing type constraints of roles and role-values without any external
type-specific supervision, and proposing a more reasonable negative sampling
mechanism. Experimental results validate the effectiveness and merits of the
proposed methods.
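To make the role/role-value representation concrete, the sketch below stores one n-ary fact as a set of role and role-value pairs, scores it with a toy embedding-based relatedness function, and builds a negative sample by corrupting a single role-value. All names (the fact contents, score_fact, corrupt_value) and the scoring details are illustrative assumptions for this sketch, not the actual NaLP architecture.

# Illustrative sketch (not the exact NaLP model): an n-ary fact as a set of
# role / role-value pairs, a toy embedding-based relatedness score, and a
# simple negative-sampling step that corrupts one role-value.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Example 4-ary fact, written as role -> role-value.
fact = {
    "award": "Best_Picture",
    "nominee": "Film_X",
    "year": "2020",
    "result": "Won",
}

# Toy embedding tables for roles and role-values (random here; in a real
# model these would be learned parameters).
roles = {r: rng.normal(size=DIM) for r in fact}
values = {v: rng.normal(size=DIM) for v in set(fact.values()) | {"Film_Y"}}

def pair_embedding(role, value):
    """Embed one (role, role-value) pair by concatenating the two vectors."""
    return np.concatenate([roles[role], values[value]])

def score_fact(fact):
    """Score a fact by averaging pairwise dot products between its
    (role, role-value) pair embeddings -- a crude stand-in for the
    relatedness modelling described in the abstract."""
    pairs = [pair_embedding(r, v) for r, v in fact.items()]
    scores = [p @ q for i, p in enumerate(pairs) for q in pairs[i + 1:]]
    return float(np.mean(scores))

def corrupt_value(fact, role, new_value):
    """Build a negative sample by replacing the value of one role."""
    negative = dict(fact)
    negative[role] = new_value
    return negative

print("positive fact score:", score_fact(fact))
print("negative fact score:", score_fact(corrupt_value(fact, "nominee", "Film_Y")))

In the actual NaLP model, the pair embeddings and the relatedness evaluation are learned neural components rather than random vectors and dot products; the sketch only illustrates the set-of-pairs fact representation and the corruption-style negative sampling mentioned in the abstract.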
Related papers
- Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals [91.59906995214209]
We propose a new evaluation method, the Counterfactual Attentiveness Test (CAT).
CAT uses counterfactuals by replacing part of the input with its counterpart from a different example, expecting an attentive model to change its prediction.
We show that GPT-3 becomes less attentive as the number of demonstrations increases, while its accuracy on the test data improves.
arXiv Detail & Related papers (2023-11-16T06:27:35Z) - Revisiting Link Prediction: A Data Perspective [59.296773787387224]
Link prediction, a fundamental task on graphs, has proven indispensable in various applications, e.g., friend recommendation, protein analysis, and drug interaction prediction.
Evidence in existing literature underscores the absence of a universally best algorithm suitable for all datasets.
We recognize three fundamental factors critical to link prediction: local structural proximity, global structural proximity, and feature proximity.
arXiv Detail & Related papers (2023-10-01T21:09:59Z) - Multiple Relations Classification using Imbalanced Predictions Adaptation [0.0]
The relation classification task assigns the proper semantic relation to a pair of subject and object entities.
Current relation classification models employ additional procedures to identify multiple relations in a single sentence.
We propose a multiple relations classification model that tackles these issues through a customized output architecture and by exploiting additional input features.
arXiv Detail & Related papers (2023-09-24T18:36:22Z) - Few-shot Link Prediction on N-ary Facts [70.8150181683017]
Link Prediction on Hyper-relational Facts (LPHFs) aims to predict a missing element in a hyper-relational fact.
Few-Shot Link Prediction on Hyper-relational Facts (FSLPHFs) aims to predict a missing entity in a hyper-relational fact with limited support instances.
arXiv Detail & Related papers (2023-05-10T12:44:00Z) - Relation-dependent Contrastive Learning with Cluster Sampling for Inductive Relation Prediction [30.404149577013595]
We introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction.
The GNN-based encoder is optimized by contrastive learning, which ensures satisfactory performance on long-tail relations.
Experimental results suggest that ReCoLe outperforms state-of-the-art methods on commonly used inductive datasets.
arXiv Detail & Related papers (2022-11-22T13:30:49Z) - A Simple yet Effective Relation Information Guided Approach for Few-Shot Relation Extraction [22.60428265210431]
Few-Shot Relation Extraction aims at predicting the relation for a pair of entities in a sentence by training with a few labelled examples for each relation.
Some recent works have introduced relation information to assist model learning based on Prototype Networks.
We argue that relation information can be introduced more explicitly and effectively into the model.
arXiv Detail & Related papers (2022-05-19T13:03:01Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level
Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - Type-augmented Relation Prediction in Knowledge Graphs [65.88395564516115]
We propose a type-augmented relation prediction (TaRP) method, where we apply both the type information and instance-level information for relation prediction.
Our proposed TaRP method achieves significantly better performance than state-of-the-art methods on four benchmark datasets.
arXiv Detail & Related papers (2020-09-16T21:14:18Z)