Relation-Aware Network with Attention-Based Loss for Few-Shot Knowledge
Graph Completion
- URL: http://arxiv.org/abs/2306.09519v1
- Date: Thu, 15 Jun 2023 21:41:43 GMT
- Title: Relation-Aware Network with Attention-Based Loss for Few-Shot Knowledge
Graph Completion
- Authors: Qiao Qiao, Yuepei Li, Kang Zhou, Qi Li
- Abstract summary: Current approaches randomly select one negative sample for each reference entity pair to minimize a margin-based ranking loss.
We propose a novel Relation-Aware Network with Attention-Based Loss framework.
Experiments demonstrate that RANA outperforms the state-of-the-art models on two benchmark datasets.
- Score: 9.181270251524866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The few-shot knowledge graph completion (FKGC) task aims to
predict unseen facts of a relation given only a few reference entity pairs.
Current approaches randomly select one negative sample for each reference
entity pair to minimize a margin-based ranking loss, which easily leads to a
zero-loss problem when the negative sample lies far from the positive sample
and therefore falls outside the margin. Moreover, an entity should have
different representations in different contexts. To tackle these issues, we
propose a novel Relation-Aware
Network with Attention-Based Loss (RANA) framework. Specifically, to better
utilize the plentiful negative samples and alleviate the zero-loss issue, we
strategically select relevant negative samples and design an attention-based
loss function to further differentiate the importance of each negative sample.
The intuition is that negative samples more similar to positive samples will
contribute more to the model. Further, we design a dynamic relation-aware
entity encoder that learns context-dependent entity representations.
Experiments demonstrate that RANA outperforms the state-of-the-art models on
two benchmark datasets.
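A minimal sketch of this attention-based negative weighting, assuming precomputed triple scores from any KGC scorer; the function name, margin form, and softmax temperature are illustrative assumptions, not RANA's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_based_loss(pos_score, neg_scores, margin=1.0, tau=1.0):
    """pos_score: (B,) scores of positive triples; neg_scores: (B, K)
    scores of K selected negatives per positive (higher = more plausible)."""
    # Attention over negatives: harder (higher-scoring) negatives, i.e.
    # those more similar to the positive, receive more weight.
    attn = F.softmax(neg_scores / tau, dim=1)                       # (B, K)
    # Per-negative margin ranking term; an easy negative outside the margin
    # contributes zero on its own, but with K weighted negatives the total
    # loss rarely collapses to zero as it can with one random negative.
    per_neg = F.relu(margin + neg_scores - pos_score.unsqueeze(1))  # (B, K)
    return (attn * per_neg).sum(dim=1).mean()

# Example: 8 positives, 16 selected negatives each.
loss = attention_based_loss(torch.randn(8), torch.randn(8, 16))
```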
Related papers
- Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls
and New Benchmarking [66.83273589348758]
Link prediction attempts to predict whether an unseen edge exists based on only a portion of a graph's edges.
In recent years, a flurry of methods has been introduced that use graph neural networks (GNNs) for this task.
New and diverse datasets have also been created to better evaluate the effectiveness of these new models.
arXiv Detail & Related papers (2023-06-18T01:58:59Z)
- Prototypical Graph Contrastive Learning [141.30842113683775]
We propose a Prototypical Graph Contrastive Learning (PGCL) approach to mitigate the critical sampling bias issue.
Specifically, PGCL models the underlying semantic structure of the graph data via clustering semantically similar graphs into the same group, and simultaneously encourages the clustering consistency for different augmentations of the same graph.
For a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query prototype.
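A rough sketch of this prototype-based reweighting for a single query, under assumed shapes and an assumed weighting form (distance-proportional weights with mean one); PGCL's clustering and exact weights may differ.

```python
import torch
import torch.nn.functional as F

def prototype_reweighted_loss(query, positive, negatives,
                              query_proto, neg_protos, tau=0.2):
    """query, positive: (d,); negatives: (K, d);
    query_proto: (d,) and neg_protos: (K, d) are cluster centroids."""
    q = F.normalize(query, dim=0)
    pos = torch.exp(torch.dot(q, F.normalize(positive, dim=0)) / tau)
    neg = torch.exp(F.normalize(negatives, dim=1) @ q / tau)      # (K,)
    # Weight each negative by its prototype's distance to the query's
    # prototype: negatives from semantically distant clusters count more,
    # which counteracts false negatives drawn from the query's own cluster.
    dist = (neg_protos - query_proto).norm(dim=1)                 # (K,)
    w = dist.numel() * dist / dist.sum()   # mean weight 1 keeps the scale
    return -torch.log(pos / (pos + (w * neg).sum()))
```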
arXiv Detail & Related papers (2021-06-17T16:45:31Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
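For reference, plain InfoNCE with K explicit negatives, which is the quantity the paper studies; the training effectiveness function itself is not reproduced here.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.1):
    """anchor, positive: (d,); negatives: (K, d). K is the knob being tuned:
    too few negatives under-constrain the encoder, while very large K can
    raise the chance of false negatives."""
    a = F.normalize(anchor, dim=0)
    logits = torch.cat([
        torch.dot(a, F.normalize(positive, dim=0)).unsqueeze(0),  # index 0
        F.normalize(negatives, dim=1) @ a,                        # K negatives
    ]) / tau
    # InfoNCE is cross-entropy with the positive at index 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```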
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
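A rough sketch of such a doubly contrastive objective over mini-batch class distributions; the shapes and the NT-Xent form are assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def doubly_contrastive_loss(p, p_aug, tau=0.5):
    """p, p_aug: (N, C) softmax class distributions for a mini-batch and its
    augmentation. Rows are per-sample distributions (sample view); columns
    are per-class assignment profiles over the batch (class view)."""
    def nt_xent(x, y):
        # Contrast each row of x against all rows of y: the matching row is
        # the positive, every other row is a negative.
        x, y = F.normalize(x, dim=1), F.normalize(y, dim=1)
        return F.cross_entropy(x @ y.t() / tau, torch.arange(x.size(0)))
    return nt_xent(p, p_aug) + nt_xent(p.t(), p_aug.t())
```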
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Relation-aware Graph Attention Model With Adaptive Self-adversarial Training [29.240686573485718]
This paper describes an end-to-end solution for the relationship prediction task in heterogeneous, multi-relational graphs.
We particularly address two building blocks in the pipeline, namely heterogeneous graph representation learning and negative sampling.
We introduce a parameter-free negative sampling technique -- adaptive self-adversarial (ASA) negative sampling.
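The summary above does not spell out ASA's criterion; as background, here is the standard self-adversarial weighting it builds on (in the style of RotatE), where the model's own scores weight harder negatives more, without extra parameters. ASA additionally adapts the hardness target to avoid likely false negatives.

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_score, neg_scores, alpha=1.0):
    """pos_score: (B,); neg_scores: (B, K); higher score = more plausible."""
    # Weight negatives by the model's current belief in them; detach so the
    # weights act as a sampling distribution, not a gradient path.
    w = F.softmax(alpha * neg_scores, dim=1).detach()
    pos_term = -F.logsigmoid(pos_score)                     # pull positives up
    neg_term = -(w * F.logsigmoid(-neg_scores)).sum(dim=1)  # push negatives down
    return (pos_term + neg_term).mean()
```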
arXiv Detail & Related papers (2021-02-14T16:11:56Z)
- Conditional Negative Sampling for Contrastive Learning of Visual Representations [19.136685699971864]
We show that choosing difficult negatives, or those more similar to the current instance, can yield stronger representations.
We introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
We prove that these estimators lower-bound mutual information, with higher bias but lower variance than NCE.
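A minimal sketch of the "ring" idea: keep only candidates whose similarity to the anchor falls inside an annulus, so negatives are hard but not so close that they are likely false negatives. The thresholds are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def ring_negatives(anchor, candidates, lower=0.3, upper=0.8):
    """anchor: (d,); candidates: (M, d). Returns candidates whose cosine
    similarity to the anchor lies strictly inside (lower, upper)."""
    sims = F.normalize(candidates, dim=1) @ F.normalize(anchor, dim=0)
    return candidates[(sims > lower) & (sims < upper)]
```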
arXiv Detail & Related papers (2020-10-05T14:17:32Z)
- EqCo: Equivalent Rules for Self-supervised Contrastive Learning [81.45848885547754]
We propose a method to make self-supervised learning irrelevant to the number of negative samples in InfoNCE-based contrastive learning frameworks.
Inspired by the InfoMax principle, we point out that the margin term in the contrastive loss needs to be adaptively scaled according to the number of negative pairs.
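One common way to write such a K-invariant objective: rescale the negative term by alpha/K so that changing the number of negatives does not change the effective margin. This is a sketch of the idea; EqCo's exact rule may differ.

```python
import torch

def k_invariant_info_nce(pos_logit, neg_logits, alpha=256.0):
    """pos_logit: scalar tensor; neg_logits: (K,), both already divided by
    the temperature. alpha plays the role of a nominal negative count."""
    k = neg_logits.numel()
    # Scaling the negative partition by alpha / K makes the loss (and its
    # implicit margin) approximately independent of K.
    denom = pos_logit.exp() + (alpha / k) * neg_logits.exp().sum()
    return denom.log() - pos_logit
```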
arXiv Detail & Related papers (2020-10-05T11:39:04Z)
- Structure Aware Negative Sampling in Knowledge Graphs [18.885368822313254]
A crucial aspect of contrastive learning approaches is the choice of corruption distribution that generates hard negative samples.
We propose Structure Aware Negative Sampling (SANS), an inexpensive negative sampling strategy that utilizes the rich graph structure by selecting negative samples from a node's k-hop neighborhood.
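A minimal sketch of structure-aware corruption: replace a triple's tail with an entity drawn from the head's k-hop neighborhood rather than uniformly at random. The graph representation and helper names are assumptions.

```python
import random
from collections import deque

def k_hop_neighbors(adj, node, k):
    """adj: dict mapping a node to its set of neighbors; BFS out to k hops."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        u, depth = frontier.popleft()
        if depth == k:
            continue
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append((v, depth + 1))
    seen.discard(node)
    return list(seen)

def sans_negative(adj, head, rel, tail, k=2):
    """Corrupt the tail with a nearby entity: graph-close to the head but not
    the true tail, so it tends to be a hard (structure-aware) negative."""
    candidates = [v for v in k_hop_neighbors(adj, head, k) if v != tail]
    return (head, rel, random.choice(candidates)) if candidates else None
```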
arXiv Detail & Related papers (2020-09-23T19:57:00Z)
- SCE: Scalable Network Embedding from Sparsest Cut [20.08464038805681]
Large-scale network embedding aims to learn a latent representation for each node in an unsupervised manner.
A key to the success of such contrastive learning methods is how positive and negative samples are drawn.
In this paper, we propose SCE for unsupervised network embedding only using negative samples for training.
arXiv Detail & Related papers (2020-06-30T03:18:15Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
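A very rough skeleton of reinforcement-learned negative selection: a policy scores candidate items conditioned on the user, samples one as the negative, and is updated with REINFORCE. All names and the reward are hypothetical; kgPolicy's knowledge-aware navigation from the positive interaction is not reproduced.

```python
import torch
import torch.nn as nn

class NegativePolicy(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, user_emb, cand_embs):
        # Score each candidate negative conditioned on the user embedding.
        u = user_emb.unsqueeze(0).expand(cand_embs.size(0), -1)
        logits = self.score(torch.cat([u, cand_embs], dim=1)).squeeze(1)
        return torch.log_softmax(logits, dim=0)

def reinforce_step(policy, optimizer, user_emb, cand_embs, reward_fn):
    log_probs = policy(user_emb, cand_embs)
    idx = torch.multinomial(log_probs.exp(), 1).item()  # sample a negative
    reward = reward_fn(idx)   # e.g. how much this negative helps the recommender
    loss = -log_probs[idx] * reward                     # REINFORCE update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return idx
```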
arXiv Detail & Related papers (2020-03-12T12:44:30Z)