HaSa: Hardness and Structure-Aware Contrastive Knowledge Graph Embedding
- URL: http://arxiv.org/abs/2305.10563v2
- Date: Sun, 15 Oct 2023 02:15:41 GMT
- Title: HaSa: Hardness and Structure-Aware Contrastive Knowledge Graph Embedding
- Authors: Honggen Zhang, June Zhang, Igor Molybog
- Abstract summary: We consider a contrastive learning approach to knowledge graph embedding (KGE) via InfoNCE.
We argue that the generation of high-quality (i.e., hard) negative triples might lead to an increase in false negative triples.
To mitigate the impact of false negative triples during the generation of hard negative triples, we propose the Hardness and Structure-aware (HaSa) contrastive KGE method.
- Score: 2.395887395376882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a contrastive learning approach to knowledge graph embedding
(KGE) via InfoNCE. For KGE, efficient learning relies on augmenting the
training data with negative triples. However, most KGE works overlook the bias
introduced by generating negative triples: false negative triples (factual triples
missing from the knowledge graph). We argue that the generation of high-quality
(i.e., hard) negative triples might lead to an increase in false negative
triples. To mitigate the impact of false negative triples during the generation
of hard negative triples, we propose the Hardness and Structure-aware
(HaSa) contrastive KGE method, which alleviates the effect of false
negative triples while generating the hard negative triples. Experiments show
that HaSa improves the performance of InfoNCE-based KGE approaches and achieves
state-of-the-art results in several metrics on the WN18RR dataset and competitive
results on the FB15k-237 dataset compared to both classic and pre-trained
LM-based KGE methods.
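For orientation, the following is a minimal PyTorch sketch of an InfoNCE-style loss over one positive triple and K negative triples per example; the score inputs, the temperature tau, and the optional down-weighting of suspected false negatives are illustrative assumptions, not the actual HaSa implementation.

```python
import torch
import torch.nn.functional as F

def infonce_kge_loss(pos_score, neg_scores, neg_weights=None, tau=0.5):
    """Illustrative InfoNCE loss: one positive triple vs. K negatives per example.

    pos_score:   (B,)    score of each positive triple under the current model
    neg_scores:  (B, K)  scores of K sampled negative triples per positive
    neg_weights: optional (B, K) values in (0, 1] that down-weight negatives
                 suspected to be false negatives (hypothetical mechanism).
    """
    logits = torch.cat([pos_score.unsqueeze(1), neg_scores], dim=1) / tau  # (B, 1+K)
    if neg_weights is not None:
        # Shrink the contribution of suspected false negatives.
        log_w = torch.cat([torch.zeros_like(pos_score).unsqueeze(1),
                           neg_weights.clamp_min(1e-6).log()], dim=1)
        logits = logits + log_w
    # The positive triple always sits at index 0 of the logits.
    targets = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, targets)

# Toy example: batch of 4 positives, 16 sampled negatives each.
loss = infonce_kge_loss(torch.randn(4), torch.randn(4, 16),
                        neg_weights=torch.rand(4, 16))
print(loss.item())
```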
Related papers
- Untargeted Adversarial Attack on Knowledge Graph Embeddings [18.715565468700227]
Knowledge graph embedding (KGE) methods have achieved great success in handling various knowledge graph (KG) downstream tasks.
Some recent studies propose adversarial attacks to investigate the vulnerabilities of KGE methods, but these attacks are target-oriented and tied to a specific KGE method.
In this work, we explore untargeted attacks that aim to reduce the global performance of KGE methods over a set of unknown test triples.
arXiv Detail & Related papers (2024-05-08T18:08:11Z)
- Negative Sampling with Adaptive Denoising Mixup for Knowledge Graph Embedding [36.24764655034505]
Knowledge graph embedding (KGE) aims to map entities and relations of a knowledge graph (KG) into a low-dimensional and dense vector space via contrasting the positive and negative triples.
Negative sampling is essential to find high-quality negative triples since KGs only contain positive triples.
Most existing negative sampling methods assume that non-existent triples with high scores are high-quality negative triples.
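To make the corruption-based sampling concrete, here is a minimal sketch that replaces the head or tail of a factual triple with a random entity and skips candidates already present in the KG (a simple guard against false negatives); the triple format and filtering rule are generic assumptions, not the adaptive denoising mixup procedure of this paper.

```python
import random

def corrupt_triples(triple, entities, known_triples, k=5, seed=0):
    """Generate up to k negative triples by corrupting the head or the tail.

    Candidates that already appear in `known_triples` are skipped, since they
    are factual triples and would become false negatives.
    """
    rng = random.Random(seed)
    h, r, t = triple
    negatives, tries = [], 0
    while len(negatives) < k and tries < 100 * k:
        tries += 1
        e = rng.choice(entities)
        cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if cand != triple and cand not in known_triples and cand not in negatives:
            negatives.append(cand)
    return negatives

kg = {("Paris", "capital_of", "France"), ("Berlin", "capital_of", "Germany")}
print(corrupt_triples(("Paris", "capital_of", "France"),
                      ["Paris", "Berlin", "France", "Germany", "Tokyo"], kg))
```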
arXiv Detail & Related papers (2023-10-15T09:01:24Z)
- KGEx: Explaining Knowledge Graph Embeddings via Subgraph Sampling and Knowledge Distillation [6.332573781489264]
We present KGEx, a novel method that explains individual link predictions by drawing inspiration from surrogate models research.
Given a target triple to predict, KGEx trains surrogate KGE models that we use to identify important training triples.
We conduct extensive experiments on two publicly available datasets, to demonstrate that KGEx is capable of providing explanations faithful to the black-box model.
arXiv Detail & Related papers (2023-10-02T10:20:24Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Repurposing Knowledge Graph Embeddings for Triple Representation via Weak Supervision [77.34726150561087]
Current methods learn triple embeddings from scratch without utilizing entity and predicate embeddings from pre-trained models.
We develop a method for automatically sampling triples from a knowledge graph and estimating their pairwise similarities from pre-trained embedding models.
These pairwise similarity scores are then fed to a Siamese-like neural architecture to fine-tune triple representations.
arXiv Detail & Related papers (2022-08-22T14:07:08Z)
- MixKG: Mixing for harder negative samples in knowledge graph [33.4379457065033]
Knowledge graph embedding (KGE) aims to represent entities and relations as low-dimensional vectors for many real-world applications.
We introduce an inexpensive but effective method called MixKG to generate harder negative samples for knowledge graphs.
Experiments on two public datasets and four classical KGE methods show MixKG is superior to previous negative sampling algorithms.
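As a rough illustration of "mixing" for harder negatives, the sketch below selects the highest-scoring sampled negatives and interpolates random pairs of their embeddings; the selection rule and interpolation coefficient are simplifying assumptions and do not reproduce MixKG's exact procedure.

```python
import torch

def mix_hard_negatives(neg_emb, neg_scores, num_mix=4, alpha=0.5):
    """Pick the highest-scoring (hardest) sampled negatives and mix random
    pairs of their embeddings into synthetic, potentially harder negatives.

    neg_emb:    (K, d) embeddings of sampled negative entities
    neg_scores: (K,)   scores of those negatives under the current model
    Returns a (num_mix, d) tensor of convex combinations.
    """
    top = torch.topk(neg_scores, k=min(2 * num_mix, neg_scores.numel())).indices
    perm = top[torch.randperm(top.numel())]
    a, b = perm[:num_mix], perm[num_mix:2 * num_mix]
    return alpha * neg_emb[a] + (1.0 - alpha) * neg_emb[b]

# Toy example: 16 sampled negatives with 64-dimensional embeddings.
synthetic = mix_hard_negatives(torch.randn(16, 64), torch.randn(16))
print(synthetic.shape)  # torch.Size([4, 64])
```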
arXiv Detail & Related papers (2022-02-19T13:31:06Z)
- KGE-CL: Contrastive Learning of Knowledge Graph Embeddings [64.67579344758214]
We propose a simple yet efficient contrastive learning framework for knowledge graph embeddings.
It can shorten the semantic distance between related entities and entity-relation pairs that appear in different triples.
It can yield some new state-of-the-art results, achieving 51.2% MRR, 46.8% Hits@1 on the WN18RR dataset, and 59.1% MRR, 51.8% Hits@1 on the YAGO3-10 dataset.
arXiv Detail & Related papers (2021-12-09T12:45:33Z)
- How Does Knowledge Graph Embedding Extrapolate to Unseen Data: a Semantic Evidence View [13.575052133743505]
We study how Knowledge Graph Embedding (KGE) extrapolates to unseen data.
We also propose a novel GNN-based KGE model, called Semantic Evidence aware Graph Neural Network (SE-GNN).
arXiv Detail & Related papers (2021-09-24T08:17:02Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
- RelWalk A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
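For readers unfamiliar with KGE scoring functions, a conventional translational (TransE-style) score between a head h, relation R, and tail t is sketched below; RelWalk derives its own scoring function from the latent-variable analysis, so this is only a generic example.

```python
import torch

def transe_score(h_emb, r_emb, t_emb, p=2):
    """Generic translational score: the closer h + R is to t, the higher
    (less negative) the score. Each argument has shape (d,) or (B, d)."""
    return -torch.norm(h_emb + r_emb - t_emb, p=p, dim=-1)

# Toy example with random 50-dimensional embeddings for (h, R, t).
print(transe_score(torch.randn(50), torch.randn(50), torch.randn(50)).item())
```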
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)