Knowledge Graph Embedding for Link Prediction: A Comparative Analysis
- URL: http://arxiv.org/abs/2002.00819v4
- Date: Thu, 21 Jan 2021 21:15:36 GMT
- Title: Knowledge Graph Embedding for Link Prediction: A Comparative Analysis
- Authors: Andrea Rossi, Donatella Firmani, Antonio Matinata, Paolo Merialdo,
Denilson Barbosa
- Abstract summary: Link Prediction is a promising and widely studied task aimed at addressing Knowledge Graph incompleteness.
We experimentally compare effectiveness and efficiency of 16 state-of-the-art embedding-based LP methods, consider a rule-based baseline, and report detailed analysis over the most popular benchmarks in the literature.
- Score: 9.57564539646078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs (KGs) have found many applications in industry and academic
settings, which in turn, have motivated considerable research efforts towards
large-scale information extraction from a variety of sources. Despite such
efforts, it is well known that even state-of-the-art KGs suffer from
incompleteness. Link Prediction (LP), the task of predicting missing facts
among entities already registered in a KG, is a promising and widely studied
task aimed at addressing KG incompleteness. Among the recent LP techniques,
those based on KG embeddings have achieved very promising performance in some
benchmarks. Despite the fast-growing literature on the subject, insufficient
attention has been paid to the effect of the various design choices in those
methods.
Moreover, the standard practice in this area is to report accuracy by
aggregating over a large number of test facts in which some entities are
over-represented; this allows LP methods to exhibit good performance by just
attending to structural properties that include such entities, while ignoring
the remaining majority of the KG. This analysis provides a comprehensive
comparison of embedding-based LP methods, extending the dimensions of analysis
beyond what is commonly available in the literature. We experimentally compare
effectiveness and efficiency of 16 state-of-the-art methods, consider a
rule-based baseline, and report detailed analysis over the most popular
benchmarks in the literature.
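The evaluation practice discussed in the abstract is the standard entity-ranking protocol: for every test fact (h, r, t), a model scores all candidate entities, the rank of the true entity is computed (filtering out other answers known to be correct), and Mean Reciprocal Rank (MRR) and Hits@K are averaged over all test facts. The sketch below is illustrative only: it uses randomly initialized embeddings and a TransE-style scoring function as placeholders rather than any specific system compared in the paper, and it contrasts the usual micro-average over test facts with a per-entity macro-average, one simple way to expose the over-representation effect criticized above.

```python
# Illustrative sketch of filtered-ranking LP evaluation (MRR, Hits@10).
# The embeddings and TransE-style scorer are placeholders, not a method from the paper.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 1000, 20, 50
E = rng.normal(size=(n_entities, dim))   # entity embeddings (random placeholders)
R = rng.normal(size=(n_relations, dim))  # relation embeddings (random placeholders)

def score_tails(h, r):
    # TransE-style score of every candidate tail t: -||E[h] + R[r] - E[t]||
    return -np.linalg.norm(E[h] + R[r] - E, axis=1)

def filtered_rank(h, r, t, true_tails):
    # Rank of the true tail t among all entities, after filtering out
    # the other tails known to be correct answers for (h, r).
    scores = score_tails(h, r)
    others = list(true_tails - {t})
    if others:
        scores[others] = -np.inf
    return int((scores > scores[t]).sum()) + 1

def evaluate(test_triples, true_tails_of):
    ranks, per_entity = [], defaultdict(list)
    for h, r, t in test_triples:
        rk = filtered_rank(h, r, t, true_tails_of[(h, r)])
        ranks.append(rk)
        per_entity[t].append(rk)
    ranks = np.asarray(ranks, dtype=float)
    micro_mrr = float(np.mean(1.0 / ranks))   # can be dominated by frequent target entities
    hits10 = float(np.mean(ranks <= 10))
    macro_mrr = float(np.mean([np.mean(1.0 / np.asarray(r, dtype=float))
                               for r in per_entity.values()]))  # each entity weighted equally
    return micro_mrr, hits10, macro_mrr

# Tiny synthetic usage example.
test = [(int(rng.integers(n_entities)), int(rng.integers(n_relations)),
         int(rng.integers(n_entities))) for _ in range(200)]
true_tails_of = defaultdict(set)
for h, r, t in test:
    true_tails_of[(h, r)].add(t)
print(evaluate(test, true_tails_of))
```

If a handful of entities account for most test facts, the micro-averaged MRR largely reflects how well the model handles those entities, while the macro-average over entities gives a complementary view of coverage across the rest of the KG.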
Related papers
- Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency [59.6772484292295]
Knowledge graphs (KGs) generated by large language models (LLMs) are increasingly valuable for Retrieval-Augmented Generation (RAG) applications.
Existing KG extraction methods rely on prompt-based approaches, which are inefficient for processing large-scale corpora.
We propose SynthKG, a multi-step, document-level KG synthesis workflow based on LLMs.
We also design a novel graph-based retrieval framework for RAG.
arXiv Detail & Related papers (2024-10-22T00:47:54Z)
- Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs [49.547988001231424]
We propose one-shot-subgraph link prediction to achieve efficient and adaptive prediction.
The design principle is that, instead of acting directly on the whole KG, the prediction procedure is decoupled into two steps.
We achieve improved efficiency and leading performance on five large-scale benchmarks.
arXiv Detail & Related papers (2024-03-15T12:00:12Z)
- Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models [95.31941227776711]
We propose MPIKGC to compensate for the deficiency of contextualized knowledge and improve KGC by querying large language models (LLMs).
We conducted extensive evaluation of our framework based on four description-based KGC models and four datasets, for both link prediction and triplet classification tasks.
arXiv Detail & Related papers (2024-03-04T12:16:15Z)
- Beyond Transduction: A Survey on Inductive, Few Shot, and Zero Shot Link Prediction in Knowledge Graphs [2.1485350418225244]
Knowledge graphs (KGs) comprise entities interconnected by relations of different semantic meanings.
They inherently suffer from incompleteness, i.e. entities or facts about entities are missing.
A large body of work focuses on the completion of missing information in KGs, which is commonly referred to as link prediction (LP).
arXiv Detail & Related papers (2023-12-08T12:13:40Z)
- Pre-trained Embeddings for Entity Resolution: An Experimental Analysis [Experiment, Analysis & Benchmark] [65.11858854040544]
We perform a thorough experimental analysis of 12 popular language models over 17 established benchmark datasets.
First, we assess their vectorization overhead for converting all input entities into dense embedding vectors.
Second, we investigate their blocking performance, perform a detailed scalability analysis, and compare them with the state-of-the-art deep learning-based blocking method.
Third, we conclude with their relative performance for both supervised and unsupervised matching.
arXiv Detail & Related papers (2023-04-24T08:53:54Z)
- Reasoning over Multi-view Knowledge Graphs [59.99051368907095]
ROMA is a novel framework for answering logical queries over multi-view KGs.
It scales up to KGs of large sizes (e.g., millions of facts) and fine-granular views.
It generalizes to query structures and KG views that are unobserved during training.
arXiv Detail & Related papers (2022-09-27T21:32:20Z)
- KG-NSF: Knowledge Graph Completion with a Negative-Sample-Free Approach [4.146672630717471]
We propose KG-NSF, a negative-sampling-free framework for learning KG embeddings based on the cross-correlation matrices of embedding vectors (see the sketch after this list).
It is shown that the proposed method achieves comparable link prediction performance to negative sampling-based methods while converging much faster.
arXiv Detail & Related papers (2022-07-29T11:39:04Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Knowledge Graph Embedding Methods for Entity Alignment: An Experimental Review [7.241438112282638]
We conduct the first meta-level analysis of popular embedding methods for entity alignment.
Our analysis reveals statistically significant correlations between different embedding methods and various meta-features extracted from KGs.
We rank them in a statistically significant way according to their effectiveness across all real-world KGs of our testbed.
arXiv Detail & Related papers (2022-03-17T12:11:58Z)
- Efficient Knowledge Graph Validation via Cross-Graph Representation Learning [40.570585195713704]
Noisy facts are unavoidably introduced into Knowledge Graphs, for example by automatic extraction.
We propose a cross-graph representation learning framework, i.e., CrossVal, which can leverage an external KG to validate the facts in the target KG efficiently.
arXiv Detail & Related papers (2020-08-16T20:51:17Z)
- IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge [10.689559910656474]
Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of either inference rules and reasoning or supervised embedding methods.
In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques.
arXiv Detail & Related papers (2020-06-03T14:05:54Z)
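For the KG-NSF entry above, the general idea of negative-sample-free training with cross-correlation matrices can be illustrated with a Barlow-Twins-style redundancy-reduction objective: the cross-correlation matrix between two batches of standardized embedding "views" of the same positive triples is pushed toward the identity, so no corrupted (negative) triples need to be sampled. This is only a generic sketch under that assumption, not the authors' exact KG-NSF loss or architecture.

```python
# Generic sketch of a cross-correlation (Barlow-Twins-style) objective that avoids
# negative sampling. Illustrative only; not the exact KG-NSF formulation.
import numpy as np

def cross_correlation_loss(z1, z2, off_diag_weight=5e-3):
    # z1, z2: (batch, dim) embeddings of two "views" of the same positive triples,
    # e.g. a composed (head, relation) representation and the true tail embedding.
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + 1e-9)   # standardize per dimension
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + 1e-9)
    n = z1.shape[0]
    c = (z1.T @ z2) / n                                     # (dim, dim) cross-correlation
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)               # diagonal pulled toward 1
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)     # off-diagonal pulled toward 0
    return on_diag + off_diag_weight * off_diag

# Usage with synthetic data: two noisy views of the same underlying batch.
rng = np.random.default_rng(0)
base = rng.normal(size=(128, 64))
view1 = base + 0.1 * rng.normal(size=base.shape)
view2 = base + 0.1 * rng.normal(size=base.shape)
print(cross_correlation_loss(view1, view2))
```

Because this kind of objective only matches statistics of positive pairs, no negative triples have to be sampled, which is consistent with the faster convergence reported in the KG-NSF summary.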
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.