Explaining Link Predictions in Knowledge Graph Embedding Models with Influential Examples
- URL: http://arxiv.org/abs/2212.02651v1
- Date: Mon, 5 Dec 2022 23:19:02 GMT
- Title: Explaining Link Predictions in Knowledge Graph Embedding Models with Influential Examples
- Authors: Adrianna Janik, Luca Costabello
- Abstract summary: We study the problem of explaining link predictions in Knowledge Graph Embedding (KGE) models.
We propose an example-based approach that exploits the latent space representation of nodes and edges in a knowledge graph to explain predictions.
- Score: 8.892798396214065
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study the problem of explaining link predictions in Knowledge
Graph Embedding (KGE) models. We propose an example-based approach that
exploits the latent space representation of nodes and edges in a knowledge
graph to explain predictions. We evaluate the importance of the identified
triples by observing the progressive degradation of model performance as
influential triples are removed. Our experiments demonstrate that this
approach to generating explanations outperforms baselines on KGE models for
two publicly available datasets.
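A minimal sketch of this deletion-based evaluation, assuming a retraining pipeline is available; `deletion_curve`, `fit_and_score`, and the step sizes are hypothetical names for illustration, not the authors' code:

```python
from typing import Callable, List, Sequence, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def deletion_curve(
    train: List[Triple],
    influential: List[Triple],                       # ranked, most influential first
    fit_and_score: Callable[[List[Triple]], float],  # retrains and returns a test metric, e.g. MRR
    steps: Sequence[int] = (1, 5, 10),
) -> List[float]:
    """Remove the top-k influential triples, retrain, and record the metric
    after each removal; a steeper degradation means better explanations."""
    scores = []
    for k in steps:
        removed = set(influential[:k])
        kept = [t for t in train if t not in removed]
        scores.append(fit_and_score(kept))
    return scores
```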
Related papers
- KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion [18.497296711526268]
We present KGExplainer, a model-agnostic method that identifies connected subgraphs and distills an evaluator to assess them quantitatively.
Experiments on benchmark datasets demonstrate that KGExplainer achieves promising improvements and an optimal ratio of 83.3% in human evaluation (a rough sketch of the subgraph-candidate step follows this entry).
arXiv Detail & Related papers (2024-04-05T05:02:12Z)
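KGExplainer's connected-subgraph candidates (above) can be sketched roughly as follows; `candidate_subgraph` is a hypothetical helper, and the paper's distilled evaluator that scores candidates is omitted:

```python
import networkx as nx

def candidate_subgraph(kg: nx.MultiDiGraph, head: str, tail: str, hops: int = 2) -> nx.MultiDiGraph:
    """Union of the k-hop neighbourhoods around head and tail; connected
    subgraphs inside it are candidate explanations for the predicted link."""
    undirected = kg.to_undirected()
    nodes = set(nx.ego_graph(undirected, head, radius=hops))
    nodes |= set(nx.ego_graph(undirected, tail, radius=hops))
    return kg.subgraph(nodes).copy()
```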
- Path-based Explanation for Knowledge Graph Completion [17.541247786437484]
Proper explanations for the results of GNN-based Knowledge Graph Completion models increase model transparency.
Existing practices for explaining KGC tasks rely on instance/subgraph-based approaches.
We propose Power-Link, the first path-based KGC explainer that explores GNN-based models (path enumeration is sketched after this entry).
arXiv Detail & Related papers (2024-01-04T14:19:37Z)
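In contrast to subgraph candidates, the path-based view taken by Power-Link (above) starts from enumerating head-to-tail paths. This sketch shows only that enumeration step, with `explanation_paths` as a hypothetical name; the paper's learned path scoring is omitted:

```python
import networkx as nx

def explanation_paths(kg: nx.DiGraph, head: str, tail: str, cutoff: int = 3) -> list:
    """Enumerate short head-to-tail paths; a path-based explainer then
    scores each path by its contribution to the predicted link."""
    return list(nx.all_simple_paths(kg.to_undirected(), head, tail, cutoff=cutoff))
```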
- KGEx: Explaining Knowledge Graph Embeddings via Subgraph Sampling and Knowledge Distillation [6.332573781489264]
We present KGEx, a novel method that explains individual link predictions by drawing inspiration from surrogate-model research.
Given a target triple to predict, KGEx trains surrogate KGE models that we use to identify important training triples.
We conduct extensive experiments on two publicly available datasets, to demonstrate that KGEx is capable of providing explanations faithful to the black-box model.
arXiv Detail & Related papers (2023-10-02T10:20:24Z)
- A Comprehensive Study on Knowledge Graph Embedding over Relational Patterns Based on Rule Learning [49.09125100268454]
Knowledge Graph Embedding (KGE) has proven to be an effective approach to solving the Knowledge Graph Completion (KGC) task.
Relational patterns are an important factor in the performance of KGE models.
We introduce a training-free method to enhance KGE models' performance over various relational patterns.
arXiv Detail & Related papers (2023-08-15T17:30:57Z)
- CausE: Towards Causal Knowledge Graph Embedding [13.016173217017597]
Knowledge graph embedding (KGE) focuses on representing the entities and relations of a knowledge graph (KG) in continuous vector spaces.
We build a new paradigm of KGE in the context of causality and embedding disentanglement.
We propose a Causality-enhanced knowledge graph Embedding (CausE) framework.
arXiv Detail & Related papers (2023-07-21T14:25:39Z)
- Repurposing Knowledge Graph Embeddings for Triple Representation via Weak Supervision [77.34726150561087]
Current methods learn triple embeddings from scratch without utilizing entity and predicate embeddings from pre-trained models.
We develop a method for automatically sampling triples from a knowledge graph and estimating their pairwise similarities from pre-trained embedding models.
These pairwise similarity scores are then fed to a Siamese-like neural architecture to fine-tune triple representations (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-08-22T14:07:08Z)
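A minimal sketch of the weakly supervised Siamese fine-tuning described above, in PyTorch; the `TripleEncoder` architecture and the MSE objective are assumptions for illustration, not the paper's exact design. Here `a` and `b` are (head, relation, tail) embedding tuples taken from a pre-trained KGE model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripleEncoder(nn.Module):
    """Maps concatenated pre-trained (head, relation, tail) embeddings
    to a single triple vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([h, r, t], dim=-1))

def siamese_step(encoder: TripleEncoder, a, b, target_sim: torch.Tensor) -> torch.Tensor:
    """One weakly supervised step: push the cosine similarity of two encoded
    triples toward the pairwise similarity estimated from the pre-trained model."""
    sim = F.cosine_similarity(encoder(*a), encoder(*b), dim=-1)
    return F.mse_loss(sim, target_sim)
```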
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing an out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Poisoning Knowledge Graph Embeddings via Relation Inference Patterns [8.793721044482613]
We study the problem of generating data poisoning attacks against Knowledge Graph Embedding (KGE) models for the task of link prediction in knowledge graphs.
To poison KGE models, we propose to exploit their inductive abilities, which are captured through relation patterns such as symmetry, inversion, and composition in the knowledge graph (candidate generation for the symmetry pattern is sketched after this entry).
arXiv Detail & Related papers (2021-11-11T17:57:37Z)
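One of the relation patterns named above, symmetry, admits a simple candidate-generation sketch; `symmetry_poison_candidates` is a hypothetical helper, and the paper's ranking of candidates by their effect on the model's score is omitted:

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]

def symmetry_poison_candidates(target: Triple, entities: List[str]) -> List[Triple]:
    """For a target fact (h, r, t) with a symmetric relation r, a KGE model
    tends to also support (t, r, h); adding corrupted triples (t, r, h') is
    one way to disrupt that inference. Candidate generation only."""
    h, r, t = target
    return [(t, r, e) for e in entities if e != h]
```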
- RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model of word embeddings (Arora et al., 2016a) to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail); an illustrative example of such a scoring function appears after this entry.
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
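As a concrete example of what such a scoring function over (h, R, t) looks like (RelWalk derives its own function from the random-walk analysis, which is not reproduced here), DistMult's standard trilinear score is:

```latex
% Illustrative only: DistMult's trilinear score, a standard example of a
% KGE scoring function; RelWalk's derived function differs.
f(h, R, t) = \langle \mathbf{e}_h, \mathbf{w}_R, \mathbf{e}_t \rangle
           = \sum_{i} e_{h,i} \, w_{R,i} \, e_{t,i}
```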
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the fine-tuned model more robust to adversarial inputs (a minimal retrieval sketch follows this entry).
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
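A minimal retrieval sketch for the kNN approach above, assuming hidden representations have already been extracted; `influential_examples` is a hypothetical helper:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def influential_examples(train_reprs: np.ndarray, test_repr: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k training examples whose representations are closest
    to the test example's; these are the candidate 'responsible' examples."""
    index = NearestNeighbors(n_neighbors=k).fit(train_reprs)
    _, idx = index.kneighbors(test_repr.reshape(1, -1))
    return idx[0]
```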
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model (a leave-one-edge-out proxy is sketched below).
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
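The paper above learns a differentiable mask over edges; as a much simpler stand-in, a leave-one-edge-out proxy conveys the underlying question of how much each edge matters to a prediction. `edge_importance` and `predict` are hypothetical names, not the paper's method:

```python
from typing import Callable, Dict, Hashable, List

def edge_importance(
    edges: List[Hashable],
    predict: Callable[[List[Hashable]], float],
) -> Dict[Hashable, float]:
    """Score each edge by how much the model's prediction changes when that
    edge is dropped. The paper instead learns a differentiable mask over all
    edges jointly, which scales far better than this O(|E|) loop."""
    base = predict(edges)
    return {e: abs(base - predict([x for x in edges if x != e])) for e in edges}
```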