KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion
- URL: http://arxiv.org/abs/2404.03893v1
- Date: Fri, 5 Apr 2024 05:02:12 GMT
- Title: KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion
- Authors: Tengfei Ma, Xiang Song, Wen Tao, Mufei Li, Jiani Zhang, Xiaoqin Pan, Jianxin Lin, Bosheng Song, Xiangxiang Zeng
- Abstract summary: We present KGExplainer, a model-agnostic method that identifies connected subgraphs and distills an evaluator to assess them quantitatively.
Experiments on benchmark datasets demonstrate that KGExplainer yields promising improvements and achieves an optimal ratio of 83.3% in human evaluation.
- Score: 18.497296711526268
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graph completion (KGC) aims to alleviate the inherent incompleteness of knowledge graphs (KGs) and is a critical task for various applications, such as recommendations on the web. Although knowledge graph embedding (KGE) models have demonstrated superior predictive performance on KGC tasks, they infer missing links in a black-box manner that lacks transparency and accountability, preventing researchers from developing accountable models. Existing KGE-based explanation methods focus on exploring key paths or isolated edges as explanations, which carry too little information to reason about the target prediction. Additionally, the lack of ground-truth explanations leaves these methods unable to evaluate explored explanations quantitatively. To overcome these limitations, we propose KGExplainer, a model-agnostic method that identifies connected subgraph explanations and distills an evaluator to assess them quantitatively. KGExplainer employs a perturbation-based greedy search algorithm to find key connected subgraphs as explanations within the local structure of target predictions. To evaluate the quality of the explored explanations, KGExplainer distills an evaluator from the target KGE model; by forwarding the explanations to the evaluator, our method can examine their fidelity. Extensive experiments on benchmark datasets demonstrate that KGExplainer yields promising improvements and achieves an optimal ratio of 83.3% in human evaluation.
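As an illustration of the approach described in the abstract, the sketch below implements a perturbation-based greedy search over the local subgraph of a target triple: it repeatedly drops the edge whose removal hurts a plausibility score the least while keeping the subgraph connected, then checks fidelity against a scorer standing in for the distilled evaluator. This is a minimal, hypothetical sketch; the function names, the edge budget `max_edges`, and the dummy scorer are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the kind of perturbation-based greedy search
# described in the abstract: prune edges from the local subgraph around a
# target triple, keeping the subgraph connected, and measure fidelity with a
# stand-in scorer. Names and the edge budget are illustrative assumptions.
from collections import defaultdict


def stays_connected(edges, must_contain):
    """True if `edges` (triples) form one connected component covering `must_contain`."""
    adj = defaultdict(set)
    nodes = set(must_contain)
    for h, _, t in edges:
        adj[h].add(t)
        adj[t].add(h)
        nodes.update((h, t))
    start = next(iter(must_contain))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes


def greedy_explanation(target, local_edges, score_fn, max_edges=3):
    """Greedily drop the edge whose removal hurts `score_fn` the least,
    as long as the remaining subgraph stays connected and links head to tail."""
    head, _, tail = target
    kept = list(local_edges)
    while len(kept) > max_edges:
        best = None  # (score, candidate edge set)
        for e in kept:
            candidate = [x for x in kept if x != e]
            if not stays_connected(candidate, {head, tail}):
                continue
            s = score_fn(target, candidate)
            if best is None or s > best[0]:
                best = (s, candidate)
        if best is None:  # no edge can be removed without disconnecting the subgraph
            break
        kept = best[1]
    return kept


def fidelity(target, explanation, score_fn, full_edges):
    """Fidelity proxy: fraction of the full-context score retained by the explanation alone."""
    return score_fn(target, explanation) / score_fn(target, full_edges)


if __name__ == "__main__":
    # Toy knowledge graph and a dummy scorer that just counts edges touching
    # the target's head or tail entity (a real run would query the distilled evaluator).
    target = ("alice", "works_at", "acme")
    local_edges = [
        ("alice", "lives_in", "paris"),
        ("paris", "located_in", "france"),
        ("alice", "colleague_of", "bob"),
        ("bob", "works_at", "acme"),
        ("acme", "based_in", "paris"),
    ]
    dummy_score = lambda triple, edges: sum(triple[0] in e or triple[2] in e for e in edges)
    expl = greedy_explanation(target, local_edges, dummy_score)
    print("explanation:", expl)
    print("fidelity:", fidelity(target, expl, dummy_score, local_edges))
```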
Related papers
- Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models [95.31941227776711]
We propose MPIKGC to compensate for the deficiency of contextualized knowledge and improve KGC by querying large language models (LLMs).
We conducted extensive evaluation of our framework based on four description-based KGC models and four datasets, for both link prediction and triplet classification tasks.
arXiv Detail & Related papers (2024-03-04T12:16:15Z) - Negative Sampling in Knowledge Graph Representation Learning: A Review [2.6703221234079946]
Knowledge Graph Representation Learning (KGRL) is essential for AI applications like knowledge construction and information retrieval.
generating high-quality negative samples from existing knowledge graphs is challenging.
This paper systematically reviews various negative sampling (NS) methods and their contributions to the success of KGRL.
arXiv Detail & Related papers (2024-02-29T14:26:20Z) - Path-based Explanation for Knowledge Graph Completion [17.541247786437484]
Proper explanations for the results of GNN-based Knowledge Graph Completion models increase model transparency.
Existing practices for explaining KGC tasks rely on instance/subgraph-based approaches.
We propose Power-Link, the first path-based KGC explainer that explores GNN-based models.
arXiv Detail & Related papers (2024-01-04T14:19:37Z) - Anchoring Path for Inductive Relation Prediction in Knowledge Graphs [69.81600732388182]
APST takes both APs and CPs as the inputs of a unified Sentence Transformer architecture.
We evaluate APST on three public datasets and achieve state-of-the-art (SOTA) performance in 30 of 36 transductive, inductive, and few-shot experimental settings.
arXiv Detail & Related papers (2023-12-21T06:02:25Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations [21.997015999698732]
Diverse explainability methods of graph neural networks (GNN) have been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.
It is not yet clear how to evaluate the correctness of those explanations, whether from a human or a model perspective.
We propose GInX-Eval, an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness.
arXiv Detail & Related papers (2023-09-28T07:56:10Z) - Studying How to Efficiently and Effectively Guide Models with Explanations [52.498055901649025]
'Model guidance' is the idea of regularizing the models' explanations to ensure that they are "right for the right reasons".
We conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Specifically, we guide the models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks.
arXiv Detail & Related papers (2023-03-21T15:34:50Z) - Explaining Link Predictions in Knowledge Graph Embedding Models with Influential Examples [8.892798396214065]
We study the problem of explaining link predictions in Knowledge Graph Embedding (KGE) models.
We propose an example-based approach that exploits the latent space representation of nodes and edges in a knowledge graph to explain predictions.
arXiv Detail & Related papers (2022-12-05T23:19:02Z) - Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z) - RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
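For context on the scoring-function mention in the RelWalk entry above: a KGE scoring function measures the plausibility of a triple (h, R, t) from the entity and relation embeddings. The snippet below shows the standard TransE form purely as a generic illustration; it is not the scoring function derived in the RelWalk paper, whose exact form is not reproduced here.

```python
# Generic illustration of what a KGE "scoring function" over a triple (h, R, t)
# looks like. This is the standard TransE form, shown only for context; it is
# NOT the scoring function derived in the RelWalk paper.
import numpy as np


def transe_score(h, r, t):
    """Higher (less negative) means the triple (head, relation, tail) is more plausible."""
    return -np.linalg.norm(h + r - t)


h, r, t = np.random.randn(3, 50)  # random 50-dimensional embeddings
print(transe_score(h, r, t))
```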