LinkLogic: A New Method and Benchmark for Explainable Knowledge Graph Predictions
- URL: http://arxiv.org/abs/2406.00855v1
- Date: Sun, 2 Jun 2024 20:22:22 GMT
- Title: LinkLogic: A New Method and Benchmark for Explainable Knowledge Graph Predictions
- Authors: Niraj Kumar-Singh, Gustavo Polleti, Saee Paliwal, Rachel Hodos-Nkhereanye
- Abstract summary: We present an in-depth exploration of a simple link prediction explanation method we call LinkLogic.
We construct the first-ever link prediction explanation benchmark, based on family structures present in the FB13 dataset.
- Score: 0.5999777817331317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While there are a plethora of methods for link prediction in knowledge graphs, state-of-the-art approaches are often black box, obfuscating model reasoning and thereby limiting the ability of users to make informed decisions about model predictions. Recently, methods have emerged to generate prediction explanations for Knowledge Graph Embedding models, a widely-used class of methods for link prediction. The question then becomes, how well do these explanation systems work? To date this has generally been addressed anecdotally, or through time-consuming user research. In this work, we present an in-depth exploration of a simple link prediction explanation method we call LinkLogic, that surfaces and ranks explanatory information used for the prediction. Importantly, we construct the first-ever link prediction explanation benchmark, based on family structures present in the FB13 dataset. We demonstrate the use of this benchmark as a rich evaluation sandbox, probing LinkLogic quantitatively and qualitatively to assess the fidelity, selectivity and relevance of the generated explanations. We hope our work paves the way for more holistic and empirical assessment of knowledge graph prediction explanation methods in the future.
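To ground what follows, here is a minimal Python sketch of the perturb-and-rank pattern that such explanation methods follow: rank a predicted link's neighboring triples by how much disturbing each one changes the embedding model's score. The TransE-style scorer, the jitter used as a stand-in for retraining, and every name below are illustrative assumptions, not LinkLogic's actual procedure.

```python
import numpy as np

# Hypothetical sketch: rank candidate explanation triples for a predicted
# link by how much perturbing them changes a TransE-style score. The jitter
# below is a crude stand-in for retraining without the triple.

rng = np.random.default_rng(0)
dim = 16
entities = {name: rng.normal(size=dim) for name in ("anna", "ben", "carl")}
relations = {name: rng.normal(size=dim) for name in ("parent_of", "sibling_of")}

def transe_score(h, r, t):
    # Higher is better: negative L2 distance of (h + r) from t.
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

def rank_explanations(pred, neighbor_triples, noise=0.1):
    """Approximate each triple's influence on the prediction's score."""
    base = transe_score(*pred)
    ranked = []
    for (h, r, t) in neighbor_triples:
        saved = {k: entities[k].copy() for k in (h, t)}
        for k in (h, t):  # jitter the entities this triple touches
            entities[k] = entities[k] + rng.normal(scale=noise, size=dim)
        ranked.append(((h, r, t), base - transe_score(*pred)))
        entities.update(saved)  # restore the original embeddings
    return sorted(ranked, key=lambda x: -x[1])

prediction = ("anna", "sibling_of", "ben")
candidates = [("carl", "parent_of", "anna"), ("carl", "parent_of", "ben")]
for triple, influence in rank_explanations(prediction, candidates):
    print(triple, round(float(influence), 4))
```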
Related papers
- Improving rule mining via embedding-based link prediction [2.422410293747519]
Rule mining on knowledge graphs allows for explainable link prediction.
Several approaches combining the two families (rule mining and embedding-based link prediction) have been proposed in recent years.
We propose a new way to combine the two families of approaches.
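As an illustration of one standard combination pattern (a sketch of the general idea, not necessarily this paper's method), a mined rule can propose candidate links that an embedding model then filters; all names below are toy placeholders.

```python
# Sketch of one common combination pattern: a mined Horn rule proposes
# candidate links, and a trained embedding scorer filters them. The rule,
# the scorer, and the threshold here are all toy placeholders.

def apply_rule(graph, body_rel, head_rel):
    """Rule (x, body_rel, y) => (x, head_rel, y): return proposed triples."""
    return [(h, head_rel, t) for (h, r, t) in graph if r == body_rel]

def filter_by_embedding(candidates, score_fn, threshold):
    return [c for c in candidates if score_fn(*c) >= threshold]

graph = [("anna", "married_to", "ben"), ("carl", "married_to", "dana")]
proposed = apply_rule(graph, "married_to", "spouse_of")
# In practice score_fn is a trained KGE model; a constant stands in here.
kept = filter_by_embedding(proposed, lambda h, r, t: 1.0, threshold=0.5)
print(kept)
```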
arXiv Detail & Related papers (2024-06-14T15:53:30Z)
- Evaluating Link Prediction Explanations for Graph Neural Networks [0.0]
We provide metrics to assess the quality of link prediction explanations, with or without ground-truth.
We discuss how underlying assumptions and technical details specific to the link prediction task, such as the choice of distance between node embeddings, can influence the quality of the explanations.
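A common instance of such a metric is deletion-based fidelity: an explanation is credited when removing its edges changes the model's prediction. The sketch below is a hedged illustration of that idea with a toy predictor; the paper's exact metric definitions differ.

```python
# Hedged illustration of a deletion-based fidelity metric: how much does
# the predicted probability drop when the explanation's edges are removed?

def fidelity(predict_fn, graph, link, explanation_edges):
    """predict_fn(graph, link) -> probability that `link` exists."""
    removed = set(explanation_edges)
    return predict_fn(graph, link) - predict_fn(
        [e for e in graph if e not in removed], link)

# Toy predictor: probability grows with the number of shared neighbors.
def toy_predict(graph, link):
    h, t = link
    nbrs = lambda x: {b for a, b in graph if a == x} | {a for a, b in graph if b == x}
    return min(1.0, 0.5 * len(nbrs(h) & nbrs(t)))

g = [("a", "c"), ("b", "c"), ("a", "d"), ("b", "d")]
print(fidelity(toy_predict, g, ("a", "b"), [("a", "c"), ("b", "c")]))  # 0.5
```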
arXiv Detail & Related papers (2023-08-03T10:48:37Z)
- Towards Few-shot Inductive Link Prediction on Knowledge Graphs: A Relational Anonymous Walk-guided Neural Process Approach [49.00753238429618]
Few-shot inductive link prediction on knowledge graphs aims to predict missing links for unseen entities given only a few observed links.
Recent inductive methods utilize the sub-graphs around unseen entities to obtain the semantics and predict links inductively.
We propose a novel relational anonymous walk-guided neural process for few-shot inductive link prediction on knowledge graphs, denoted as RawNP.
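The relational anonymous walk at the core of this idea is easy to state: relabel each relation along a walk by the order of its first appearance, so entity- and relation-specific paths collapse into reusable motifs. A minimal sketch (function name assumed):

```python
# Relational anonymous walk, as commonly defined: relabel each relation by
# the order of its first appearance along the walk, turning concrete paths
# into entity-agnostic relational motifs.

def anonymize_relations(walk_relations):
    first_seen, pattern = {}, []
    for r in walk_relations:
        first_seen.setdefault(r, len(first_seen))
        pattern.append(first_seen[r])
    return tuple(pattern)

# Two different concrete walks share one motif, enabling few-shot transfer.
print(anonymize_relations(["born_in", "capital_of", "born_in"]))    # (0, 1, 0)
print(anonymize_relations(["works_at", "located_in", "works_at"]))  # (0, 1, 0)
```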
arXiv Detail & Related papers (2023-06-26T12:02:32Z)
- UKP-SQuARE v2: Explainability and Adversarial Attacks for Trustworthy QA [47.8796570442486]
Question Answering systems are increasingly deployed in applications where they support real-world decisions.
Inherently interpretable models or post hoc explainability methods can help users to comprehend how a model arrives at its prediction.
We introduce SQuARE v2, the new version of SQuARE, to provide an explainability infrastructure for comparing models.
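As a minimal example of the post hoc family mentioned above (a generic sketch, not SQuARE's actual API), occlusion saliency masks each input token in turn and reports the resulting score drop:

```python
# Generic occlusion saliency for a QA-style scorer (not SQuARE's API):
# mask each token in turn and report the drop in the model's score.

def occlusion_saliency(score_fn, tokens, mask="[MASK]"):
    base = score_fn(tokens)
    return {tok: base - score_fn(tokens[:i] + [mask] + tokens[i + 1:])
            for i, tok in enumerate(tokens)}

# Toy scorer: fraction of tokens overlapping a fixed "answer-bearing" set.
toy = lambda toks: sum(t in {"paris", "capital"} for t in toks) / len(toks)
print(occlusion_saliency(toy, ["paris", "is", "the", "capital"]))
```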
arXiv Detail & Related papers (2022-08-19T13:01:01Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
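The paper trains an RL policy that adds explanatory edges one at a time; as a hedged stand-in for that learned policy, a greedy loop can pick, at each step, the edge whose addition most raises the model's confidence on the growing explanation subgraph:

```python
# Greedy stand-in for a learned edge-adding policy: at each step, append
# the edge whose inclusion most raises the model's confidence when it is
# run on the explanation subgraph alone. All names are illustrative.

def greedy_explain(predict_fn, edges, k):
    chosen = []
    for _ in range(k):
        best = max((e for e in edges if e not in chosen),
                   key=lambda e: predict_fn(chosen + [e]))
        chosen.append(best)
    return chosen

# Toy model: confidence = fraction of the truly causal edges included.
causal = {("a", "b"), ("b", "c")}
toy = lambda subgraph: len(set(subgraph) & causal) / len(causal)
print(greedy_explain(toy, [("a", "b"), ("b", "c"), ("c", "d")], k=2))
```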
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- KGRefiner: Knowledge Graph Refinement for Improving Accuracy of Translational Link Prediction Methods [4.726777092009553]
This paper proposes a method for refining the knowledge graph.
It makes the knowledge graph more informative, so that link prediction can be performed more accurately.
Our experiments show that our method can significantly increase the performance of translational link prediction methods.
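Reading between the lines of the summary, refinement here means densifying the graph with auxiliary triples (for example, hierarchy or class information) so translational models see more context per entity. The following is a loose sketch under that assumption, with all names hypothetical:

```python
# Loose sketch of graph refinement via auxiliary class triples (one way to
# make a KG "more informative"; the paper's exact construction differs, and
# the relation names here are hypothetical).

def add_class_triples(triples, entity_to_class):
    return triples + [(e, "instance_of", c) for e, c in entity_to_class.items()]

kg = [("aspirin", "treats", "headache")]
print(add_class_triples(kg, {"aspirin": "drug", "headache": "symptom"}))
```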
arXiv Detail & Related papers (2021-06-27T13:32:39Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
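Since the task demands structurally valid graphs, here is a hedged sketch of two plausible well-formedness checks (the paper specifies the exact constraints): connectivity, plus coverage of both belief and argument concepts.

```python
# Two plausible well-formedness checks for explanation graphs (assumed for
# illustration; the paper specifies the exact constraints): the graph is
# connected, and it mentions concepts from both the belief and the argument.

def is_connected(edges):
    nodes = {n for e in edges for n in (e[0], e[2])}
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack += [t for h, _, t in edges if h == n]
        stack += [h for h, _, t in edges if t == n]
    return seen == nodes

def covers(edges, belief_concepts, argument_concepts):
    nodes = {n for e in edges for n in (e[0], e[2])}
    return bool(nodes & belief_concepts) and bool(nodes & argument_concepts)

g = [("vaccines", "capable of", "saving lives"),
     ("saving lives", "desires", "public health")]
print(is_connected(g), covers(g, {"vaccines"}, {"public health"}))
```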
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
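A minimal sketch of the diversity idea, with an illustrative cosine-similarity penalty rather than the paper's exact objective: penalize pairwise similarity among the K latent perturbations so each counterfactual changes the input in a distinct way.

```python
import numpy as np

# Illustrative diversity penalty (not the paper's exact loss): the mean
# pairwise cosine similarity among K latent perturbations. Minimizing it
# pushes the K counterfactuals to change the input in distinct ways.

def diversity_penalty(perturbations):
    """perturbations: (K, d) array of latent-space deltas."""
    z = perturbations / np.linalg.norm(perturbations, axis=1, keepdims=True)
    sims = z @ z.T
    return float(np.mean(sims[~np.eye(len(z), dtype=bool)]))  # lower = more diverse

rng = np.random.default_rng(0)
print(diversity_penalty(rng.normal(size=(4, 8))))
```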
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs [49.6661602019124]
We study a spectrum of models derived by generalizing the current state of the art for few-shot link prediction.
We find that a simple zero-shot baseline - which ignores any relation-specific information - achieves surprisingly strong performance.
Experiments on carefully crafted synthetic datasets show that having only a few examples of a relation fundamentally limits models from using fine-grained structural information.
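A sketch of the kind of relation-agnostic baseline this finding describes (details assumed): score (h, r, t) from the entity embeddings alone, so every relation receives the same score.

```python
import numpy as np

# Relation-agnostic zero-shot baseline (details assumed): score a triple
# from the head and tail embeddings alone, e.g. by cosine similarity, so
# the queried relation plays no role at all.

def relation_agnostic_score(h_emb, t_emb):
    return float(h_emb @ t_emb / (np.linalg.norm(h_emb) * np.linalg.norm(t_emb)))

rng = np.random.default_rng(0)
h, t = rng.normal(size=8), rng.normal(size=8)
print(relation_agnostic_score(h, t))  # identical for every relation r
```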
arXiv Detail & Related papers (2021-02-05T21:04:31Z)
- xERTE: Explainable Reasoning on Temporal Knowledge Graphs for Forecasting Future Links [21.848948946837844]
This paper provides a link forecasting framework that reasons over query-relevant subgraphs of temporal KGs.
We propose a temporal relational attention mechanism and a novel reverse representation update scheme to guide the extraction of an enclosing subgraph.
Our approach provides human-understandable evidence explaining the forecast.
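A hedged sketch of the flavor of that expansion step (the paper's attention is learned; the relevance scores and time decay below are hand-set assumptions): keep the candidate edges whose time-decayed attention is highest.

```python
import numpy as np

# Hand-set illustration of time-decayed attention over candidate edges
# (the paper learns this attention): recent, relevant edges win the
# softmax and are kept for the expanding query subgraph.

def expand(candidates, query_time, decay=0.1, top_k=2):
    """candidates: list of (edge, relevance, timestamp) tuples."""
    logits = np.array([rel - decay * (query_time - ts)
                       for _, rel, ts in candidates])
    att = np.exp(logits - logits.max())
    att /= att.sum()
    keep = np.argsort(-att)[:top_k]
    return [(candidates[i][0], float(att[i])) for i in keep]

cands = [(("a", "met", "b"), 1.0, 2019),
         (("a", "met", "c"), 1.0, 2012),
         (("a", "saw", "d"), 0.2, 2020)]
print(expand(cands, query_time=2020))
```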
arXiv Detail & Related papers (2020-12-31T10:41:01Z)
- Explainable Artificial Intelligence: How Subsets of the Training Data Affect a Prediction [2.3204178451683264]
We propose a novel methodology which we call Shapley values for training data subset importance.
We show how the proposed explanations can be used to reveal biasedness in models and erroneous training data.
We argue that the explanations enable us to perceive more of the inner workings of the algorithms, and illustrate how models producing similar predictions can be based on very different parts of the training data.
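The standard way to estimate such values is Monte Carlo Shapley sampling: a subset's importance is its average marginal effect on the prediction when added to a random coalition of the other subsets. A minimal sketch (the paper gives the precise formulation):

```python
import numpy as np

# Monte Carlo Shapley estimation over training-data subsets (sketch; the
# paper gives the precise formulation): a subset's value is its average
# marginal effect on the prediction across random coalition orderings.

def mc_shapley(predict_fn, subsets, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    phi = {s: 0.0 for s in subsets}
    for _ in range(n_perm):
        coalition, prev = [], predict_fn([])
        for s in rng.permutation(subsets):
            coalition.append(s)
            cur = predict_fn(coalition)
            phi[s] += (cur - prev) / n_perm
            prev = cur
    return phi

# Toy model: the prediction is driven almost entirely by subset "B".
toy = lambda coalition: 0.9 if "B" in coalition else 0.1
print(mc_shapley(toy, ["A", "B", "C"]))  # phi["B"] ~ 0.8, others ~ 0.0
```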
arXiv Detail & Related papers (2020-12-07T12:15:47Z)