Unifying Post-hoc Explanations of Knowledge Graph Completions
- URL: http://arxiv.org/abs/2507.22951v1
- Date: Tue, 29 Jul 2025 13:31:48 GMT
- Title: Unifying Post-hoc Explanations of Knowledge Graph Completions
- Authors: Alessandro Lonardi, Samy Badreddine, Tarek R. Besold, Pablo Sanchez Martin
- Abstract summary: Post-hoc explainability for Knowledge Graph Completion (KGC) lacks formalization and consistent evaluations. This paper argues for a unified approach to post-hoc explainability in KGC.
- Score: 44.424583840470724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Post-hoc explainability for Knowledge Graph Completion (KGC) lacks formalization and consistent evaluations, hindering reproducibility and cross-study comparisons. This paper argues for a unified approach to post-hoc explainability in KGC. First, we propose a general framework to characterize post-hoc explanations via multi-objective optimization, balancing their effectiveness and conciseness. This unifies existing post-hoc explainability algorithms in KGC and the explanations they produce. Next, we suggest and empirically support improved evaluation protocols using popular metrics like Mean Reciprocal Rank and Hits@$k$. Finally, we stress the importance of interpretability as the ability of explanations to address queries meaningful to end-users. By unifying methods and refining evaluation standards, this work aims to make research in KGC explainability more reproducible and impactful.
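To make the proposed evaluation protocol concrete, the following is a minimal Python sketch, not the authors' implementation, of Mean Reciprocal Rank and Hits@$k$, together with one possible scalarization of the effectiveness/conciseness trade-off; the deletion-based effectiveness measure, the weight `lam`, and all function names are assumptions for illustration.
```python
from typing import List

def mean_reciprocal_rank(ranks: List[int]) -> float:
    """MRR over the ranks assigned to the ground-truth entities."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks: List[int], k: int) -> float:
    """Fraction of queries whose ground-truth entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def explanation_score(rank_before: int, rank_after: int,
                      num_triples: int, lam: float = 0.1) -> float:
    """Scalarized multi-objective score for a post-hoc explanation.

    Effectiveness: how much the target triple's reciprocal rank degrades
    when the explanation's triples are removed from the graph.
    Conciseness: penalize large explanations by their triple count.
    `lam` trades off the two objectives (an assumed value, for illustration).
    """
    effectiveness = 1.0 / rank_before - 1.0 / rank_after
    return effectiveness - lam * num_triples

# Example: ranks of the true entity over five test queries.
ranks = [1, 3, 2, 10, 1]
print(mean_reciprocal_rank(ranks))   # ~0.59
print(hits_at_k(ranks, k=3))         # 0.8
print(explanation_score(rank_before=1, rank_after=5, num_triples=2))  # 0.6
```
Under this deletion-based reading, a good explanation is one whose removal sharply degrades the target triple's rank while containing few triples.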
Related papers
- On the Consistency of GNN Explanations for Malware Detection [2.464148828287322]
Control Flow Graphs (CFGs) are critical for analyzing program execution and characterizing malware behavior. This study proposes a novel framework that dynamically constructs CFGs and embeds node features using a hybrid approach. A GNN-based classifier is then constructed to detect malicious behavior from the resulting graph representations.
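As a rough, self-contained sketch of the pipeline described above, with toy shapes and a single hand-rolled mean-aggregation layer rather than the study's actual architecture, a CFG with hybrid node features can be classified as follows:
```python
import torch
import torch.nn as nn

# Minimal sketch (assumed dimensions, not the paper's model): one round of
# mean-neighbor message passing over a CFG adjacency matrix, followed by
# graph-level mean pooling and a benign/malicious classifier head.
class TinyGNN(nn.Module):
    def __init__(self, d_in=16, d_hid=32):
        super().__init__()
        self.msg = nn.Linear(d_in, d_hid)
        self.head = nn.Linear(d_hid, 2)  # benign vs. malicious logits

    def forward(self, x, adj):
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = torch.relu(self.msg(adj @ x / deg))  # aggregate neighbor features
        return self.head(h.mean(0))              # pool nodes to graph logits

x = torch.rand(10, 16)                    # hybrid features of 10 basic blocks
adj = (torch.rand(10, 10) > 0.7).float()  # toy CFG adjacency
print(TinyGNN()(x, adj))
```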
arXiv Detail & Related papers (2025-04-22T23:25:12Z)
- KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion [18.497296711526268]
We present KGExplainer, a model-agnostic method that identifies connected subgraphs and distills an evaluator to assess them quantitatively.
Experiments on benchmark datasets demonstrate that KGExplainer achieves promising improvements and an optimal ratio of 83.3% in human evaluation.
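A hedged sketch of the connected-subgraph idea follows, using networkx and toy triples; KGExplainer's distilled evaluator, which scores such subgraphs quantitatively, is omitted.
```python
import networkx as nx

def connected_explanation(triples, head, tail, max_hops=2):
    """Candidate explanation: the subgraph of paths (at most max_hops long)
    connecting the head and tail of a predicted triple. Illustrative only."""
    g = nx.DiGraph()
    for h, r, t in triples:
        g.add_edge(h, t, relation=r)
    ug = g.to_undirected()
    kept = set()
    for path in nx.all_simple_paths(ug, head, tail, cutoff=max_hops):
        kept.update(zip(path, path[1:]))  # keep edges lying on short paths
    return [(h, d["relation"], t) for h, t, d in g.edges(data=True)
            if (h, t) in kept or (t, h) in kept]

kg = [("alice", "works_at", "acme"),
      ("bob", "works_at", "acme"),
      ("alice", "knows", "bob")]
print(connected_explanation(kg, "alice", "bob"))
```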
arXiv Detail & Related papers (2024-04-05T05:02:12Z)
- Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models [95.31941227776711]
We propose MPIKGC to compensate for the deficiency of contextualized knowledge and improve KGC by querying large language models (LLMs).
We conduct an extensive evaluation of our framework on four description-based KGC models and four datasets, covering both link prediction and triplet classification tasks.
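As an illustration of the LLM-querying idea, here is a hypothetical sketch in which `ask_llm` stands in for any chat-completion call and the prompt wording is an assumption:
```python
# Hypothetical helper: `ask_llm` stands in for any chat-completion API call.
# MPIKGC-style augmentation queries an LLM for entity context that
# description-based KGC models can consume as additional input text.
def expand_entity(ask_llm, entity: str) -> str:
    prompt = f"Describe the entity '{entity}' in one factual sentence."
    return ask_llm(prompt)

def augmented_triple_text(ask_llm, head: str, relation: str, tail: str) -> str:
    return (f"{head} ({expand_entity(ask_llm, head)}) "
            f"[{relation}] "
            f"{tail} ({expand_entity(ask_llm, tail)})")

fake_llm = lambda prompt: "a placeholder description"
print(augmented_triple_text(fake_llm, "Marie_Curie", "won", "Nobel_Prize"))
```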
arXiv Detail & Related papers (2024-03-04T12:16:15Z)
- Hierarchical Indexing for Retrieval-Augmented Opinion Summarization [60.5923941324953]
We propose a method for unsupervised abstractive opinion summarization that combines the attributability and scalability of extractive approaches with the coherence and fluency of Large Language Models (LLMs).
Our method, HIRO, learns an index structure that maps sentences to a path through a semantically organized discrete hierarchy.
At inference time, we populate the index and use it to identify and retrieve clusters of sentences containing popular opinions from input reviews.
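A toy sketch of the populate-and-retrieve step, in which a keyword rule stands in for HIRO's learned mapping of sentences into a semantic hierarchy:
```python
from collections import defaultdict

# The keyword-based path_of() is a hypothetical stand-in: HIRO learns the
# mapping from sentences to paths in a discrete semantic hierarchy.
def path_of(sentence: str) -> tuple:
    topics = {"battery": ("hardware", "battery"),
              "screen": ("hardware", "screen"),
              "support": ("service", "support")}
    for keyword, path in topics.items():
        if keyword in sentence.lower():
            return path
    return ("misc",)

def popular_clusters(reviews, top=2):
    index = defaultdict(list)
    for sentence in reviews:
        index[path_of(sentence)].append(sentence)  # populate the index
    # "Popular opinions" = the most heavily populated leaves of the hierarchy.
    return sorted(index.items(), key=lambda kv: -len(kv[1]))[:top]

reviews = ["Battery life is great", "The battery dies fast",
           "Screen is sharp", "Support was slow"]
for path, sentences in popular_clusters(reviews):
    print(path, len(sentences))
```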
arXiv Detail & Related papers (2024-03-01T10:38:07Z)
- Faithful Knowledge Graph Explanations for Commonsense Reasoning [7.242609314791262]
Fusion of language models (LMs) and knowledge graphs (KGs) is widely used in commonsense question answering.
Current methods often overlook path decoding faithfulness, leading to divergence between graph encoder outputs and model predictions.
We identify confounding effects and LM-KG misalignment as key factors causing spurious explanations.
arXiv Detail & Related papers (2023-10-07T20:29:45Z)
- The Generalizability of Explanations [0.0]
This work proposes a novel evaluation methodology from the perspective of generalizability.
We employ an Autoencoder to learn the distributions of the generated explanations and observe their learnability as well as the plausibility of the learned distributional features.
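A minimal PyTorch sketch of this idea, with assumed sizes and random stand-ins for real attribution vectors: fit an autoencoder to explanation vectors and read held-out reconstruction error as a rough proxy for learnability.
```python
import torch
import torch.nn as nn

# Illustrative sketch (assumed architecture, not the paper's exact setup).
ae = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
explanations = torch.rand(256, 64)  # stand-in for saliency-style attributions

for _ in range(200):
    recon = ae(explanations)
    loss = nn.functional.mse_loss(recon, explanations)
    opt.zero_grad(); loss.backward(); opt.step()

# Low reconstruction error on held-out explanations suggests they share
# learnable distributional structure; high error suggests the opposite.
held_out = torch.rand(32, 64)
print(nn.functional.mse_loss(ae(held_out), held_out).item())
```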
arXiv Detail & Related papers (2023-02-23T12:25:59Z)
- SNaC: Coherence Error Detection for Narrative Summarization [73.48220043216087]
We introduce SNaC, a narrative coherence evaluation framework rooted in fine-grained annotations for long summaries.
We develop a taxonomy of coherence errors in generated narrative summaries and collect span-level annotations for 6.6k sentences across 150 book and movie screenplay summaries.
Our work provides the first characterization of coherence errors generated by state-of-the-art summarization models and a protocol for eliciting coherence judgments from crowd annotators.
arXiv Detail & Related papers (2022-05-19T16:01:47Z)
- Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning [78.60415450507706]
We show that explanations of a Bayesian Autoencoder's (BAE's) predictions suffer from high correlation, resulting in misleading explanations.
To alleviate this, a "Coalitional BAE" is proposed, which is inspired by agent-based system theory.
Our experiments on publicly available condition monitoring datasets demonstrate the improved quality of explanations using the Coalitional BAE.
arXiv Detail & Related papers (2021-10-19T15:07:09Z)
- Explainable Recommendation via Interpretable Feature Mapping and Evaluation of Explainability [22.58823484394866]
We present a novel feature mapping approach that maps uninterpretable general features onto interpretable aspect features.
Experimental results demonstrate strong performance in both recommendation and explanation, eliminating the need for metadata.
arXiv Detail & Related papers (2020-07-12T23:49:12Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
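As a hedged illustration of the targeted variant, the greedy baseline below masks features toward a baseline until the predicted class becomes the target; the paper derives such sets via robustness analysis rather than this exact procedure, and all names here are assumptions.
```python
import numpy as np

def targeted_feature_set(predict, x, baseline, target, max_features=None):
    """Greedily find features that, when replaced by baseline values,
    move the model's predicted class to `target`. Illustrative only."""
    x = x.copy()
    chosen = []
    max_features = max_features or x.size
    while len(chosen) < max_features:
        if predict(x).argmax() == target:
            return chosen
        # Pick the single swap that most raises the target class's score.
        gains = []
        for i in range(x.size):
            if i in chosen:
                gains.append(-np.inf)
                continue
            trial = x.copy()
            trial[i] = baseline[i]
            gains.append(predict(trial)[target])
        best = int(np.argmax(gains))
        x[best] = baseline[best]
        chosen.append(best)
    return chosen

# Toy linear "model": class scores are W @ x.
W = np.array([[1.0, -1.0, 0.5], [-0.5, 2.0, -1.0]])
predict = lambda v: W @ v
x0 = np.array([2.0, 0.2, 1.0])
print(targeted_feature_set(predict, x0, baseline=np.zeros(3), target=1))
```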
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.