Community Detection on Model Explanation Graphs for Explainable AI
- URL: http://arxiv.org/abs/2510.27655v1
- Date: Fri, 31 Oct 2025 17:27:56 GMT
- Title: Community Detection on Model Explanation Graphs for Explainable AI
- Authors: Ehsan Moradi
- Abstract summary: Modules of Influence (MoI) constructs a model explanation graph from per-instance attributions. MoI applies community detection to find feature modules that jointly affect predictions, and quantifies how these modules relate to bias, redundancy, and causality patterns. We release stability and synergy metrics, a reference implementation, and evaluation protocols to benchmark module discovery in XAI.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature-attribution methods (e.g., SHAP, LIME) explain individual predictions but often miss higher-order structure: sets of features that act in concert. We propose Modules of Influence (MoI), a framework that (i) constructs a model explanation graph from per-instance attributions, (ii) applies community detection to find feature modules that jointly affect predictions, and (iii) quantifies how these modules relate to bias, redundancy, and causality patterns. Across synthetic and real datasets, MoI uncovers correlated feature groups, improves model debugging via module-level ablations, and localizes bias exposure to specific modules. We release stability and synergy metrics, a reference implementation, and evaluation protocols to benchmark module discovery in XAI.
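The pipeline the abstract describes — build a graph over features from per-instance attributions, then run community detection to recover feature modules — can be sketched as follows. This is an illustrative reconstruction, not the paper's reference implementation: the attribution matrix stands in for SHAP/LIME output, and absolute-correlation edges with greedy modularity communities (via networkx) are assumed stand-ins for whatever graph construction and detection algorithm MoI actually uses.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def explanation_graph(attributions, threshold=0.5):
    """Build a model explanation graph: nodes are features, edges connect
    features whose per-instance attributions are strongly correlated.
    `attributions` is an (n_instances, n_features) matrix, e.g. SHAP values."""
    n_features = attributions.shape[1]
    corr = np.corrcoef(attributions, rowvar=False)  # feature-by-feature correlation
    g = nx.Graph()
    g.add_nodes_from(range(n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            w = abs(corr[i, j])
            if w >= threshold:  # keep only strongly co-attributed pairs
                g.add_edge(i, j, weight=w)
    return g

def feature_modules(attributions, threshold=0.5):
    """Community detection on the explanation graph -> feature modules."""
    g = explanation_graph(attributions, threshold)
    return [sorted(c) for c in greedy_modularity_communities(g, weight="weight")]

# Toy example: features 0-1 share one latent driver, features 2-3 another,
# so they should be grouped into two modules.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
attr = np.column_stack([
    base[:, 0], base[:, 0] + 0.1 * rng.normal(size=200),
    base[:, 1], base[:, 1] + 0.1 * rng.normal(size=200),
])
modules = feature_modules(attr)
print(modules)
```

The correlation threshold and the choice of greedy modularity are both free parameters here; module-level ablations as in the paper would then zero out one recovered module at a time and measure the change in model predictions.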
Related papers
- A Causal Adjustment Module for Debiasing Scene Graph Generation [28.44150555570101]
We employ causal inference techniques to model the causality among skewed distributions. Our method enables the composition of zero-shot relationships, thereby enhancing the model's ability to recognize such relationships.
arXiv Detail & Related papers (2025-03-22T20:44:01Z) - Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling. Yet their widespread adoption poses challenges regarding data attribution and interpretability. We develop an influence functions framework to address these challenges.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - A Plug-and-Play Method for Rare Human-Object Interactions Detection by Bridging Domain Gap [50.079224604394]
We present a novel model-agnostic framework called Context-Enhanced Feature Alignment (CEFA).
CEFA consists of a feature alignment module and a context enhancement module.
Our method can serve as a plug-and-play module to improve the detection performance of HOI models on rare categories.
arXiv Detail & Related papers (2024-07-31T08:42:48Z) - Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models [42.17166746027585]
We introduce a bidirectional weighted graph-based framework to learn factorized attributes and their interrelations within complex data.
Specifically, we propose a $\beta$-VAE based module to extract factors as the initial nodes of the graph.
By integrating these complementary modules, our model successfully achieves fine-grained, practical and unsupervised disentanglement.
arXiv Detail & Related papers (2024-07-26T15:32:21Z) - Decomposing and Editing Predictions by Modeling Model Computation [75.37535202884463]
We introduce a task called component modeling.
The goal of component modeling is to decompose an ML model's prediction in terms of its components.
We present COAR, a scalable algorithm for estimating component attributions.
arXiv Detail & Related papers (2024-04-17T16:28:08Z) - Variable Importance Matching for Causal Inference [73.25504313552516]
We describe a general framework called Model-to-Match that achieves these goals.
Model-to-Match uses variable importance measurements to construct a distance metric.
We operationalize the Model-to-Match framework with LASSO.
arXiv Detail & Related papers (2023-02-23T00:43:03Z) - Meta-Causal Feature Learning for Out-of-Distribution Generalization [71.38239243414091]
This paper presents a balanced meta-causal learner (BMCL), which includes a balanced task generation module (BTG) and a meta-causal feature learning module (MCFL).
BMCL effectively identifies the class-invariant visual regions for classification and may serve as a general framework to improve the performance of state-of-the-art methods.
arXiv Detail & Related papers (2022-08-22T09:07:02Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z) - Evaluating Modules in Graph Contrastive Learning [29.03038320344791]
We propose a framework that decomposes graph contrastive learning models into four modules.
We conduct experiments on node and graph classification tasks.
We release our implementations and results as OpenGCL, a modularized toolkit.
arXiv Detail & Related papers (2021-06-15T14:14:23Z) - Semi-Modular Inference: enhanced learning in multi-modular models by tempering the influence of components [0.0]
We show existing Modular/Cut-model inference is coherent, and write down a new family of Semi-Modular Inference schemes.
We give a meta-learning criterion and estimation procedure to choose the inference scheme.
We illustrate our methods on two standard test cases from the literature and a motivating archaeological data set.
arXiv Detail & Related papers (2020-03-15T11:55:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.