VARSHAP: Addressing Global Dependency Problems in Explainable AI with Variance-Based Local Feature Attribution
- URL: http://arxiv.org/abs/2506.07229v1
- Date: Sun, 08 Jun 2025 17:26:47 GMT
- Title: VARSHAP: Addressing Global Dependency Problems in Explainable AI with Variance-Based Local Feature Attribution
- Authors: Mateusz Gajewski, Mikołaj Morzy, Adam Karczmarz, Piotr Sankowski
- Abstract summary: Existing feature attribution methods like SHAP often suffer from global dependence, failing to capture true local model behavior. This paper introduces VARSHAP, a novel model-agnostic local feature attribution method which uses the reduction of prediction variance as the key importance metric of features.
- Score: 3.545940115969205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing feature attribution methods like SHAP often suffer from global dependence, failing to capture true local model behavior. This paper introduces VARSHAP, a novel model-agnostic local feature attribution method which uses the reduction of prediction variance as the key importance metric of features. Building upon the Shapley value framework, VARSHAP satisfies the key Shapley axioms but, unlike SHAP, is resilient to global data distribution shifts. Experiments on synthetic and real-world datasets demonstrate that VARSHAP outperforms popular methods such as KernelSHAP and LIME, both quantitatively and qualitatively.
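To make the variance-reduction idea concrete, below is a minimal sketch of a permutation-sampled, Shapley-style attribution in which the payoff of a feature coalition is the drop in prediction variance over a sampled neighbourhood of the explained instance. This is an illustration of the general idea described in the abstract, not the authors' exact VARSHAP algorithm; all function and parameter names (varshap_sketch, local_neighbourhood, sigma, n_permutations) are assumptions introduced here for the example.

```python
# Illustrative sketch only (not the paper's implementation): Monte Carlo Shapley
# attribution where the coalition payoff is the reduction in prediction variance
# over a local neighbourhood of the explained instance x.
import numpy as np

def local_neighbourhood(x, n_samples=256, sigma=0.1, rng=None):
    """Sample perturbed copies of x from an isotropic Gaussian around it.
    Assumes roughly standardized features; sigma is a hypothetical choice."""
    rng = rng or np.random.default_rng(0)
    return x + sigma * rng.standard_normal((n_samples, x.shape[0]))

def coalition_variance(model, x, neighbourhood, coalition):
    """Prediction variance when the features in `coalition` are clamped to x's values."""
    idx = list(coalition)
    Z = neighbourhood.copy()
    Z[:, idx] = x[idx]               # clamp coalition features to the explained instance
    return np.var(model(Z))

def varshap_sketch(model, x, n_permutations=100, rng=None):
    """Permutation-sampling Shapley values with variance reduction as the payoff."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    neighbourhood = local_neighbourhood(x, rng=rng)
    phi = np.zeros(d)
    for _ in range(n_permutations):
        order = rng.permutation(d)
        fixed = set()
        v_prev = coalition_variance(model, x, neighbourhood, fixed)
        for j in order:
            fixed.add(j)
            v_new = coalition_variance(model, x, neighbourhood, fixed)
            phi[j] += v_prev - v_new  # marginal variance reduction of feature j
            v_prev = v_new
    return phi / n_permutations
```

With a fitted model exposing a prediction function, one could call, for example, `phi = varshap_sketch(lambda X: model.predict(X), x)`; larger scores mark features whose clamping removes more of the local prediction variance, and by the Shapley efficiency property the scores sum to the total neighbourhood variance.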
Related papers
- Linguistic Fuzzy Information Evolution with Random Leader Election Mechanism for Decision-Making Systems [58.67035332062508]
Linguistic fuzzy information evolution is crucial in understanding information exchange among agents.
Different agent weights may lead to different convergence results in the classic DeGroot model.
This paper proposes three new models of linguistic fuzzy information dynamics.
arXiv Detail & Related papers (2024-10-19T18:15:24Z)
- An Efficient Framework for Crediting Data Contributors of Diffusion Models [13.761241561734547]
We introduce a method to efficiently retrain and rerun inference for Shapley value estimation. We evaluate the utility of our method with three use cases: (i) image quality for a DDPM trained on a CIFAR dataset, (ii) demographic diversity for an LDM trained on CelebA-HQ, and (iii) aesthetic quality for a Stable Diffusion model LoRA-finetuned on Post-Impressionist artworks.
arXiv Detail & Related papers (2024-06-09T17:42:09Z)
- Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z)
- Feature-Distribution Perturbation and Calibration for Generalized Person ReID [47.84576229286398]
Person Re-identification (ReID) has advanced remarkably over the last decade, driven by the rapid development of deep learning for visual recognition.
We propose a Feature-Distribution Perturbation and Calibration (PECA) method to derive generic feature representations for person ReID.
arXiv Detail & Related papers (2022-05-23T11:06:12Z)
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
arXiv Detail & Related papers (2021-08-31T03:17:54Z)
- Data-driven advice for interpreting local and global model predictions in bioinformatics problems [17.685881417954782]
Conditional feature contributions (CFCs) provide local, case-by-case explanations of a prediction.
We compare the explanations computed by both methods on a set of 164 publicly available classification problems.
For random forests, we find extremely high similarities and correlations of both local and global SHAP values and CFC scores.
arXiv Detail & Related papers (2021-08-13T12:41:39Z)
- On Locality of Local Explanation Models [0.43012765978447565]
We consider the formulation of neighbourhood reference distributions that improve the local interpretability of Shapley values.
We observe that Neighbourhood Shapley values identify meaningful sparse feature relevance attributions that provide insight into local model behaviour.
arXiv Detail & Related papers (2021-06-24T16:20:38Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)