Explaining the Uncertain: Stochastic Shapley Values for Gaussian Process Models
- URL: http://arxiv.org/abs/2305.15167v1
- Date: Wed, 24 May 2023 13:59:03 GMT
- Title: Explaining the Uncertain: Stochastic Shapley Values for Gaussian Process Models
- Authors: Siu Lun Chau and Krikamol Muandet and Dino Sejdinovic
- Abstract summary: We present a novel approach for explaining Gaussian processes (GPs) that can utilize the full analytical covariance structure in GPs.
Our method is based on the popular solution concept of Shapley values extended to stochastic cooperative games, resulting in explanations that are random variables.
The GP explanations generated using our approach satisfy similar axioms to standard Shapley values and possess a tractable covariance function across features and data observations.
- Score: 15.715453687736028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel approach for explaining Gaussian processes (GPs) that can
utilize the full analytical covariance structure present in GPs. Our method is
based on the popular solution concept of Shapley values extended to stochastic
cooperative games, resulting in explanations that are random variables. The GP
explanations generated using our approach satisfy similar favorable axioms to
standard Shapley values and possess a tractable covariance function across
features and data observations. This covariance allows for quantifying
explanation uncertainties and studying the statistical dependencies between
explanations. We further extend our framework to the problem of predictive
explanation, and propose a Shapley prior over the explanation function to
predict Shapley values for new data based on previously computed ones. Our
extensive illustrations demonstrate the effectiveness of the proposed approach.
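Because the Shapley value is a linear functional of the underlying game, a Gaussian game (such as one induced by a GP over coalition values) yields Gaussian Shapley values: the explanation mean and covariance follow from the game's mean and covariance by the same linear map. The sketch below illustrates this mechanism only; it is not the paper's implementation, and the game mean `v_mean` and covariance `v_cov` are synthetic placeholders standing in for quantities a GP would supply.

```python
import numpy as np
from itertools import combinations
from math import factorial

# Exact Shapley value of feature i for a game v over n players:
#   phi_i = sum over S not containing i of |S|!(n-|S|-1)!/n! * (v(S+{i}) - v(S))
# phi is LINEAR in v, so a Gaussian game gives Gaussian explanations:
#   phi_mean = W @ v_mean,  phi_cov = W @ v_cov @ W.T
def shapley_weight_matrix(n):
    subsets = [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]
    idx = {s: j for j, s in enumerate(subsets)}
    W = np.zeros((n, len(subsets)))
    for i in range(n):
        for s in subsets:
            if i in s:
                continue
            w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            W[i, idx[s | {i}]] += w   # coalition with feature i
            W[i, idx[s]] -= w         # coalition without feature i
    return W, subsets

n = 3
W, subsets = shapley_weight_matrix(n)
rng = np.random.default_rng(0)
v_mean = rng.normal(size=len(subsets))          # placeholder game means
A = rng.normal(size=(len(subsets), len(subsets)))
v_cov = A @ A.T                                  # placeholder game covariance (PSD)
phi_mean = W @ v_mean                            # mean explanations
phi_cov = W @ v_cov @ W.T                        # explanation uncertainty/dependence
# Efficiency axiom: Shapley values sum to v(full coalition) - v(empty coalition)
full, empty = subsets.index(frozenset(range(n))), subsets.index(frozenset())
assert np.isclose(phi_mean.sum(), v_mean[full] - v_mean[empty])
```

The off-diagonal entries of `phi_cov` are what the abstract refers to as statistical dependencies between explanations.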
Related papers
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective.
The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning.
The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z)
- Improving the Sampling Strategy in KernelSHAP [0.8057006406834466]
The KernelSHAP framework enables us to approximate the Shapley values using a sampled subset of weighted conditional expectations.
We propose three main novel contributions: a stabilizing technique that reduces the variance of the weights in the current state-of-the-art strategy, a novel weighting scheme that corrects the Shapley kernel weights based on the sampled subsets, and a straightforward strategy that includes the important subsets and integrates them with the corrected Shapley kernel weights.
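For context, the standard Shapley kernel that KernelSHAP-style samplers draw from assigns a coalition S of size s (with 0 < s < n) the weight (n - 1) / (C(n, s) * s * (n - s)); aggregated over all C(n, s) coalitions of a given size, the probability of drawing size s is proportional to (n - 1) / (s * (n - s)). The sketch below shows this baseline sampling distribution only, not the corrected weights proposed in the paper.

```python
import numpy as np

# Baseline Shapley-kernel coalition sampling (sizes 1..n-1):
# p(size s) is proportional to (n - 1) / (s * (n - s)), so very small and
# very large coalitions are drawn most often.
def shapley_kernel_size_probs(n):
    sizes = np.arange(1, n)
    p = (n - 1) / (sizes * (n - sizes))
    return sizes, p / p.sum()

def sample_coalitions(n, m, rng):
    sizes, probs = shapley_kernel_size_probs(n)
    drawn = rng.choice(sizes, size=m, p=probs)
    return [frozenset(rng.choice(n, size=s, replace=False)) for s in drawn]

rng = np.random.default_rng(0)
coalitions = sample_coalitions(n=8, m=10, rng=rng)
```

Note the size distribution is symmetric: a coalition of size s is as likely as one of size n - s.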
arXiv Detail & Related papers (2024-10-07T10:02:31Z)
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
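As a concrete illustration (not taken from the paper), one SPPM step replaces the gradient step with a proximal step on a single sampled component, x_{k+1} = argmin_z f_i(z) + ||z - x_k||^2 / (2*gamma). For a least-squares component f_i(z) = 0.5 * (a_i . z - b_i)^2 this step has a closed form, which is what makes the method implementable here without any inner solver:

```python
import numpy as np

# One stochastic proximal point step for f_i(z) = 0.5 * (a @ z - b)**2:
# setting the gradient of f_i(z) + ||z - x||^2/(2*gamma) to zero gives
#   z = x - gamma * (a @ x - b) / (1 + gamma * ||a||^2) * a
def sppm_step(x, a, b, gamma):
    return x - gamma * (a @ x - b) / (1.0 + gamma * (a @ a)) * a

rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
bvec = A @ x_star                      # consistent linear system
x = np.zeros(d)
for _ in range(2000):
    i = rng.integers(n)                # sample one component f_i
    x = sppm_step(x, A[i], bvec[i], gamma=1.0)
```

Unlike a gradient step, the step size gamma appears inside a denominator, which is one source of the robustness to step-size tuning mentioned above.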
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
- Counting Phases and Faces Using Bayesian Thermodynamic Integration [77.34726150561087]
We introduce a new approach to reconstruction of the thermodynamic functions and phase boundaries in two-parametric statistical mechanics systems.
We use the proposed approach to accurately reconstruct the partition functions and phase diagrams of the Ising model and the exactly solvable non-equilibrium TASEP.
arXiv Detail & Related papers (2022-05-18T17:11:23Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Recursive Monte Carlo and Variational Inference with Auxiliary Variables [64.25762042361839]
Recursive auxiliary-variable inference (RAVI) is a new framework for exploiting flexible proposals.
RAVI generalizes and unifies several existing methods for inference with expressive variational families.
We illustrate RAVI's design framework and theorems by using them to analyze and improve upon Salimans et al.'s Markov Chain Variational Inference.
arXiv Detail & Related papers (2022-03-05T23:52:40Z)
- An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data [5.8010446129208155]
An imprecise SHAP is proposed for cases when the class probability distributions are imprecise and represented by sets of distributions.
The first idea behind the imprecise SHAP is a new approach for computing the marginal contribution of a feature.
The second idea is an attempt to consider a general approach to calculating and reducing interval-valued Shapley values.
arXiv Detail & Related papers (2021-06-16T20:30:26Z)
- Explaining predictive models using Shapley values and non-parametric vine copulas [2.6774008509840996]
We propose two new approaches for modelling the dependence between the features.
The performance of the proposed methods is evaluated on simulated data sets and a real data set.
Experiments demonstrate that the vine copula approaches give more accurate approximations to the true Shapley values than their competitors.
arXiv Detail & Related papers (2021-02-12T09:43:28Z)
- Explaining predictive models with mixed features using Shapley values and conditional inference trees [1.8065361710947976]
Shapley values stand out as a sound method to explain predictions from any type of machine learning model.
We propose a method to explain mixed dependent features by modeling the dependence structure of the features using conditional inference trees.
arXiv Detail & Related papers (2020-07-02T11:25:45Z)
- Rigorous Explanation of Inference on Probabilistic Graphical Models [17.96228289921288]
We propose GraphShapley to integrate the decomposability of Shapley values, the computational structure of MRFs, and the iterative nature of BP inference.
On nine graphs, we demonstrate that GraphShapley provides sensible and practical explanations.
arXiv Detail & Related papers (2020-04-21T14:57:12Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
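For intuition, the paper's quadrature Fourier features refine the standard random Fourier feature construction, in which an RBF kernel is approximated by an inner product of cosine features with frequencies drawn from the kernel's spectral density. The toy sketch below shows only the random variant; the SLEIPNIR-style approach would replace the random spectral samples with deterministic quadrature nodes and weights.

```python
import numpy as np

# Random Fourier features for the RBF kernel k(x, y) = exp(-||x - y||^2 / 2):
# z(x)^T z(y) approximates k(x, y), with error shrinking as n_features grows.
def random_fourier_features(X, n_features, rng):
    d = X.shape[1]
    Omega = rng.normal(size=(d, n_features))         # samples from the spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)    # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Z = random_fourier_features(X, n_features=5000, rng=rng)
K_approx = Z @ Z.T                                   # approximate Gram matrix
K_true = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

The random construction only gives probabilistic error bounds; the deterministic quadrature construction is what enables the non-asymptotic, exponentially decaying bounds claimed above.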
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.