Explaining a probabilistic prediction on the simplex with Shapley compositions
- URL: http://arxiv.org/abs/2408.01382v2
- Date: Wed, 12 Feb 2025 10:18:06 GMT
- Title: Explaining a probabilistic prediction on the simplex with Shapley compositions
- Authors: Paul-Gauthier Noé, Miquel Perelló-Nieto, Jean-François Bonastre, Peter Flach
- Abstract summary: We introduce Shapley compositions as a well-founded way to properly explain a multiclass probabilistic prediction. We prove that the Shapley composition is the unique quantity satisfying linearity, symmetry and efficiency on the Aitchison simplex.
- Score: 11.009289275721283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Originating in game theory, Shapley values are widely used for explaining a machine learning model's prediction by quantifying the contribution of each feature's value to the prediction. This requires a scalar prediction as in binary classification, whereas a multiclass probabilistic prediction is a discrete probability distribution, living on a multidimensional simplex. In such a multiclass setting the Shapley values are typically computed separately on each class in a one-vs-rest manner, ignoring the compositional nature of the output distribution. In this paper, we introduce Shapley compositions as a well-founded way to properly explain a multiclass probabilistic prediction, using the Aitchison geometry from compositional data analysis. We prove that the Shapley composition is the unique quantity satisfying linearity, symmetry and efficiency on the Aitchison simplex, extending the corresponding axiomatic properties of the standard Shapley value. We demonstrate this proper multiclass treatment in a range of scenarios.
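The construction described in the abstract can be sketched in code. The following is a minimal, illustrative implementation (not the authors' code): Shapley values are computed coordinate-wise in centered log-ratio (clr) space, where the Aitchison perturbation plays the role of addition on the simplex. All function names and the toy value function below are assumptions made for illustration.

```python
import math
from itertools import combinations

def clr(p):
    """Centered log-ratio transform: maps a composition to a zero-sum Euclidean vector."""
    g = sum(math.log(x) for x in p) / len(p)  # log of the geometric mean
    return [math.log(x) - g for x in p]

def clr_inv(z):
    """Inverse clr: exponentiate, then renormalise (closure) back onto the simplex."""
    e = [math.exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]

def perturb(p, q):
    """Aitchison perturbation: the simplex analogue of vector addition."""
    e = [a * b for a, b in zip(p, q)]
    s = sum(e)
    return [v / s for v in e]

def shapley_compositions(players, v):
    """Exact Shapley values computed coordinate-wise in clr space.

    v maps a frozenset of players to a probability vector; the result is one
    'Shapley composition' (a point on the simplex) per player.
    """
    n = len(players)
    dim = len(v(frozenset()))
    phis = {}
    for i in players:
        others = [p for p in players if p != i]
        acc = [0.0] * dim
        for k in range(n):
            for S in combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                gain = zip(clr(v(frozenset(S) | {i})), clr(v(frozenset(S))))
                acc = [a + w * (x - y) for a, (x, y) in zip(acc, gain)]
        phis[i] = clr_inv(acc)
    return phis

# Toy two-feature, three-class example with a uniform baseline prediction.
preds = {
    frozenset(): [1 / 3, 1 / 3, 1 / 3],
    frozenset({1}): [0.5, 0.3, 0.2],
    frozenset({2}): [0.2, 0.5, 0.3],
    frozenset({1, 2}): [0.6, 0.3, 0.1],
}
phi = shapley_compositions([1, 2], lambda S: preds[S])
```

Because clr is linear, the standard efficiency axiom carries over: perturbing the per-feature Shapley compositions together recovers the full prediction relative to the baseline (here the uniform distribution, whose clr image is the zero vector).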
Related papers
- Shapley Values: Paired-Sampling Approximations [0.0]
Shapley values have become a very popular tool to explain machine learning predictions. Based on Shapley's fairness axioms, every input (feature component) gets a credit for how it contributes to an output (prediction). The only limitation in computing the Shapley values (credits) for many different predictions is computational.
arXiv Detail & Related papers (2025-08-18T14:23:34Z)
- Improving the Sampling Strategy in KernelSHAP [0.8057006406834466]
The KernelSHAP framework enables us to approximate the Shapley values using a sampled subset of weighted conditional expectations.
We propose three main novel contributions: a stabilizing technique to reduce the variance of the weights in the current state-of-the-art strategy, a novel weighting scheme that corrects the Shapley kernel weights based on sampled subsets, and a straightforward strategy that includes the important subsets and integrates them with the corrected Shapley kernel weights.
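For context, the Shapley kernel that these sampling strategies approximate has a simple closed form. A minimal illustrative sketch (not the paper's code), assuming coalition sizes 0 and m are handled as constraints as in standard KernelSHAP:

```python
import math

def shapley_kernel_weight(m, s):
    """KernelSHAP weight for a coalition of size s out of m features.
    Sizes 0 and m get infinite weight and are treated as constraints,
    so only proper subsets 0 < s < m are weighted here."""
    return (m - 1) / (math.comb(m, s) * s * (m - s))

def coalition_size_probabilities(m):
    """Probability of drawing each coalition size when subsets are sampled
    in proportion to their Shapley kernel weight."""
    mass = {s: math.comb(m, s) * shapley_kernel_weight(m, s) for s in range(1, m)}
    z = sum(mass.values())
    return {s: w / z for s, w in mass.items()}
```

The induced size distribution is symmetric and puts most mass on very small and very large coalitions, which is what makes careful subset selection worthwhile.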
arXiv Detail & Related papers (2024-10-07T10:02:31Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories each contain only a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Efficient Shapley Values Estimation by Amortization for Text Classification [66.7725354593271]
We develop an amortized model that directly predicts each input feature's Shapley Value without additional model evaluations.
Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup.
arXiv Detail & Related papers (2023-05-31T16:19:13Z)
- Probabilistic Conformal Prediction Using Conditional Random Samples [73.26753677005331]
PCP is a predictive inference algorithm that estimates a target variable by a discontinuous predictive set.
It is efficient and compatible with either explicit or implicit conditional generative models.
arXiv Detail & Related papers (2022-06-14T03:58:03Z)
- Exact Shapley Values for Local and Model-True Explanations of Decision Tree Ensembles [0.0]
We consider the application of Shapley values for explaining decision tree ensembles.
We present a novel approach to Shapley value-based feature attribution that can be applied to random forests and boosted decision trees.
arXiv Detail & Related papers (2021-12-16T20:16:02Z)
- Is Shapley Explanation for a model unique? [0.0]
We explore the relationship between the distribution of a feature and its Shapley value.
Our assessment is that the Shapley value for a particular feature depends not only on its expected mean but also on other moments, such as variance.
It varies with the model outcome (probability, log-odds, or a binary decision such as accept vs. reject) and hence with the model application.
arXiv Detail & Related papers (2021-11-23T15:31:46Z)
- groupShapley: Efficient prediction explanation with Shapley values for feature groups [2.320417845168326]
Shapley values have established themselves as one of the most appropriate and theoretically sound frameworks for explaining predictions from machine learning models.
The main drawback of Shapley values is that their computational complexity grows exponentially in the number of input features.
The present paper introduces groupShapley: a conceptually simple approach for dealing with the aforementioned bottlenecks.
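The idea can be sketched by running the exact Shapley formula with feature groups as the players, so the cost scales with the number of groups rather than the number of features. A minimal illustrative sketch (the names and toy value function are assumptions, not the paper's implementation):

```python
import math
from itertools import combinations

def group_shapley(groups, value):
    """Exact Shapley values where each player is a *group* of features.

    value maps a frozenset of group names to a scalar prediction; the cost is
    2^(number of groups) evaluations instead of 2^(number of features).
    """
    n = len(groups)
    phi = {}
    for g in groups:
        rest = [h for h in groups if h != g]
        total = 0.0
        for k in range(n):
            for S in combinations(rest, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                total += w * (value(frozenset(S) | {g}) - value(frozenset(S)))
        phi[g] = total
    return phi

# Toy value function: a main effect per group plus an interaction of 1
# when both groups are present (numbers are purely illustrative).
effects = {
    frozenset(): 0.0,
    frozenset({"demographics"}): 2.0,
    frozenset({"geography"}): 3.0,
    frozenset({"demographics", "geography"}): 6.0,
}
phi = group_shapley(["demographics", "geography"], lambda S: effects[S])
```

The interaction is split evenly between the two groups, and the group-level values still sum to the full prediction (efficiency).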
arXiv Detail & Related papers (2021-06-23T08:16:14Z)
- An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data [5.8010446129208155]
An imprecise SHAP is proposed for cases when the class probability distributions are imprecise and represented by sets of distributions.
The first idea behind the imprecise SHAP is a new approach for computing the marginal contribution of a feature.
The second idea is an attempt to consider a general approach to calculating and reducing interval-valued Shapley values.
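A naive version of this can be sketched with interval arithmetic: when the value function is only known up to lower and upper bounds, the worst- and best-case marginal contributions bound each Shapley value. This is a deliberate simplification of the paper's approach (which reduces the intervals further); all names are illustrative.

```python
import math
from itertools import combinations

def interval_shapley(players, v_lo, v_hi):
    """Naive interval-valued Shapley values.

    v_lo and v_hi give lower/upper bounds on the value of each coalition;
    interval arithmetic propagates them through the Shapley formula.
    """
    n = len(players)
    out = {}
    for i in players:
        rest = [p for p in players if p != i]
        lo = hi = 0.0
        for k in range(n):
            for tup in combinations(rest, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                S = frozenset(tup)
                Si = S | {i}
                lo += w * (v_lo(Si) - v_hi(S))  # worst-case marginal contribution
                hi += w * (v_hi(Si) - v_lo(S))  # best-case marginal contribution
        out[i] = (lo, hi)
    return out
```

When the bounds coincide (a precise value function), each interval collapses to the ordinary Shapley value, which is a useful sanity check.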
arXiv Detail & Related papers (2021-06-16T20:30:26Z)
- Video Prediction via Example Guidance [156.08546987158616]
In video prediction tasks, one major challenge is to capture the multi-modal nature of future contents and dynamics.
In this work, we propose a simple yet effective framework that can efficiently predict plausible future states.
arXiv Detail & Related papers (2020-07-03T14:57:24Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
- CatBoostLSS -- An extension of CatBoost to probabilistic forecasting [91.3755431537592]
We propose a new framework that predicts the entire conditional distribution of a univariate response variable.
CatBoostLSS models all moments of a parametric distribution instead of the conditional mean only.
We present both a simulation study and real-world examples that demonstrate the benefits of our approach.
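The principle of fitting all distribution parameters rather than only the conditional mean can be sketched via the gradients a distributional booster would fit trees against. For a Gaussian response parameterized by (mu, log sigma), these take the following form (an illustrative sketch, not CatBoostLSS's internals):

```python
import math

def gaussian_nll_grads(y, mu, log_sigma):
    """Per-sample gradients of the Gaussian negative log-likelihood with
    respect to the distribution parameters (mu, log_sigma). A distributional
    booster fits one tree ensemble per parameter against such gradients."""
    var = math.exp(2.0 * log_sigma)
    d_mu = (mu - y) / var
    d_log_sigma = 1.0 - (y - mu) ** 2 / var
    return d_mu, d_log_sigma
```

Parameterizing the scale as log sigma keeps sigma positive without constraints, which is a common design choice in distributional boosting.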
arXiv Detail & Related papers (2020-01-04T15:42:44Z)
- From interpretability to inference: an estimation framework for universal approximators [0.0]
We present a novel framework for estimation and inference with the broad class of universal approximators. Inference relies on analyzing the bias and variance properties of individual Shapley components.
arXiv Detail & Related papers (2019-03-11T10:37:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.