Exactly Computing do-Shapley Values
- URL: http://arxiv.org/abs/2602.07203v1
- Date: Fri, 06 Feb 2026 21:24:56 GMT
- Title: Exactly Computing do-Shapley Values
- Authors: R. Teal Witter, Álvaro Parafita, Tomas Garriga, Maximilian Muschalik, Fabian Fumagalli, Axel Brando, Lucas Rosenblatt
- Abstract summary: We show how to compute do-Shapley values in time linear in the number of irreducible sets $r$. We also prove that non-parametric identifiability of do-Shapley values requires only the identification of interventional effects for the $d$ singleton coalitions.
- Score: 16.29328788692574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Structural Causal Models (SCM) are a powerful framework for describing complicated dynamics across the natural sciences. A particularly elegant way of interpreting SCMs is do-Shapley, a game-theoretic method of quantifying the average effect of $d$ variables across exponentially many interventions. Like Shapley values, computing do-Shapley values generally requires evaluating exponentially many terms. The foundation of our work is a reformulation of do-Shapley values in terms of the irreducible sets of the underlying SCM. Leveraging this insight, we can exactly compute do-Shapley values in time linear in the number of irreducible sets $r$, which itself can range from $d$ to $2^d$ depending on the graph structure of the SCM. Since $r$ is unknown a priori, we complement the exact algorithm with an estimator that, like general Shapley value estimators, can be run with any query budget. As the query budget approaches $r$, our estimators can produce more accurate estimates than prior methods by several orders of magnitude, and, when the budget reaches $r$, return the Shapley values up to machine precision. Beyond computational speed, we also reduce the identification burden: we prove that non-parametric identifiability of do-Shapley values requires only the identification of interventional effects for the $d$ singleton coalitions, rather than all classes.
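The computational bottleneck the abstract refers to is concrete: the classical Shapley formula averages a player's marginal contribution over all coalitions, which costs $2^d$ value-function evaluations. A minimal sketch of this exponential baseline, using a hypothetical additive toy game rather than the paper's do-Shapley setting:

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, value):
    """Exact Shapley values by enumerating all coalitions.

    `value` maps a frozenset of players to a real number.
    Requires O(2^d) evaluations of `value` -- the cost the
    paper's irreducible-set reformulation avoids.
    """
    d = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(d):
            # Shapley weight for coalitions of size k not containing i
            weight = factorial(k) * factorial(d - k - 1) / factorial(d)
            for S in combinations(others, k):
                S = frozenset(S)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy additive game: a coalition's value is the sum of its members'
# weights, so each player's Shapley value equals its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(weights[p] for p in S)
print({p: round(x, 6) for p, x in exact_shapley(list(weights), v).items()})
# → {'a': 1.0, 'b': 2.0, 'c': 3.0}
```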
Related papers
- Shapley-Inspired Feature Weighting in $k$-means with No Additional Hyperparameters [2.3940819037450987]
Clustering algorithms often assume all features contribute equally to the data structure. We propose SHARK (Shapley Reweighted $k$-means), a feature-weighted clustering algorithm motivated by the use of Shapley values. Experiments on synthetic and real-world data sets show that SHARK consistently matches or outperforms existing methods.
arXiv Detail & Related papers (2025-08-11T13:07:21Z) - FW-Shapley: Real-time Estimation of Weighted Shapley Values [21.562508939780532]
We present Fast Weighted Shapley, an amortized framework for efficiently computing weighted Shapley values. We also show that our estimator's training procedure is theoretically valid even though we do not use ground-truth weighted Shapley values during training. For data valuation, our method is much faster (14 times) while being comparable to the state-of-the-art KNN Shapley.
arXiv Detail & Related papers (2025-03-09T13:13:14Z) - Improving the Weighting Strategy in KernelSHAP [0.8057006406834466]
In Explainable AI (XAI), Shapley values are a popular framework for explaining predictions made by complex machine learning models. We propose a novel modification of KernelSHAP which replaces the deterministic weights with stochastically sampled ones to reduce the variance of the resulting Shapley value approximations. Our methods can reduce the required number of contribution function evaluations by $5\%$ to $50\%$ while preserving the same accuracy of the approximated Shapley values.
arXiv Detail & Related papers (2024-10-07T10:02:31Z) - Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
arXiv Detail & Related papers (2023-11-02T06:09:24Z) - Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs [11.684428415506968]
In a $d$-player coalition game, calculating a Shapley value requires the evaluation of $d!$ or $2^d$ marginal contribution values.
A common remedy is to take a random sample of the permutations to surrogate for the complete list of permutations.
We find an advanced sampling scheme can be designed to yield much more accurate estimation of the Shapley value.
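The "common remedy" described above — sampling permutations instead of enumerating all $d!$ of them — can be sketched as follows. This is plain uniform permutation sampling, not the paper's order-of-addition design, and the additive toy game is hypothetical:

```python
import random

def permutation_shapley(players, value, n_perms=500, seed=0):
    """Monte Carlo Shapley estimate from random permutations.

    For each sampled permutation, each player's marginal
    contribution to the set of players preceding it is an
    unbiased sample of that player's Shapley value.
    """
    rng = random.Random(seed)
    players = list(players)
    phi = {p: 0.0 for p in players}
    for _ in range(n_perms):
        perm = players[:]
        rng.shuffle(perm)
        S = frozenset()
        prev = value(S)
        for p in perm:
            S = S | {p}
            cur = value(S)
            phi[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: total / n_perms for p, total in phi.items()}

# Additive toy game: the estimate coincides with the exact values,
# since every marginal contribution equals the player's own weight.
w = {"a": 1.0, "b": 2.0, "c": 3.0}
print(permutation_shapley(list(w), lambda S: sum(w[p] for p in S)))
# → {'a': 1.0, 'b': 2.0, 'c': 3.0}
```

For non-additive games the estimate only converges at the usual Monte Carlo rate, which is what motivates the more careful sampling designs above.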
arXiv Detail & Related papers (2023-09-16T08:28:15Z) - DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation [23.646508094051768]
We consider the dataset valuation problem, that is, the problem of quantifying the incremental gain of aggregating an individual dataset with others, relative to a predefined utility.
The Shapley value is a natural tool to perform dataset valuation due to its formal axiomatic justification.
We propose a novel approximation, referred to as discrete uniform Shapley, which is expressed as an expectation under a discrete uniform distribution.
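The expectation-under-a-discrete-uniform idea can be illustrated with a generic estimator (a hypothetical sketch, not the paper's DU-Shapley proxy) built on the identity $\phi_i = \frac{1}{d}\sum_{k=0}^{d-1}\mathbb{E}\,[v(S\cup\{i\}) - v(S)]$, where $S$ is a uniform coalition of size $k$ not containing $i$: draw the coalition size from a discrete uniform distribution, then a uniform coalition of that size.

```python
import random

def size_uniform_shapley(players, value, n_samples=500, seed=0):
    """Unbiased Shapley estimate via size-stratified sampling:
    draw a coalition size k uniformly from {0, ..., d-1}, then a
    uniform random coalition of that size excluding player i."""
    rng = random.Random(seed)
    players = list(players)
    d = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for _ in range(n_samples):
            k = rng.randrange(d)              # uniform coalition size
            S = frozenset(rng.sample(others, k))
            total += value(S | {p}) - value(S)
        phi[p] = total / n_samples
    return phi
```

Because all coalition sizes carry equal Shapley weight in aggregate, this sampler needs no reweighting, which is the appeal of discrete-uniform formulations.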
arXiv Detail & Related papers (2023-06-03T10:22:50Z) - Efficient Shapley Values Estimation by Amortization for Text Classification [66.7725354593271]
We develop an amortized model that directly predicts each input feature's Shapley Value without additional model evaluations.
Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup.
arXiv Detail & Related papers (2023-05-31T16:19:13Z) - Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes [80.89852729380425]
We propose the first computationally efficient algorithm that achieves the nearly minimax optimal regret $\tilde{O}(d\sqrt{H^3K})$.
Our work provides a complete answer to optimal RL with linear MDPs, and the developed algorithm and theoretical tools may be of independent interest.
arXiv Detail & Related papers (2022-12-12T18:58:59Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - $\gamma$-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator [95.71091446753414]
We propose to use a nearest-neighbor-based $\gamma$-divergence estimator as a data discrepancy measure.
Our method achieves significantly higher robustness than existing discrepancy measures.
arXiv Detail & Related papers (2020-06-13T06:09:27Z) - Towards Efficient Data Valuation Based on the Shapley Value [65.4167993220998]
We study the problem of data valuation by utilizing the Shapley value.
The Shapley value defines a unique payoff scheme that satisfies many desiderata for the notion of data value.
We propose a repertoire of efficient algorithms for approximating the Shapley value.
arXiv Detail & Related papers (2019-02-27T00:22:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.