Leveraging Black-box Models to Assess Feature Importance in Unconditional Distribution
- URL: http://arxiv.org/abs/2412.05759v1
- Date: Sat, 07 Dec 2024 23:00:21 GMT
- Title: Leveraging Black-box Models to Assess Feature Importance in Unconditional Distribution
- Authors: Jing Zhou, Chunlin Li
- Abstract summary: We develop an approximation method to compute the feature importance curves relevant to the unconditional distribution of outcomes. Our approximation method produces sparse and faithful results, and is computationally efficient.
- Score: 12.998592834144127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding how changes in explanatory features affect the unconditional distribution of the outcome is important in many applications. However, existing black-box predictive models are not readily suited for analyzing such questions. In this work, we develop an approximation method to compute the feature importance curves relevant to the unconditional distribution of outcomes, while leveraging the power of pre-trained black-box predictive models. The feature importance curves measure the changes across quantiles of outcome distribution given an external impact of change in the explanatory features. Through extensive numerical experiments and real data examples, we demonstrate that our approximation method produces sparse and faithful results, and is computationally efficient.
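The abstract describes curves that measure how each quantile of the outcome distribution moves when an explanatory feature receives an external shift, evaluated through a pre-trained black-box predictor. The following is a minimal plug-in sketch of that idea, not the authors' actual estimator: the `black_box` function, the shift size `delta`, and the quantile grid are all illustrative assumptions.

```python
import numpy as np

# Hypothetical pre-trained black-box predictor (stands in for any fitted model).
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def quantile_shift_curve(model, X, feature, delta, taus):
    """Change in each outcome quantile when `feature` is shifted by `delta`.

    Compares quantiles of the model's predicted outcomes before and after
    an external additive shift to one explanatory feature.
    """
    base = np.quantile(model(X), taus)       # baseline outcome quantiles
    X_shift = X.copy()
    X_shift[:, feature] += delta             # external impact on the feature
    shifted = np.quantile(model(X_shift), taus)
    return shifted - base                    # the importance curve over taus

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
taus = np.linspace(0.05, 0.95, 19)
curve = quantile_shift_curve(black_box, X, feature=1, delta=0.1, taus=taus)
```

The resulting `curve` is a function of the quantile level, so it can reveal effects concentrated in the tails that a single summary statistic would average away.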
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address these challenges by developing an influence-functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Partially factorized variational inference for high-dimensional mixed models [0.0]
Variational inference is a popular way to perform such computations, especially in the Bayesian context.
We show that standard mean-field variational inference dramatically underestimates posterior uncertainty in high-dimensions.
We then show how appropriately relaxing the mean-field assumption leads to methods whose uncertainty quantification does not deteriorate in high-dimensions.
arXiv Detail & Related papers (2023-12-20T16:12:37Z) - Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
arXiv Detail & Related papers (2023-10-19T15:51:23Z) - On the Properties and Estimation of Pointwise Mutual Information Profiles [49.877314063833296]
The pointwise mutual information profile, or simply profile, is the distribution of pointwise mutual information for a given pair of random variables.
We introduce a novel family of distributions, Bend and Mix Models, for which the profile can be accurately estimated using Monte Carlo methods.
arXiv Detail & Related papers (2023-10-16T10:02:24Z) - Invariant Probabilistic Prediction [45.90606906307022]
We show that arbitrary distribution shifts do not, in general, admit invariant and robust probabilistic predictions.
We propose a method to yield invariant probabilistic predictions, called IPP, and study the consistency of the underlying parameters.
arXiv Detail & Related papers (2023-09-18T18:50:24Z) - Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z) - Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z) - Marginal Effects for Non-Linear Prediction Functions [0.7349727826230864]
We introduce a new class of marginal effects termed forward marginal effects.
We argue against summarizing feature effects of a non-linear prediction function in a single metric such as the average marginal effect.
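A forward marginal effect is simply the change in the prediction when a feature is stepped forward by a finite amount, kept per observation rather than averaged. A minimal sketch under that reading, with an illustrative prediction function `f` and step size `h` (both assumptions, not from the paper):

```python
import numpy as np

def f(X):
    # Illustrative non-linear black-box prediction function.
    return X[:, 0] ** 2 + X[:, 1]

def forward_marginal_effects(predict, X, feature, h):
    """Per-observation forward marginal effect: f(x + h * e_j) - f(x)."""
    X_step = X.copy()
    X_step[:, feature] += h
    return predict(X_step) - predict(X)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
fme = forward_marginal_effects(f, X, feature=0, h=0.5)
# For this f, the effect is x0 + 0.25: it varies across observations,
# so collapsing it to one average marginal effect would hide heterogeneity.
```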
arXiv Detail & Related papers (2022-01-21T18:47:38Z) - Long Story Short: Omitted Variable Bias in Causal Machine Learning [26.60315380737132]
We develop a theory of omitted variable bias for a wide range of common causal parameters.
We show how simple plausibility judgments on the maximum explanatory power of omitted variables are sufficient to bound the magnitude of the bias.
We provide flexible and efficient statistical inference methods for the bounds, which can leverage modern machine learning algorithms for estimation.
arXiv Detail & Related papers (2021-12-26T15:38:23Z) - Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z) - Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
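As described, the difference of confidences compares a classifier's average confidence on the source distribution with its average confidence on a shifted distribution, and uses the gap to estimate the performance change. A toy sketch of that computation (the probability matrices here are made-up illustrative values, not data from the paper):

```python
import numpy as np

def avg_confidence(probs):
    """Mean top-class probability across samples (rows are softmax outputs)."""
    return probs.max(axis=1).mean()

def difference_of_confidences(source_probs, target_probs):
    """DoC: drop in average confidence from source to shifted distribution."""
    return avg_confidence(source_probs) - avg_confidence(target_probs)

# Toy predicted-probability matrices; each row sums to 1.
source = np.array([[0.90, 0.10], [0.80, 0.20], [0.95, 0.05]])
target = np.array([[0.60, 0.40], [0.55, 0.45], [0.70, 0.30]])
doc = difference_of_confidences(source, target)
# Under the DoC heuristic, target accuracy is estimated as
# (source accuracy) - doc.
```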
arXiv Detail & Related papers (2021-07-07T15:50:18Z) - BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z) - Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.