Explaining a Series of Models by Propagating Local Feature Attributions
- URL: http://arxiv.org/abs/2105.00108v1
- Date: Fri, 30 Apr 2021 22:20:58 GMT
- Authors: Hugh Chen, Scott M. Lundberg, Su-In Lee
- Abstract summary: Pipelines involving several machine learning models improve performance in many domains but are difficult to understand.
We introduce a framework to propagate local feature attributions through complex pipelines of models based on a connection to the Shapley value.
Our framework enables us to draw higher-level conclusions based on groups of gene expression features for Alzheimer's and breast cancer histologic grade prediction.
- Score: 9.66840768820136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pipelines involving a series of several machine learning models (e.g.,
stacked generalization ensembles, neural network feature extractors) improve
performance in many domains but are difficult to understand. To improve their
transparency, we introduce a framework to propagate local feature attributions
through complex pipelines of models based on a connection to the Shapley value.
Our framework enables us to (1) draw higher-level conclusions based on groups
of gene expression features for Alzheimer's and breast cancer histologic grade
prediction, (2) draw important insights about the errors a mortality prediction
model makes by explaining a loss that is a non-linear transformation of the
model's output, (3) explain pipelines of deep feature extractors fed into a
tree model for MNIST digit classification, and (4) interpret important consumer
scores and raw features in a stacked generalization setting to predict risk for
home equity line of credit applications. Importantly, in the consumer scoring
example, DeepSHAP is the only feature attribution technique we are aware of
that allows independent entities (e.g., lending institutions, credit bureaus)
to compute attributions for the original features without having to share their
proprietary models. Quantitatively comparing our framework to model-agnostic
approaches, we show that our approach is an order of magnitude faster while
providing equally salient explanations. In addition, we describe how to
incorporate an empirical baseline distribution, which allows us to (1)
demonstrate the bias of previous approaches that use a single baseline sample,
and (2) present a straightforward methodology for choosing meaningful baseline
distributions.
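The propagation idea in the abstract can be sketched numerically. The following is a minimal NumPy illustration, not the paper's DeepSHAP implementation: it assumes a hypothetical two-stage pipeline of linear models, for which Shapley attributions have the closed form phi_i = w_i * (x_i - baseline_i), so attributions propagated stage by stage can be checked exactly against explaining the composed model directly. It also averages the closed-form attributions over an empirical baseline distribution, as the abstract recommends, rather than relying on a single baseline sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pipeline: feature extractor h(x) = W1 @ x feeding a scorer
# g(z) = w2 @ z. Both stages are linear so Shapley values are closed-form.
W1 = rng.normal(size=(3, 5))   # 5 raw features -> 3 intermediate features
w2 = rng.normal(size=3)

x = rng.normal(size=5)         # sample to explain
baseline = rng.normal(size=5)  # a single baseline sample

# Attributions of g with respect to the intermediate features z = h(x):
z, z0 = W1 @ x, W1 @ baseline
phi_z = w2 * (z - z0)

# Propagate each intermediate attribution back to the raw features in
# proportion to each raw feature's contribution to that intermediate unit.
contrib = W1 * (x - baseline)                    # contrib[k, j]: x_j -> z_k
shares = contrib / contrib.sum(axis=1, keepdims=True)
phi_x = shares.T @ phi_z

# Sanity checks: the propagated attributions match explaining the composed
# model g(h(x)) directly, and they sum to f(x) - f(baseline) (local accuracy).
phi_direct = (w2 @ W1) * (x - baseline)
assert np.allclose(phi_x, phi_direct)
assert np.isclose(phi_x.sum(), w2 @ W1 @ x - w2 @ W1 @ baseline)

# Empirical baseline distribution: average attributions over many baseline
# samples instead of committing to a single (potentially biased) one.
baselines = rng.normal(size=(100, 5))
phi_avg = np.mean([(w2 @ W1) * (x - b) for b in baselines], axis=0)
```

The linearity assumption is what makes the check exact here; for nonlinear stages (deep feature extractors, tree models) the paper's framework uses DeepSHAP-style rules to approximate the same propagation.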
Related papers
- Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models [42.17166746027585]
We introduce a bidirectional weighted graph-based framework to learn factorized attributes and their interrelations within complex data.
Specifically, we propose a $\beta$-VAE based module to extract factors as the initial nodes of the graph.
By integrating these complementary modules, our model successfully achieves fine-grained, practical and unsupervised disentanglement.
arXiv Detail & Related papers (2024-07-26T15:32:21Z)
- Bayesian Exploration of Pre-trained Models for Low-shot Image Classification [14.211305168954594]
This work proposes a simple and effective probabilistic model ensemble framework based on Gaussian processes.
We achieve the integration of prior knowledge by specifying the mean function with CLIP and the kernel function.
We demonstrate that our method consistently outperforms competitive ensemble baselines regarding predictive performance.
arXiv Detail & Related papers (2024-03-30T10:25:28Z)
- Prospector Heads: Generalized Feature Attribution for Large Models & Data [82.02696069543454]
We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods.
We demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data.
arXiv Detail & Related papers (2024-02-18T23:01:28Z)
- Grouping Shapley Value Feature Importances of Random Forests for explainable Yield Prediction [0.8543936047647136]
We explain the concept of Shapley values directly computed for groups of features and introduce an algorithm to compute them efficiently on tree structures.
We provide a blueprint for designing swarm plots that combine many local explanations for global understanding.
arXiv Detail & Related papers (2023-04-14T13:03:33Z)
- Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning [8.831954614241234]
We introduce a family of interpretable machine learning models, with two broad additions: Linearised Additive Models (LAMs) and SubscaleHedge.
LAMs replace the ubiquitous logistic link function in Generalized Additive Models (GAMs), and SubscaleHedge is an expert advice algorithm for combining base models trained on subsets of features called subscales.
arXiv Detail & Related papers (2022-11-11T17:21:57Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery has proposed to factorize the data generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Controlling for sparsity in sparse factor analysis models: adaptive latent feature sharing for piecewise linear dimensionality reduction [2.896192909215469]
We propose a simple and tractable parametric feature allocation model which can address key limitations of current latent feature decomposition techniques.
We derive a novel adaptive factor analysis (aFA), as well as an adaptive probabilistic principal component analysis (aPPCA), capable of flexible structure discovery and dimensionality reduction.
We show that aPPCA and aFA can infer interpretable high level features both when applied on raw MNIST and when applied for interpreting autoencoder features.
arXiv Detail & Related papers (2020-06-22T16:09:11Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.