Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning
- URL: http://arxiv.org/abs/2105.02410v1
- Date: Thu, 6 May 2021 03:06:34 GMT
- Title: Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning
- Authors: Tong Wang, Jingyi Yang, Yunyi Li, Boxiang Wang
- Abstract summary: We propose Partially Interpretable Estimators (PIE) which attribute a prediction to individual features via an interpretable model.
We design an iterative training algorithm to jointly train the two types of models.
Experimental results show that PIE is highly competitive with black-box models while outperforming interpretable baselines.
- Score: 5.479705009242287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Partially Interpretable Estimators (PIE) which attribute a
prediction to individual features via an interpretable model, while a
(possibly) small part of the PIE prediction is attributed to the interaction of
features via a black-box model, with the goal to boost the predictive
performance while maintaining interpretability. As such, the interpretable
model captures the main contributions of features, and the black-box model
attempts to complement the interpretable piece by capturing the "nuances" of
feature interactions as a refinement. We design an iterative training algorithm
to jointly train the two types of models. Experimental results show that PIE is
highly competitive with black-box models while outperforming interpretable
baselines. In addition, the understandability of PIE is comparable to simple
linear models as validated via a human evaluation.
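As a rough illustration of this decomposition, the sketch below pairs a linear model (the interpretable part) with a gradient-boosting regressor (the black-box refinement) and alternates fitting each on the other's residual. The class name `PIESketch`, the `shrink` weight, and the choice of Ridge and gradient boosting are illustrative assumptions, not the authors' implementation; the paper's actual objective and update rules may differ.
```python
# Minimal sketch of a PIE-style estimator (not the authors' code):
# an interpretable linear model captures per-feature contributions, and a
# black-box gradient-boosting model is fit to the residual to capture
# feature interactions. The two parts are trained by alternating updates.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor

class PIESketch:
    def __init__(self, n_iters=5, shrink=1.0):
        self.interpretable = Ridge(alpha=1.0)         # f(x): per-feature attribution
        self.black_box = GradientBoostingRegressor()  # g(x): interaction refinement
        self.n_iters = n_iters
        self.shrink = shrink                          # weight on the black-box part

    def fit(self, X, y):
        g_pred = np.zeros(len(y))
        for _ in range(self.n_iters):
            # Fit the interpretable part on what the black box does not explain.
            self.interpretable.fit(X, y - self.shrink * g_pred)
            f_pred = self.interpretable.predict(X)
            # Fit the black box on the residual left by the interpretable part.
            self.black_box.fit(X, y - f_pred)
            g_pred = self.black_box.predict(X)
        return self

    def predict(self, X):
        return self.interpretable.predict(X) + self.shrink * self.black_box.predict(X)
```
In this sketch the hypothetical `shrink` knob keeps the black-box contribution small, mirroring the paper's goal of attributing only a (possibly) small part of the prediction to feature interactions.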
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Exploring the cloud of feature interaction scores in a Rashomon set [17.775145325515993]
We introduce the feature interaction score (FIS) in the context of a Rashomon set.
We demonstrate the properties of the FIS via synthetic data and draw connections to other areas of statistics.
Our results suggest that the proposed FIS can provide valuable insights into the nature of feature interactions in machine learning models.
arXiv Detail & Related papers (2023-05-17T13:05:26Z)
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models exhibit strong prediction bias across labels when given only a few examples.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that models gain performance improvements by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates [26.527311287924995]
We show that, in a controlled setup, influence tuning can help deconfound the model from spurious patterns in the data.
arXiv Detail & Related papers (2021-10-07T06:59:46Z)
- Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
arXiv Detail & Related papers (2021-09-28T05:30:52Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
- A Causal Lens for Peeking into Black Box Predictive Models: Predictive Model Interpretation via Causal Attribution [3.3758186776249928]
We aim to address this problem in settings where the predictive model is a black box.
We reduce the problem of interpreting a black box predictive model to that of estimating the causal effects of each of the model inputs on the model output.
We show how the resulting causal attribution of responsibility for model output to the different model inputs can be used to interpret the predictive model and to explain its predictions.
arXiv Detail & Related papers (2020-08-01T23:20:57Z)
- A Semiparametric Approach to Interpretable Machine Learning [9.87381939016363]
Black box models in machine learning have demonstrated excellent predictive performance in complex problems and high-dimensional settings.
However, their lack of transparency and interpretability restricts the applicability of such models in critical decision-making processes.
We propose a novel approach to trading off interpretability and performance in prediction models using ideas from semiparametric statistics.
arXiv Detail & Related papers (2020-06-08T16:38:15Z)
- An interpretable neural network model through piecewise linear approximation [7.196650216279683]
We propose a hybrid interpretable model that combines a piecewise linear component and a nonlinear component.
The first component describes the explicit feature contributions by piecewise linear approximation to increase the expressiveness of the model.
The other component uses a multi-layer perceptron to capture feature interactions and implicit nonlinearity, increasing prediction performance (a rough code sketch of this hybrid design follows this entry).
arXiv Detail & Related papers (2020-01-20T14:32:11Z)
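The hybrid design in the last entry above is close in spirit to PIE. Below is a minimal sketch, assuming a degree-1 spline expansion to obtain per-feature piecewise-linear contributions and an MLP fit on the residual for interactions; the paper trains its components jointly and its exact architecture may differ.
```python
# Rough sketch of a piecewise-linear + MLP hybrid (not the authors' code):
# per-feature piecewise-linear contributions via a degree-1 spline expansion
# feeding a linear model, plus an MLP fit to the residual to capture
# feature interactions and remaining nonlinearity.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def fit_hybrid(X, y):
    # Degree-1 splines keep each feature's contribution readable as a
    # piecewise-linear curve.
    piecewise = make_pipeline(SplineTransformer(degree=1, n_knots=5),
                              LinearRegression())
    piecewise.fit(X, y)
    residual = y - piecewise.predict(X)
    # The MLP only models what the piecewise-linear part leaves unexplained.
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000)
    mlp.fit(X, residual)
    return piecewise, mlp

def predict_hybrid(models, X):
    piecewise, mlp = models
    return piecewise.predict(X) + mlp.predict(X)
```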
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.