Approximating Score-based Explanation Techniques Using Conformal
Regression
- URL: http://arxiv.org/abs/2308.11975v1
- Date: Wed, 23 Aug 2023 07:50:43 GMT
- Title: Approximating Score-based Explanation Techniques Using Conformal
Regression
- Authors: Amr Alkhatib, Henrik Boström, Sofiane Ennadir, Ulf Johansson
- Abstract summary: Score-based explainable machine-learning techniques are often used to understand the logic behind black-box models.
We propose and investigate the use of computationally less costly regression models for approximating the output of score-based explanation techniques, such as SHAP.
We present results from a large-scale empirical investigation, in which the approximate explanations generated by our proposed models are evaluated with respect to efficiency.
- Score: 0.1843404256219181
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Score-based explainable machine-learning techniques are often used to
understand the logic behind black-box models. However, such explanation
techniques are often computationally expensive, which limits their application
in time-critical contexts. Therefore, we propose and investigate the use of
computationally less costly regression models for approximating the output of
score-based explanation techniques, such as SHAP. Moreover, validity guarantees
for the approximated values are provided by the employed inductive conformal
prediction framework. We propose several non-conformity measures designed to
take the difficulty of approximating the explanations into account while
keeping the computational cost low. We present results from a large-scale
empirical investigation, in which the approximate explanations generated by our
proposed models are evaluated with respect to efficiency (interval size). The
results indicate that the proposed method can significantly improve execution
time compared to the fast version of SHAP, TreeSHAP. The results also suggest
that the proposed method can produce tight intervals, while providing validity
guarantees. Moreover, the proposed approach allows for comparing explanations
of different approximation methods and selecting a method based on how
informative (tight) the predicted intervals are.
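The pipeline the abstract describes can be sketched concretely: fit a cheap regression model to precomputed explanation scores, then wrap it in inductive (split) conformal prediction to get validity-guaranteed intervals. The data, the least-squares model, the split sizes, and the plain absolute-error nonconformity measure below are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the expensive explainer: `shap_values` plays the role of
# precomputed SHAP scores for one feature; `X` are the instances.
n = 1000
X = rng.normal(size=(n, 3))
shap_values = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.05, size=n)

# Inductive setup: a proper training set for the cheap approximating model,
# a held-out calibration set for the conformal step.
X_train, y_train = X[:600], shap_values[:600]
X_cal, y_cal = X[600:800], shap_values[600:800]
X_test, y_test = X[800:], shap_values[800:]

# Cheap approximating model: ordinary least squares with an intercept.
design = lambda A: np.c_[np.ones(len(A)), A]
beta, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)
predict = lambda A: design(A) @ beta

# Absolute-error nonconformity scores on the calibration set, and the
# conformal quantile for miscoverage level alpha.
alpha = 0.1
scores = np.abs(y_cal - predict(X_cal))
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# (1 - alpha)-valid prediction intervals for the approximated explanations.
y_hat = predict(X_test)
lower, upper = y_hat - q, y_hat + q
```

In this plain variant every test instance gets an interval of the same width 2q; the difficulty-aware nonconformity measures the paper proposes would instead divide each score by a per-instance difficulty estimate, yielding intervals that widen where the explanation is hard to approximate and tighten where it is easy.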
Related papers
- Building Conformal Prediction Intervals with Approximate Message Passing [14.951392270119461]
Conformal prediction is a powerful tool for building prediction intervals that are valid in a distribution-free way.
We propose a novel algorithm based on Approximate Message Passing (AMP) to accelerate the computation of prediction intervals.
We show that our method produces prediction intervals that are close to the baseline methods, while being orders of magnitude faster.
arXiv Detail & Related papers (2024-10-21T20:34:33Z) - Online estimation methods for irregular autoregressive models [0.0]
Currently available methods for addressing this problem, the so-called online learning methods, use current parameter estimates and novel data to update the estimators.
In this work, we consider three online learning algorithms for parameter estimation in the context of time series models.
arXiv Detail & Related papers (2023-01-31T19:52:04Z) - Spectral Representation Learning for Conditional Moment Models [33.34244475589745]
We propose a procedure that automatically learns representations with controlled measures of ill-posedness.
Our method approximates a linear representation defined by the spectral decomposition of a conditional expectation operator.
We show this representation can be efficiently estimated from data, and establish L2 consistency for the resulting estimator.
arXiv Detail & Related papers (2022-10-29T07:48:29Z) - The Stochastic Proximal Distance Algorithm [5.3315823983402755]
We propose and analyze a class of iterative optimization methods that recover a desired constrained estimation problem as a penalty parameter grows.
We extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rates.
We validate our analysis via a thorough empirical study, also showing that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks.
arXiv Detail & Related papers (2022-10-21T22:07:28Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
arXiv Detail & Related papers (2020-11-05T18:43:59Z) - BERT Loses Patience: Fast and Robust Inference with Early Exit [91.26199404912019]
We propose Patience-based Early Exit as a plug-and-play technique to improve the efficiency and robustness of a pretrained language model.
Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers.
arXiv Detail & Related papers (2020-06-07T13:38:32Z) - Efficient Ensemble Model Generation for Uncertainty Estimation with
Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
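As an aside on the "BERT Loses Patience" entry above: the patience-based early-exit rule it proposes can be sketched in a few lines. This is an illustrative caricature, not the paper's implementation; `layer_predictions` stands in for labels produced by hypothetical internal classifiers attached to successive layers:

```python
# Walk through per-layer predictions and stop as soon as `patience`
# consecutive internal classifiers agree on the same label.
def patience_early_exit(layer_predictions, patience=2):
    """Return (predicted label, number of layers used)."""
    streak, last, depth = 0, None, 0
    for depth, pred in enumerate(layer_predictions, start=1):
        if pred == last:
            streak += 1
        else:
            streak, last = 1, pred
        if streak >= patience:
            return pred, depth  # stable enough: exit early
    return last, depth  # never stabilized: fall through to the last layer
```

For example, `patience_early_exit([1, 2, 2, 2, 3], patience=2)` returns `(2, 3)`: layers 2 and 3 agree, so the model commits to label 2 after only three layers. This is how early exit trades a little depth for inference speed, and skipping the later layers is also what the paper credits for the robustness gains.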
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.