Explainability as statistical inference
- URL: http://arxiv.org/abs/2212.03131v3
- Date: Fri, 29 Dec 2023 13:36:59 GMT
- Title: Explainability as statistical inference
- Authors: Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
- Abstract summary: We propose a general deep probabilistic model designed to produce interpretable predictions.
The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture.
We show experimentally that using multiple imputation provides more reasonable interpretations.
- Score: 29.74336283497203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A wide variety of model explanation approaches have been proposed in recent
years, all guided by very different rationales and heuristics. In this paper,
we take a new route and cast interpretability as a statistical inference
problem. We propose a general deep probabilistic model designed to produce
interpretable predictions. The model parameters can be learned via maximum
likelihood, and the method can be adapted to any predictor network architecture
and any type of prediction problem. Our method is a case of amortized
interpretability models, where a neural network is used as a selector to allow
for fast interpretation at inference time. Several popular interpretability
methods are shown to be particular cases of regularised maximum likelihood for
our general model. We propose new datasets with ground truth selection which
allow for the evaluation of feature importance maps. Using these datasets,
we show experimentally that using multiple imputation provides more reasonable
interpretations.
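To make the amortized-selector idea concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code): a selector network outputs per-feature selection probabilities, a relaxed Bernoulli sample masks the input, the masked-out features are filled by multiple imputation (naively resampled from a data pool here), and the predictor's likelihood is maximized end to end.

```python
import torch
import torch.nn as nn
from torch.distributions import RelaxedBernoulli

D, C, K = 20, 2, 5                          # features, classes, imputations
selector = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, D))
predictor = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, C))
opt = torch.optim.Adam(list(selector.parameters()) + list(predictor.parameters()))

def neg_log_lik(x, y, x_pool):
    probs = torch.sigmoid(selector(x))               # per-feature selection probs
    mask = RelaxedBernoulli(torch.tensor(0.5), probs=probs).rsample()
    nll = 0.0
    for _ in range(K):                               # multiple imputation
        fill = x_pool[torch.randint(len(x_pool), (len(x),))]
        x_imp = mask * x + (1 - mask) * fill         # impute unselected features
        nll = nll + nn.functional.cross_entropy(predictor(x_imp), y)
    return nll / K

x, y = torch.randn(32, D), torch.randint(0, C, (32,))
x_pool = torch.randn(1000, D)                        # pool for naive imputation
opt.zero_grad(); neg_log_lik(x, y, x_pool).backward(); opt.step()
```

A real implementation would add a sparsity regulariser on the selection probabilities and a proper imputation model; both are omitted here for brevity.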
Related papers
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this "model completion" learning approach can be more effective than estimand approaches.
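As a hedged toy illustration of answering such a query once a network is in hand (the paper learns the network and its latent confounders, which this sketch skips), the backdoor adjustment P(y | do(x)) = Σ_z P(y | x, z) P(z) can be estimated from discrete observational data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n)                          # confounder Z -> X, Z -> Y
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))    # treatment influenced by Z
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)       # outcome depends on X and Z

# Backdoor adjustment: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) P(Z=z)
adjusted = sum(y[(x == 1) & (z == v)].mean() * (z == v).mean() for v in (0, 1))
naive = y[x == 1].mean()                           # confounded estimate
print(f"adjusted={adjusted:.3f}  naive={naive:.3f}  truth=0.600")
```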
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
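A hedged sketch of the target-driven step, with a plain Gaussian mixture standing in for the paper's variational model of the destination distribution:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy endpoints of past trajectories, clustered around two destinations.
ends = np.vstack([rng.normal([0, 10], 1.0, (200, 2)),
                  rng.normal([10, 0], 1.0, (200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(ends)
goals, _ = gmm.sample(5)          # plausible long-term destinations
print(goals.round(1))             # a decoder would condition trajectories on these
```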
arXiv Detail & Related papers (2024-03-10T04:16:04Z)
- A Lightweight Generative Model for Interpretable Subject-level Prediction [0.07989135005592125]
We propose a technique for single-subject prediction that is inherently interpretable.
Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions.
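A minimal sketch of the underlying inversion, assuming simple Gaussian class-conditional densities (the paper's generative model is richer): Bayes' rule gives p(y | x) ∝ p(x | y) p(y).

```python
import numpy as np
from scipy.stats import multivariate_normal

priors = {0: 0.5, 1: 0.5}                                 # p(y)
densities = {0: multivariate_normal([0, 0], np.eye(2)),   # p(x | y=0)
             1: multivariate_normal([2, 2], np.eye(2))}   # p(x | y=1)

def predict(x):
    joint = {y: d.pdf(x) * priors[y] for y, d in densities.items()}
    z = sum(joint.values())
    return {y: v / z for y, v in joint.items()}           # p(y | x)

print(predict([1.8, 1.5]))
```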
arXiv Detail & Related papers (2023-06-19T18:20:29Z)
- Predictive Multiplicity in Probabilistic Classification [25.111463701666864]
We present a framework for measuring predictive multiplicity in probabilistic classification.
We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks.
Our results emphasize the need to report predictive multiplicity more widely.
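A hedged sketch of one way to surface such multiplicity (a stand-in for the paper's formal measures): fit several near-equally-accurate classifiers that differ only in random seed and report the per-example spread of their risk scores.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)
# Competing models: same data, near-identical accuracy, different seeds.
probs = np.stack([
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=s)
    .fit(X, y).predict_proba(X)[:, 1]
    for s in range(5)
])
spread = probs.max(axis=0) - probs.min(axis=0)   # per-example disagreement
print(f"max spread={spread.max():.2f}, share>0.1: {(spread > 0.1).mean():.1%}")
```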
arXiv Detail & Related papers (2022-06-02T16:25:29Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that guides the prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
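As a toy illustration of rationale extraction in general (DIR additionally enforces invariance across interventional distributions, which this sketch omits), one can keep the top-k edges of a learned importance mask as the explanatory subgraph:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # input graph edges
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.05])      # learned mask scores

k = 2
rationale = [edges[i] for i in np.argsort(scores)[::-1][:k]]
print(rationale)        # the small edge subset offered as the explanation
```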
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Autoregressive Quantile Flows for Predictive Uncertainty Estimation [7.184701179854522]
We propose Autoregressive Quantile Flows, a flexible class of probabilistic models over high-dimensional variables.
These models are instances of autoregressive flows trained using a novel objective based on proper scoring rules.
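The proper-scoring-rule ingredient can be illustrated with the pinball (quantile) loss, a minimal stand-in for the paper's full flow objective; minimizing it recovers the target quantile:

```python
import torch

def pinball_loss(pred, target, tau):
    # Proper scoring rule whose minimizer is the tau-th quantile.
    err = target - pred
    return torch.maximum(tau * err, (tau - 1) * err).mean()

torch.manual_seed(0)
q = torch.zeros(1, requires_grad=True)        # scalar quantile estimate
opt = torch.optim.Adam([q], lr=0.01)
data = torch.randn(10_000)
for _ in range(1000):
    opt.zero_grad()
    pinball_loss(q, data, 0.9).backward()
    opt.step()
print(q.item())     # ~1.28, the 0.9 quantile of a standard normal
```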
arXiv Detail & Related papers (2021-12-09T01:11:26Z)
- Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
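A hedged sketch of the instance-based step: score a candidate edge by similarity to labeled training edges, and let the nearest neighbour supply both the label and the supporting evidence. The words and embeddings below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["ate", "cat", "fish", "the", "dog"]}

def edge_vec(head, dep):
    # Represent an edge by concatenated head/dependent embeddings.
    return np.concatenate([emb[head], emb[dep]])

# Tiny labeled training set of (head, dependent, relation) edges.
train = [("ate", "cat", "nsubj"), ("ate", "fish", "obj"), ("cat", "the", "det")]
train_vecs = np.stack([edge_vec(h, d) for h, d, _ in train])

def label_edge(head, dep):
    v = edge_vec(head, dep)
    sims = train_vecs @ v / (np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(v))
    i = int(np.argmax(sims))            # nearest labeled training edge
    return train[i][2], train[i][:2]    # prediction plus supporting instance

print(label_edge("ate", "dog"))
```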
arXiv Detail & Related papers (2021-09-28T05:30:52Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
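A minimal, hypothetical sketch of the diversity-enforcing idea (not the paper's architecture): optimize several latent perturbations that flip a stand-in classifier while a pairwise-distance term keeps them apart.

```python
import torch

torch.manual_seed(0)
clf = torch.nn.Linear(4, 1)                      # stand-in classifier on latents
z = torch.randn(4)                               # latent code of one input
deltas = (0.01 * torch.randn(3, 4)).requires_grad_()   # candidate perturbations
opt = torch.optim.Adam([deltas], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    logits = clf(z + deltas).squeeze(-1)
    flip = torch.nn.functional.softplus(logits).mean()  # push toward the other class
    diversity = -torch.pdist(deltas).mean()             # keep perturbations apart
    proximity = deltas.norm(dim=1).mean()               # stay near the input
    (flip + 0.05 * diversity + 0.2 * proximity).backward()
    opt.step()
print(torch.pdist(deltas).detach())   # nonzero: diverse counterfactual directions
```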
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- A comprehensive study on the prediction reliability of graph neural networks for virtual screening [0.0]
We investigate the effects of model architectures, regularization methods, and loss functions on the prediction performance and reliability of classification results.
Our results highlight that the correct choice of regularization and inference methods is important for achieving a high success rate.
arXiv Detail & Related papers (2020-03-17T10:13:31Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
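The MHP extension builds on a relaxed winner-takes-all loss, sketched here in a hypothetical form: the model emits K hypotheses and the closest one absorbs most of the penalty, so the heads spread over the plausible futures.

```python
import torch

def relaxed_wta(hyps, target, eps=0.05):
    # hyps: (K, D) hypotheses; the closest one absorbs most of the penalty.
    errs = ((hyps - target) ** 2).sum(dim=1)
    weights = torch.full_like(errs, eps / (len(errs) - 1))
    weights[errs.argmin()] = 1.0 - eps
    return (weights * errs).sum()

torch.manual_seed(0)
hyps = torch.randn(4, 2, requires_grad=True)       # K=4 future hypotheses
opt = torch.optim.Adam([hyps], lr=0.05)
modes = torch.tensor([[2.0, 0.0], [-2.0, 0.0]])    # two equally likely futures
for _ in range(1000):
    opt.zero_grad()
    target = modes[torch.randint(2, (1,)).item()]  # ambiguous ground truth
    relaxed_wta(hyps, target).backward()
    opt.step()
print(hyps.detach())     # heads should cluster around both modes
```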
arXiv Detail & Related papers (2020-03-10T09:15:42Z)