Generative Perturbation Analysis for Probabilistic Black-Box Anomaly
Attribution
- URL: http://arxiv.org/abs/2308.04708v1
- Date: Wed, 9 Aug 2023 04:59:06 GMT
- Title: Generative Perturbation Analysis for Probabilistic Black-Box Anomaly
Attribution
- Authors: Tsuyoshi Idé and Naoki Abe
- Abstract summary: We address the task of probabilistic anomaly attribution in the black-box regression setting.
This task differs from the standard XAI (explainable AI) scenario, since we wish to explain the anomalous deviation from a black-box prediction rather than the black-box model itself.
We propose a novel framework for probabilistic anomaly attribution that allows us to not only compute attribution scores as the predictive mean but also quantify the uncertainty of those scores.
- Score: 2.22999837458579
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We address the task of probabilistic anomaly attribution in the black-box
regression setting, where the goal is to compute the probability distribution
of the attribution score of each input variable, given an observed anomaly. The
training dataset is assumed to be unavailable. This task differs from the
standard XAI (explainable AI) scenario, since we wish to explain the anomalous
deviation from a black-box prediction rather than the black-box model itself.
We begin by showing that mainstream model-agnostic explanation methods, such
as Shapley values, are not suitable for this task because of their
"deviation-agnostic" property. We then propose a novel framework for
probabilistic anomaly attribution that allows us to not only compute
attribution scores as the predictive mean but also quantify the uncertainty of
those scores. This is done by considering a generative process for
perturbations that counterfactually bring the observed anomalous observation
back to normalcy. We introduce a variational Bayes algorithm for deriving the
distributions of per-variable attribution scores. To the best of our knowledge,
this is the first probabilistic anomaly attribution framework that is free from
being deviation-agnostic.
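The core idea in the abstract can be illustrated with a minimal, hypothetical sketch: search for the smallest perturbation delta such that the black-box prediction f(x + delta) matches the observed value y, and read delta off as the per-variable attribution. Everything below is an assumption for illustration only: the name `attribute_anomaly`, its parameters, and the plain gradient loop with an l2 penalty are not from the paper, which derives full posterior distributions over the perturbation via variational Bayes rather than the point estimate computed here.

```python
import numpy as np

def attribute_anomaly(f, x, y, lam=0.1, lr=0.05, n_steps=500):
    """Toy sketch (not the paper's algorithm): find a small perturbation
    delta so that f(x + delta) is close to the observed value y, i.e. a
    perturbation that brings the anomaly back to normalcy. The entries of
    delta serve as per-variable attribution scores (point estimate only)."""
    delta = np.zeros_like(x)
    eps = 1e-4
    for _ in range(n_steps):
        # squared deviation of the perturbed prediction from the observation
        base = (f(x + delta) - y) ** 2
        # finite-difference gradient of the squared deviation w.r.t. delta
        grad = np.zeros_like(x)
        for i in range(len(x)):
            d = delta.copy()
            d[i] += eps
            grad[i] = ((f(x + d) - y) ** 2 - base) / eps
        # l2 regularizer keeps the perturbation (attribution) small
        delta -= lr * (grad + lam * delta)
    return delta
```

For example, with a black box f(v) = v[0] + 2*v[1], an input x = (1, 3), and an observed value y = 3 (far below f(x) = 7), the recovered delta concentrates on the second variable, whose larger sensitivity makes it the cheapest way to close the deviation.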
Related papers
- Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Invariant Probabilistic Prediction [45.90606906307022]
We show that arbitrary distribution shifts do not, in general, admit invariant and robust probabilistic predictions.
We propose a method to yield invariant probabilistic predictions, called IPP, and study the consistency of the underlying parameters.
arXiv Detail & Related papers (2023-09-18T18:50:24Z)
- Deep Evidential Learning for Bayesian Quantile Regression [3.6294895527930504]
It is desirable to have accurate uncertainty estimation from a single deterministic forward-pass model.
This paper proposes a deep Bayesian quantile regression model that can estimate the quantiles of a continuous target distribution without the Gaussian assumption.
arXiv Detail & Related papers (2023-08-21T11:42:16Z)
- Variational Prediction [95.00085314353436]
We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test time marginalization costs.
arXiv Detail & Related papers (2023-07-14T18:19:31Z)
- Black-Box Anomaly Attribution [13.455748795087493]
When a black-box machine learning model deviates from the true observation, what can be said about the reason behind that deviation?
This is a fundamental and ubiquitous question that the end user in a business or industrial AI application often asks.
We propose a novel likelihood-based attribution framework we call the "likelihood compensation."
arXiv Detail & Related papers (2023-05-29T01:42:32Z)
- Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations [67.40641255908443]
We identify limitations of model-randomization-based sanity checks for the purpose of evaluating explanations.
Top-down model randomization preserves scales of forward pass activations with high probability.
arXiv Detail & Related papers (2022-11-22T18:52:38Z)
- Anomaly Attribution with Likelihood Compensation [14.99385222547436]
This paper addresses the task of explaining anomalous predictions of a black-box regression model.
Given the model's deviation from the expected value, the task is to infer the responsibility score of each input variable.
To the best of our knowledge, this is the first principled framework that computes a responsibility score for real-valued anomalous model deviations.
arXiv Detail & Related papers (2022-08-23T02:00:20Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered, and the uncertainty is modeled by modifying the output space to a certain family of probability distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Estimation of Accurate and Calibrated Uncertainties in Deterministic models [0.8702432681310401]
We devise a method to transform a deterministic prediction into a probabilistic one.
We show that for doing so, one has to compromise between the accuracy and the reliability (calibration) of such a model.
We show several examples both with synthetic data, where the underlying hidden noise can accurately be recovered, and with large real-world datasets.
arXiv Detail & Related papers (2020-03-11T04:02:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.