Anomaly Attribution with Likelihood Compensation
- URL: http://arxiv.org/abs/2208.10679v1
- Date: Tue, 23 Aug 2022 02:00:20 GMT
- Title: Anomaly Attribution with Likelihood Compensation
- Authors: Tsuyoshi Idé, Amit Dhurandhar, Jiří Navrátil, Moninder Singh, Naoki Abe
- Abstract summary: This paper addresses the task of explaining anomalous predictions of a black-box regression model.
Given model deviation from the expected value, infer the responsibility score of each of the input variables.
To the best of our knowledge, this is the first principled framework that computes a responsibility score for real-valued anomalous model deviations.
- Score: 14.99385222547436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the task of explaining anomalous predictions of a
black-box regression model. When using a black-box model, such as one to
predict building energy consumption from many sensor measurements, we often
have a situation where some observed samples may significantly deviate from
their prediction. It may be due to a sub-optimal black-box model, or simply
because those samples are outliers. In either case, one would ideally want to
compute a "responsibility score" indicative of the extent to which an input
variable is responsible for the anomalous output. In this work, we formalize
this task as a statistical inverse problem: Given model deviation from the
expected value, infer the responsibility score of each of the input variables.
We propose a new method called likelihood compensation (LC), which is founded
on the likelihood principle and computes a correction to each input variable.
To the best of our knowledge, this is the first principled framework that
computes a responsibility score for real-valued anomalous model deviations. We
apply our approach to a real-world building energy prediction task and confirm
its utility based on expert feedback.
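For intuition, here is a minimal numerical sketch of the LC idea under simplifying assumptions: a Gaussian observation model, an L1 penalty on the corrections, and plain finite-difference gradient ascent. The toy black box, step sizes, and penalty weight are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def likelihood_compensation(f, x, y, sigma=1.0, lam=0.1, n_iter=300, lr=0.05):
    """Sketch of likelihood compensation (LC): find per-variable corrections
    delta so that f(x + delta) explains the observed y, trading data fit
    against the size of the correction. Maximizes
        -(y - f(x + delta))^2 / (2 * sigma^2) - lam * ||delta||_1
    by finite-difference gradient ascent (illustrative only).
    """
    d = len(x)
    delta = np.zeros(d)
    eps = 1e-4
    for _ in range(n_iter):
        r = y - f(x + delta)  # current deviation of the observation from the model
        # finite-difference gradient of f at x + delta
        grad_f = np.array([(f(x + delta + eps * np.eye(d)[j]) - f(x + delta)) / eps
                           for j in range(d)])
        delta += lr * ((r / sigma ** 2) * grad_f - lam * np.sign(delta))
    return delta  # larger |delta_j| means variable j is more "responsible"

# Toy usage: a "black-box" linear model and one anomalous observation.
f = lambda z: 2.0 * z[0] + 0.5 * z[1]
print(likelihood_compensation(f, np.array([1.0, 1.0]), y=6.0))
```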
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
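For context, a minimal sketch of the classical confidence-threshold (Chow's rule) baseline for classification with rejection; the paper's contribution is to replace such fixed thresholds with learned density ratios, which is not reproduced here.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.7):
    """Chow's-rule baseline: abstain whenever the model's top-class
    probability falls below a fixed confidence threshold.
    """
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)  # -1 marks "reject"

# Toy usage with three samples over two classes.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.20, 0.80]])
print(predict_with_rejection(probs))  # [0, -1, 1]
```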
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution [2.22999837458579]
We address the task of probabilistic anomaly attribution in the black-box regression setting.
This task differs from the standard XAI (explainable AI) scenario, since we wish to explain the anomalous deviation from a black-box prediction rather than the black-box model itself.
We propose a novel framework for probabilistic anomaly attribution that allows us to not only compute attribution scores as the predictive mean but also quantify the uncertainty of those scores.
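As a rough illustration of attaching uncertainty to attribution scores, here is a generic resampling sketch; it is a stand-in rather than the paper's generative perturbation analysis, and `attribute` is an assumed point-attribution routine.

```python
import numpy as np

def attribution_with_uncertainty(attribute, x, y, n_samples=200, noise=0.05, seed=0):
    """Rerun a point attribution method under small input perturbations and
    report the mean score and its spread as uncertainty. (The paper derives
    both quantities within a single probabilistic framework.)
    """
    rng = np.random.default_rng(seed)
    scores = np.stack([attribute(x + noise * rng.standard_normal(x.shape), y)
                       for _ in range(n_samples)])
    return scores.mean(axis=0), scores.std(axis=0)  # predictive mean, uncertainty

# Toy usage: attribute the residual y - sum(x) equally across inputs.
attribute = lambda x, y: np.full_like(x, (y - x.sum()) / x.size)
mean, std = attribution_with_uncertainty(attribute, np.array([1.0, 2.0]), y=5.0)
```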
arXiv Detail & Related papers (2023-08-09T04:59:06Z)
- Black-Box Anomaly Attribution [13.455748795087493]
When a black-box machine learning model deviates from the true observation, what can be said about the reason behind that deviation?
This is a fundamental and ubiquitous question that the end user in a business or industrial AI application often asks.
We propose a novel likelihood-based attribution framework we call the "likelihood compensation".
arXiv Detail & Related papers (2023-05-29T01:42:32Z)
- $\Delta$-UQ: Accurate Uncertainty Quantification via Anchor Marginalization [40.581619201120716]
We present $\Delta$-UQ, a novel, general-purpose uncertainty estimator using the concept of anchoring in predictive models.
We find that this uncertainty is deeply connected to improper sampling of the input data and to inherent noise, enabling us to estimate the total uncertainty in any system.
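A minimal sketch of the anchoring idea as we read it: the predictor consumes an (anchor, input minus anchor) pair, and predictions are marginalized over several anchors, with their spread read as uncertainty. The `model` below and its training protocol are assumptions made for illustration.

```python
import numpy as np

def anchored_predict(model, x, anchors):
    """Evaluate the model on [anchor, x - anchor] for several anchors drawn
    from the training data, then report the mean prediction and the spread
    across anchors as the uncertainty estimate.
    """
    preds = np.array([model(np.concatenate([a, x - a])) for a in anchors])
    return preds.mean(), preds.std()

# Toy usage: a "trained" model that perfectly undoes the anchor split,
# so its predictions agree across anchors and the uncertainty is zero.
model = lambda z: (z[:2] + z[2:]).sum()  # reconstructs x = a + (x - a)
anchors = [np.zeros(2), np.ones(2), np.full(2, -1.0)]
print(anchored_predict(model, np.array([1.0, 2.0]), anchors))  # (3.0, 0.0)
```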
arXiv Detail & Related papers (2021-10-05T17:44:31Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- A Causal Lens for Peeking into Black Box Predictive Models: Predictive Model Interpretation via Causal Attribution [3.3758186776249928]
We aim to address this problem in settings where the predictive model is a black box.
We reduce the problem of interpreting a black box predictive model to that of estimating the causal effects of each of the model inputs on the model output.
We show how the resulting causal attribution of responsibility for model output to the different model inputs can be used to interpret the predictive model and to explain its predictions.
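To make the reduction concrete, here is a generic do-style interventional estimate of one input's average causal effect on a model's output; the names and values are illustrative, and the paper's estimation machinery is more careful than this sketch.

```python
import numpy as np

def causal_effect(model, X, j, value_hi, value_lo):
    """Estimate the average causal effect of input j on the model output by
    setting x_j to two values while keeping the empirical distribution of
    the other inputs fixed, then differencing the mean outputs.
    """
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, j] = value_hi
    X_lo[:, j] = value_lo
    return model(X_hi).mean() - model(X_lo).mean()

# Toy usage: for a linear "black box", the effect of x_0 recovers its coefficient.
model = lambda X: X @ np.array([2.0, -1.0])
X = np.random.default_rng(0).normal(size=(100, 2))
print(causal_effect(model, X, j=0, value_hi=1.0, value_lo=0.0))  # 2.0
```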
arXiv Detail & Related papers (2020-08-01T23:20:57Z)
- Individual Calibration with Randomized Forecasting [116.2086707626651]
We show that calibration for individual samples is possible in the regression setup if the predictions are randomized.
We design a training objective to enforce individual calibration and use it to train randomized regression functions.
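One simplified reading of the recipe: draw a fresh quantile level r per example and train a forecaster f(x, r) against the pinball (quantile) loss, so the randomized output traces the conditional distribution. The sketch shows only the loss; the paper's objective additionally enforces calibration for individual samples.

```python
import numpy as np

def pinball_loss(y, q_pred, r):
    """Pinball / quantile loss at level r: the natural fit criterion for a
    randomized forecaster whose output should be the r-th conditional
    quantile of y given x.
    """
    diff = y - q_pred
    return np.mean(np.maximum(r * diff, (r - 1.0) * diff))

# During training, r is redrawn per example, e.g.:
rng = np.random.default_rng(0)
r = rng.uniform(size=4)
print(pinball_loss(np.ones(4), q_pred=np.full(4, 0.8), r=r))
```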
arXiv Detail & Related papers (2020-06-18T05:53:10Z)
- Showing Your Work Doesn't Always Work [73.63200097493576]
"Show Your Work: Improved Reporting of Experimental Results" advocates for reporting the expected validation effectiveness of the best-tuned model.
We analytically show that their estimator is biased and uses error-prone assumptions.
We derive an unbiased alternative and bolster our claims with empirical evidence from statistical simulation.
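For reference, a sketch of the with-replacement "expected best of n runs" computation from the original "Show Your Work" proposal, i.e. the estimator whose bias this paper analyzes; the unbiased alternative derived in the paper is not reproduced here.

```python
import numpy as np

def expected_max_validation(scores, n):
    """Expected best validation score among n hyperparameter draws, estimated
    from N observed runs via the plug-in (with-replacement) formula
        E[max] = sum_i v_(i) * ((i/N)^n - ((i-1)/N)^n),
    where v_(1) <= ... <= v_(N) are the sorted observed scores.
    """
    v = np.sort(scores)
    N = len(v)
    i = np.arange(1, N + 1)
    weights = (i / N) ** n - ((i - 1) / N) ** n
    return float(v @ weights)

print(expected_max_validation(np.array([0.70, 0.75, 0.80]), n=2))  # ~0.772
```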
arXiv Detail & Related papers (2020-04-28T17:59:01Z)
- Estimation of Accurate and Calibrated Uncertainties in Deterministic models [0.8702432681310401]
We devise a method to transform a deterministic prediction into a probabilistic one.
We show that doing so requires a compromise between the accuracy and the reliability (calibration) of such a model.
We show several examples both with synthetic data, where the underlying hidden noise can accurately be recovered, and with large real-world datasets.
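The simplest instance of such a transformation is fitting a single noise scale to held-out residuals; this homoscedastic baseline only makes the idea concrete (the paper studies more refined, better-calibrated variants, and all names below are illustrative).

```python
import numpy as np

def gaussianize(model, X_val, y_val):
    """Turn a deterministic predictor into a (homoscedastic) probabilistic
    one: estimate a single Gaussian noise scale from held-out residuals and
    return a mean/std prediction per input.
    """
    sigma = float(np.std(y_val - model(X_val)))
    return lambda X: (model(X), sigma)  # predictive mean and standard deviation

# Toy usage with a noisy linear relationship.
rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + 0.5 * rng.standard_normal(100)
predict = gaussianize(lambda X: 3.0 * X, X, y)
print(predict(np.array([1.0])))  # (array([3.]), ~0.5)
```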
arXiv Detail & Related papers (2020-03-11T04:02:56Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
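One concrete way to decouple the decision-time posterior from the variational distribution is a self-normalized importance-sampling correction, sketched below under generic names; this is our illustration of the idea, not the paper's exact estimator.

```python
import numpy as np

def snis_expectation(h_vals, log_p, log_q):
    """Self-normalized importance sampling: estimate E_p[h(z)] from samples
    z ~ q by reweighting with w = p(z)/q(z); normalizing constants cancel.
    Here p plays the role of the decision-time posterior and q the
    variational distribution used during training.
    """
    log_w = log_p - log_q
    w = np.exp(log_w - log_w.max())  # stabilize before normalizing
    w /= w.sum()
    return float(w @ h_vals)

# Toy usage: samples from q = N(0, 1), target p = N(1, 1), h(z) = z.
rng = np.random.default_rng(0)
z = rng.standard_normal(5000)
print(snis_expectation(z, log_p=-0.5 * (z - 1.0) ** 2, log_q=-0.5 * z ** 2))  # ~1.0
```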
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.