A general-purpose method for applying Explainable AI for Anomaly
Detection
- URL: http://arxiv.org/abs/2207.11564v1
- Date: Sat, 23 Jul 2022 17:56:01 GMT
- Title: A general-purpose method for applying Explainable AI for Anomaly
Detection
- Authors: John Sipple and Abdou Youssef
- Abstract summary: The need for explainable AI (XAI) is well established but relatively little has been published outside of the supervised learning paradigm.
This paper focuses on a principled approach to applying explainability and interpretability to the task of unsupervised anomaly detection.
- Score: 6.09170287691728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need for explainable AI (XAI) is well established but relatively little
has been published outside of the supervised learning paradigm. This paper
focuses on a principled approach to applying explainability and
interpretability to the task of unsupervised anomaly detection. We argue that
explainability is principally an algorithmic task and interpretability is
principally a cognitive task, and draw on insights from the cognitive sciences
to propose a general-purpose method for practical diagnosis using explained
anomalies. We define Attribution Error, and demonstrate, using real-world
labeled datasets, that our method based on Integrated Gradients (IG) yields
significantly lower attribution errors than alternative methods.
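To make the attribution step concrete, here is a minimal sketch of Integrated Gradients (IG) for a scalar anomaly score. The toy scorer, the all-zeros "normal" baseline, and the finite-difference gradients are illustrative assumptions, not the authors' implementation.

```python
# Minimal Integrated Gradients sketch for an anomaly scorer.
# Assumptions: a scalar score function and a "normal" baseline point.
import numpy as np

def anomaly_score(x: np.ndarray) -> float:
    # Placeholder scorer (squared distance from the origin); substitute
    # any differentiable anomaly score, e.g. a negative log-likelihood.
    return float(np.sum(x ** 2))

def numerical_grad(f, x, eps=1e-5):
    # Central-difference gradient of a scalar function f at x.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline, steps=64):
    # Midpoint-rule approximation of
    # IG_i(x) = (x_i - b_i) * \int_0^1 dF/dx_i(b + a*(x - b)) da
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.mean([numerical_grad(f, baseline + a * (x - baseline))
                     for a in alphas], axis=0)
    return (x - baseline) * grads

x = np.array([3.0, 0.1, -2.0])       # anomalous point
baseline = np.zeros_like(x)          # hypothetical "normal" reference
attr = integrated_gradients(anomaly_score, x, baseline)
print(attr)                          # per-feature attributions
```

By IG's completeness property the attributions sum to F(x) - F(baseline), which is what makes them convenient for ranking the features that drive an anomaly score.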
Related papers
- AcME-AD: Accelerated Model Explanations for Anomaly Detection [5.702288833888639]
AcME-AD is a model-agnostic, efficient solution for interpretability.
It offers local feature importance scores and a what-if analysis tool, shedding light on the factors contributing to each anomaly.
This paper elucidates AcME-AD's foundation, its benefits over existing methods, and validates its effectiveness with tests on both synthetic and real datasets.
arXiv Detail & Related papers (2024-03-02T16:11:58Z)
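As a rough illustration of the what-if analysis described in the AcME-AD entry above, the sketch below sweeps each feature over training-set quantiles and records how far the anomaly score can move; this is a generic perturbation scheme under an assumed quantile grid and scorer, not the published AcME-AD algorithm.

```python
# Hedged sketch of perturbation-based local feature importance,
# in the spirit of a what-if analysis; not the actual AcME-AD method.
import numpy as np

def what_if_importance(score_fn, x, X_train,
                       quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    # For each feature, vary it over training quantiles with the other
    # features held fixed, and report the spread of the anomaly score.
    base = score_fn(x)
    importance = np.zeros(x.size)
    for i in range(x.size):
        scores = []
        for q in quantiles:
            z = x.copy()
            z[i] = np.quantile(X_train[:, i], q)
            scores.append(score_fn(z))
        importance[i] = max(scores) - min(scores)
    return base, importance

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
score = lambda v: float(np.sum(v ** 2))          # placeholder scorer
base, imp = what_if_importance(score, np.array([4.0, 0.0, 0.0, 0.2]), X_train)
print(base, imp)                                 # per-feature score spreads
```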
- PUAD: Frustratingly Simple Method for Robust Anomaly Detection [0.0]
We argue that logical anomalies, such as the wrong number of objects, cannot be well represented by spatial feature maps.
We propose a method that incorporates a simple out-of-distribution detection method on the feature space against state-of-the-art reconstruction-based approaches.
Our method achieves state-of-the-art performance on the MVTec LOCO AD dataset.
arXiv Detail & Related papers (2024-02-23T06:57:31Z)
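One simple realization of out-of-distribution detection on a feature space, as summarized in the PUAD entry above, is a Mahalanobis distance to a Gaussian fitted on normal features; treat the following as a sketch, since the paper's exact formulation may differ.

```python
# Sketch: Mahalanobis-distance OOD scoring on pooled feature vectors.
import numpy as np

def fit_gaussian(features: np.ndarray):
    # features: (n_samples, d) vectors extracted from normal data.
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(f, mu, prec) -> float:
    d = f - mu
    return float(d @ prec @ d)                   # larger => more anomalous

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))                # stand-in for real features
mu, prec = fit_gaussian(train)
print(mahalanobis_score(rng.normal(size=8) + 4.0, mu, prec))
```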
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
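As a loose illustration of the token-pattern idea in the entry above, the snippet below contrasts token frequencies between erroneous and correct predictions. Premise itself mines token patterns with a principled objective, so this smoothed frequency ratio is only a hypothetical stand-in.

```python
# Hypothetical stand-in: rank tokens that are over-represented in
# misclassified examples relative to correctly classified ones.
from collections import Counter

def error_tokens(correct_texts, error_texts, min_count=5):
    c_ok = Counter(t for s in correct_texts for t in s.split())
    c_err = Counter(t for s in error_texts for t in s.split())
    n_ok = sum(c_ok.values()) or 1
    n_err = sum(c_err.values()) or 1
    # Smoothed relative rate; high values flag error-associated tokens.
    scored = {t: (c_err[t] / n_err) / ((c_ok[t] + 1) / n_ok)
              for t, n in c_err.items() if n >= min_count}
    return sorted(scored.items(), key=lambda kv: -kv[1])

ok = ["the movie was great fun", "a great cast"] * 5
err = ["not great not fun at all", "not my thing"] * 5
print(error_tokens(ok, err, min_count=3)[:3])    # "not" ranks high
```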
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Framing Algorithmic Recourse for Anomaly Detection [18.347886926848563]
We present an approach, Context preserving Algorithmic Recourse for Anomalies in Tabular data (CARAT).
CARAT uses a transformer-based encoder-decoder model to explain an anomaly by finding the features with low likelihood.
Semantically coherent counterfactuals are generated by modifying the highlighted features, using the overall context of features in the anomalous instance(s).
arXiv Detail & Related papers (2022-06-29T03:30:51Z)
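To give a flavor of algorithmic recourse for anomalies, as in the CARAT entry above, the hedged sketch below edits the features an explainer flags as most anomalous toward typical normal values until the detector accepts the point. CARAT instead generates semantically coherent edits with a transformer encoder-decoder, so the median imputation here is purely illustrative.

```python
# Sketch: greedy recourse by imputing flagged features with medians
# from normal data; a stand-in, not CARAT's generative procedure.
import numpy as np

def simple_recourse(x, feature_scores, X_normal, is_anomaly, max_edits=None):
    # feature_scores: higher = more anomalous (e.g., from an explainer).
    z = x.copy()
    order = np.argsort(-np.asarray(feature_scores))  # worst features first
    for k, i in enumerate(order[:max_edits], start=1):
        z[i] = np.median(X_normal[:, i])             # pull toward typical
        if not is_anomaly(z):
            return z, k                              # counterfactual found
    return z, len(order)

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(300, 3))
detect = lambda v: float(np.sum(v ** 2)) > 9.0       # toy detector
x = np.array([5.0, 0.0, 0.0])
blame = np.abs(x - X_normal.mean(axis=0))            # crude per-feature blame
print(simple_recourse(x, blame, X_normal, detect))
```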
- Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models [7.022948483613113]
Motivations for methods in explainable artificial intelligence (XAI) often include detecting, quantifying and mitigating bias.
In this paper, we briefly review trends in explainability and fairness in NLP research, identify the current practices in which explainability methods are applied to detect and mitigate bias, and investigate the barriers preventing XAI methods from being used more widely in tackling fairness issues.
arXiv Detail & Related papers (2022-06-08T15:09:04Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Integrating Prior Knowledge in Post-hoc Explanations [3.6066164404432883]
Post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model.
We propose a cost function that explicitly integrates prior knowledge into the interpretability objectives.
We then introduce Knowledge Integration in Counterfactual Explanation (KICE), a new interpretability method that optimizes this cost.
arXiv Detail & Related papers (2022-04-25T13:09:53Z)
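As a sketch of how prior knowledge can enter a counterfactual objective, in the spirit of the KICE entry above, the cost below adds a penalty for editing features that domain knowledge marks as immutable. The constraint encoding and the weight lam are assumptions for illustration, not the paper's formulation.

```python
# Sketch: counterfactual cost with a prior-knowledge penalty term.
import numpy as np

def counterfactual_cost(x, z, immutable_idx, lam=10.0):
    # Proximity term: prefer counterfactuals close to the original point.
    proximity = np.linalg.norm(z - x, ord=1)
    # Knowledge term: heavily penalize edits to features that domain
    # knowledge declares immutable (an assumed encoding of the prior).
    violation = np.abs(z - x)[immutable_idx].sum()
    return proximity + lam * violation
```

Minimizing such a cost subject to the model changing its decision yields explanations that stay consistent with the encoded knowledge.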
- Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs a semantic loss, which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z)
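Semantic loss has a standard closed form for simple constraints; the snippet below implements the common "exactly one label is true" case as the negative log-probability that independent Bernoulli predictions satisfy the constraint. The entity-relation constraints studied in the paper above are richer, so this is only a minimal instance of the idea.

```python
# Semantic loss for the "exactly one of k labels" constraint:
# -log P(sampled labels satisfy the constraint) under independent
# Bernoulli outputs with probabilities p.
import numpy as np

def semantic_loss_exactly_one(p: np.ndarray) -> float:
    satisfy = sum(p[i] * np.prod(np.delete(1.0 - p, i))
                  for i in range(p.size))
    return float(-np.log(satisfy + 1e-12))

print(semantic_loss_exactly_one(np.array([0.9, 0.05, 0.05])))  # low loss
print(semantic_loss_exactly_one(np.array([0.5, 0.5, 0.5])))    # higher loss
```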
- Disambiguation of weak supervision with exponential convergence rates [88.99819200562784]
In weakly supervised learning, data are annotated with incomplete yet discriminative information.
In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets.
We propose an empirical disambiguation algorithm to recover full supervision from weak supervision.
arXiv Detail & Related papers (2021-02-04T18:14:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.