AcME-AD: Accelerated Model Explanations for Anomaly Detection
- URL: http://arxiv.org/abs/2403.01245v1
- Date: Sat, 2 Mar 2024 16:11:58 GMT
- Title: AcME-AD: Accelerated Model Explanations for Anomaly Detection
- Authors: Valentina Zaccaria, David Dandolo, Chiara Masiero, Gian Antonio Susto
- Abstract summary: AcME-AD is a model-agnostic, efficient solution for interpretability.
It offers local feature importance scores and a what-if analysis tool, shedding light on the factors contributing to each anomaly.
This paper elucidates AcME-AD's foundation, its benefits over existing methods, and validates its effectiveness with tests on both synthetic and real datasets.
- Score: 5.702288833888639
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pursuing fast and robust interpretability in Anomaly Detection is crucial,
especially due to its significance in practical applications. Traditional
Anomaly Detection methods excel in outlier identification but are often
black-boxes, providing scant insights into their decision-making process. This
lack of transparency compromises their reliability and hampers their adoption
in scenarios where comprehending the reasons behind anomaly detection is vital.
At the same time, getting explanations quickly is paramount in practical
scenarios. To bridge this gap, we present AcME-AD, a novel approach rooted in
Explainable Artificial Intelligence principles, designed to clarify Anomaly
Detection models for tabular data. AcME-AD transcends the constraints of
model-specific or resource-heavy explainability techniques by delivering a
model-agnostic, efficient solution for interpretability. It offers local
feature importance scores and a what-if analysis tool, shedding light on the
factors contributing to each anomaly, thus aiding root cause analysis and
decision-making. This paper elucidates AcME-AD's foundation, its benefits over
existing methods, and validates its effectiveness with tests on both synthetic
and real datasets. AcME-AD's implementation and experiment replication code
are accessible in a public repository.
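As a concrete illustration of the abstract's two ingredients (local feature importance and what-if analysis), here is a minimal sketch of a quantile-based perturbation loop in the spirit of AcME-AD. It is not the authors' implementation (that lives in their public repository); the IsolationForest detector, the quantile grid, and every function name below are illustrative assumptions.

```python
# Illustrative sketch only: a quantile-perturbation what-if loop in the spirit
# of AcME-AD, not the authors' implementation (see their public repository).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
x = np.array([0.0, 0.0, 6.0, 0.0])            # anomalous point: feature 2 is off

detector = IsolationForest(random_state=0).fit(X_train)

def score(X):
    return -detector.score_samples(X)         # higher = more anomalous

# Empirical quantiles of each feature define the what-if grid.
quantiles = np.quantile(X_train, [0.05, 0.25, 0.5, 0.75, 0.95], axis=0)

def what_if(x, feature, values):
    """Anomaly scores obtained by sweeping one feature over candidate values."""
    X_mod = np.tile(x, (len(values), 1))
    X_mod[:, feature] = values
    return score(X_mod)

base = score(x[None, :])[0]
for j in range(x.shape[0]):
    sweep = what_if(x, j, quantiles[:, j])
    # Importance proxy: how far the score moves when only feature j changes.
    print(f"feature {j}: base={base:.3f}, "
          f"sweep range=[{sweep.min():.3f}, {sweep.max():.3f}]")
```

In this toy example the sweep over feature 2 moves the score far more than the others; AcME-AD aggregates this kind of per-feature evidence into importance scores and what-if plots, and its efficiency comes from needing only a small grid of perturbations per feature, evaluated in batch.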
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- MeLIAD: Interpretable Few-Shot Anomaly Detection with Metric Learning and Entropy-based Scoring [2.394081903745099]
We propose MeLIAD, a novel methodology for interpretable anomaly detection.
MeLIAD is based on metric learning and achieves interpretability by design without relying on any prior distribution assumptions of true anomalies.
Experiments on five public benchmark datasets, including quantitative and qualitative evaluation of interpretability, demonstrate that MeLIAD achieves improved anomaly detection and localization performance.
arXiv Detail & Related papers (2024-09-20T16:01:43Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks, in which an adversary duplicates the target model using only query access.
We introduce three model stealing attacks adapted to different real-world scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which consistently demonstrates robust performance even with simple and cheap anomaly synthesis strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Enhancing Interpretability and Generalizability in Extended Isolation Forests [5.139809663513828]
Extended Isolation Forest Feature Importance (ExIFFI) is a method that explains predictions made by Extended Isolation Forest (EIF) models.
EIF+ is designed to enhance the model's ability to detect unseen anomalies through a revised splitting strategy.
ExIFFI outperforms other unsupervised interpretability methods on 8 of 11 real-world datasets.
arXiv Detail & Related papers (2023-10-09T07:24:04Z)
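ExIFFI itself computes importances from the split structure inside the (Extended) Isolation Forest; reproducing that is beyond a short snippet, so the sketch below answers the same question ("which features drive the anomaly scores?") with a plain permutation-importance proxy on a standard IsolationForest. Everything in it is an illustrative assumption, not the paper's algorithm.

```python
# Permutation-importance proxy for an isolation forest's anomaly scores.
# NOT ExIFFI: ExIFFI reads importances off the internal splits of an
# Extended Isolation Forest; this is a generic stand-in for the same question.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
X[:20, 3] += 8.0                        # plant anomalies along feature 3

forest = IsolationForest(random_state=1).fit(X)
base_scores = -forest.score_samples(X)  # higher = more anomalous

importance = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break feature j
    perm_scores = -forest.score_samples(X_perm)
    # How much the scores change when feature j carries no information:
    importance.append(np.mean(np.abs(base_scores - perm_scores)))

print(np.round(importance, 3))          # feature 3 should dominate
```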
- Causality-Aware Local Interpretable Model-Agnostic Explanations [7.412445894287709]
We propose a novel extension to LIME, a widely used local and model-agnostic explainer; the extension encodes explicit causal relationships within the data surrounding the instance being explained.
Our approach outperforms the original method in faithfully replicating the black-box model's mechanism and in the consistency and reliability of the generated explanations.
arXiv Detail & Related papers (2022-12-10T10:12:27Z)
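The explainer being extended above is LIME. The sketch below shows only the vanilla LIME recipe (perturb around the instance, query the black box, fit a proximity-weighted linear surrogate); the paper's contribution, generating the neighbours in a causality-aware way, is deliberately left out. The anomaly-detector black box is an assumption for self-containment.

```python
# Vanilla LIME-style local surrogate. The paper extends this recipe by
# generating the neighbourhood Z according to causal relationships in the
# data; that extension is omitted here.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X_train = rng.normal(size=(300, 3))
black_box = IsolationForest(random_state=2).fit(X_train)
x = np.array([4.0, 0.0, 0.0])                    # instance to explain

Z = x + 0.5 * rng.normal(size=(200, 3))          # 1) perturb around x
y = -black_box.score_samples(Z)                  # 2) query the black box
w = np.exp(-np.sum((Z - x) ** 2, axis=1))        # 3) weight by proximity to x
surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)  # 4) linear surrogate

print(surrogate.coef_)   # local per-feature effect on the anomaly score
```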
- Data-Efficient and Interpretable Tabular Anomaly Detection [54.15249463477813]
We propose a novel framework that adapts a white-box model class, Generalized Additive Models, to detect anomalies.
In addition, the proposed framework, DIAD, can incorporate a small amount of labeled data to further boost anomaly detection performance in semi-supervised settings.
arXiv Detail & Related papers (2022-03-03T22:02:56Z)
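DIAD's white-box detector is built on Generalized Additive Models. The toy below illustrates only the additive property that makes such detectors interpretable: the total score is a sum of per-feature terms, so each feature's contribution can be read off directly. The per-feature histogram densities are a stand-in, not DIAD's training procedure.

```python
# Toy additive (GAM-like) anomaly score: total = sum of per-feature terms,
# so each feature's contribution is directly readable. A stand-in for the
# additive structure DIAD uses, not its actual training procedure.
import numpy as np

rng = np.random.default_rng(3)
X_train = rng.normal(size=(1000, 3))

def fit_hist(col, bins=20):
    """1-D histogram density model for a single feature."""
    density, edges = np.histogram(col, bins=bins, density=True)
    return density, edges

models = [fit_hist(X_train[:, j]) for j in range(X_train.shape[1])]

def per_feature_scores(x, eps=1e-6):
    """Negative log density of each feature value (larger = more unusual)."""
    out = []
    for (density, edges), v in zip(models, x):
        idx = int(np.clip(np.searchsorted(edges, v) - 1, 0, len(density) - 1))
        out.append(-np.log(density[idx] + eps))
    return np.array(out)

x = np.array([0.1, 5.0, -0.2])                # feature 1 is anomalous
contrib = per_feature_scores(x)
print(np.round(contrib, 2), "total:", round(contrib.sum(), 2))
```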
- MUC-driven Feature Importance Measurement and Adversarial Analysis for Random Forest [1.5896078006029473]
We leverage formal methods and logical reasoning to develop a novel model-specific method for explaining the predictions of Random Forest (RF).
Our approach is centered around Minimal Unsatisfiable Cores (MUC) and provides a comprehensive solution for feature importance, covering local and global aspects, and adversarial sample analysis.
Our method can produce a user-centered report, which helps provide recommendations in real-life applications.
arXiv Detail & Related papers (2022-02-25T06:15:47Z)
- A Simple Information-Based Approach to Unsupervised Domain-Adaptive Aspect-Based Sentiment Analysis [58.124424775536326]
We propose a simple but effective technique based on mutual information to extract aspect terms.
Experiment results show that our proposed method outperforms the state-of-the-art methods for cross-domain ABSA by 4.32% Micro-F1.
arXiv Detail & Related papers (2022-01-29T10:18:07Z)
- A new interpretable unsupervised anomaly detection method based on residual explanation [47.187609203210705]
We present RXP, a new interpretability method that addresses the limitations of autoencoder-based anomaly detection (AE-based AD) in large-scale systems.
It stands out for its implementation simplicity, low computational cost and deterministic behavior.
In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP.
arXiv Detail & Related papers (2021-03-14T15:35:45Z)
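RXP's central observation is that for reconstruction-based detectors, the per-feature residual is itself an explanation: the features the model fails to reconstruct are the ones to blame. The sketch below uses a PCA reconstruction as a stand-in for the autoencoder and skips the paper's exact aggregation.

```python
# Residual explanation in the spirit of RXP: attribute an anomaly to the
# features with the largest reconstruction error. PCA stands in for the
# autoencoder; the paper's exact scoring/aggregation is not reproduced.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 6))
X_train = latent @ mixing + 0.05 * rng.normal(size=(500, 6))

pca = PCA(n_components=2).fit(X_train)

def explain(x):
    """Per-feature squared reconstruction residuals (larger = more suspect)."""
    x_hat = pca.inverse_transform(pca.transform(x[None, :]))[0]
    return (x - x_hat) ** 2

x = X_train[0].copy()
x[4] += 5.0                                  # corrupt feature 4
print(np.round(explain(x), 3))               # residual should peak at feature 4
```

Like applying SHAP to the same detector, this yields a per-feature attribution, but it requires no extra model evaluations, which is the simplicity, low cost, and determinism the entry above highlights.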