AcME-AD: Accelerated Model Explanations for Anomaly Detection
- URL: http://arxiv.org/abs/2403.01245v1
- Date: Sat, 2 Mar 2024 16:11:58 GMT
- Title: AcME-AD: Accelerated Model Explanations for Anomaly Detection
- Authors: Valentina Zaccaria, David Dandolo, Chiara Masiero, Gian Antonio Susto
- Abstract summary: AcME-AD is a model-agnostic, efficient solution for interpretability.
It offers local feature importance scores and a what-if analysis tool, shedding light on the factors contributing to each anomaly.
This paper elucidates AcME-AD's foundation, its benefits over existing methods, and validates its effectiveness with tests on both synthetic and real datasets.
- Score: 5.702288833888639
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pursuing fast and robust interpretability in Anomaly Detection is crucial,
especially due to its significance in practical applications. Traditional
Anomaly Detection methods excel in outlier identification but are often
black-boxes, providing scant insights into their decision-making process. This
lack of transparency compromises their reliability and hampers their adoption
in scenarios where comprehending the reasons behind anomaly detection is vital.
At the same time, getting explanations quickly is paramount in practical
scenarios. To bridge this gap, we present AcME-AD, a novel approach rooted in
Explainable Artificial Intelligence principles, designed to clarify Anomaly
Detection models for tabular data. AcME-AD transcends the constraints of
model-specific or resource-heavy explainability techniques by delivering a
model-agnostic, efficient solution for interpretability. It offers local
feature importance scores and a what-if analysis tool, shedding light on the
factors contributing to each anomaly, thus aiding root cause analysis and
decision-making. This paper elucidates AcME-AD's foundation, its benefits over
existing methods, and validates its effectiveness with tests on both synthetic
and real datasets. AcME-AD's implementation and experiment replication code are
accessible in a public repository.
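AcME-AD's exact algorithm is described in the paper and its public repository; purely as an illustration of the general idea it builds on (model-agnostic, perturbation-based local feature importance for an anomaly detector), a sketch might look like the following. The `local_importance` helper, the quantile-perturbation scheme, and the use of scikit-learn's `IsolationForest` are all assumptions for this sketch, not AcME-AD's actual procedure.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy data: a 2D Gaussian cluster plus one point that is
# anomalous only along feature 0.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
outlier = np.array([[8.0, 0.0]])

model = IsolationForest(random_state=0).fit(X)

def local_importance(model, X_ref, x, n_quantiles=5):
    """Perturbation-based local importance: for each feature, replace the
    anomalous point's value with quantiles of the reference data and
    measure how much the anomaly score moves toward normality."""
    base = model.score_samples(x)[0]  # higher score = more normal
    importances = np.zeros(x.shape[1])
    qs = np.linspace(0.1, 0.9, n_quantiles)
    for j in range(x.shape[1]):
        trial = np.repeat(x, n_quantiles, axis=0)
        trial[:, j] = np.quantile(X_ref[:, j], qs)
        # Importance = best improvement in normality when only feature j changes.
        importances[j] = model.score_samples(trial).max() - base
    return importances

imp = local_importance(model, X, outlier)
print(imp)  # feature 0 should dominate, since it alone makes the point anomalous
```

The same perturb-and-rescore loop doubles as a crude what-if tool: each perturbed row answers "would this point still look anomalous if feature j took a typical value?".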
Related papers
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- Word-Level ASR Quality Estimation for Efficient Corpus Sampling and Post-Editing through Analyzing Attentions of a Reference-Free Metric [5.592917884093537]
The potential of quality estimation (QE) metrics is introduced and evaluated as a novel tool to enhance explainable artificial intelligence (XAI) in ASR systems.
The capabilities of the NoRefER metric are explored in identifying word-level errors to aid post-editors in refining ASR hypotheses.
arXiv Detail & Related papers (2024-01-20T16:48:55Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [85.1927483219819]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which consistently demonstrates robust performance with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Causality-Aware Local Interpretable Model-Agnostic Explanations [7.412445894287709]
We propose a novel extension to a widely used local and model-agnostic explainer, which encodes explicit causal relationships within the data surrounding the instance being explained.
Our approach overcomes the original method in terms of faithfully replicating the black-box model's mechanism and the consistency and reliability of the generated explanations.
arXiv Detail & Related papers (2022-12-10T10:12:27Z)
- Active Learning-based Isolation Forest (ALIF): Enhancing Anomaly Detection in Decision Support Systems [2.922007656878633]
ALIF is a lightweight modification of the popular Isolation Forest that shows superior performance compared to other state-of-the-art algorithms.
The proposed approach is particularly appealing in the presence of a Decision Support System (DSS), a case that is increasingly popular in real-world scenarios.
arXiv Detail & Related papers (2022-07-08T14:36:38Z)
- Data-Efficient and Interpretable Tabular Anomaly Detection [54.15249463477813]
We propose a novel framework that adapts a white-box model class, Generalized Additive Models, to detect anomalies.
In addition, the proposed framework, DIAD, can incorporate a small amount of labeled data to further boost anomaly detection performances in semi-supervised settings.
arXiv Detail & Related papers (2022-03-03T22:02:56Z)
- MUC-driven Feature Importance Measurement and Adversarial Analysis for Random Forest [1.5896078006029473]
We leverage formal methods and logical reasoning to develop a novel model-specific method for explaining the predictions of Random Forest (RF).
Our approach is centered around Minimal Unsatisfiable Cores (MUC) and provides a comprehensive solution for feature importance, covering local and global aspects, and adversarial sample analysis.
Our method can produce a user-centered report, which helps provide recommendations in real-life applications.
arXiv Detail & Related papers (2022-02-25T06:15:47Z)
- A Simple Information-Based Approach to Unsupervised Domain-Adaptive Aspect-Based Sentiment Analysis [58.124424775536326]
We propose a simple but effective technique based on mutual information to extract aspect terms.
Experiment results show that our proposed method outperforms the state-of-the-art methods for cross-domain ABSA by 4.32% Micro-F1.
arXiv Detail & Related papers (2022-01-29T10:18:07Z)
- A new interpretable unsupervised anomaly detection method based on residual explanation [47.187609203210705]
We present RXP, a new interpretability method that addresses the limitations of AE-based AD in large-scale systems.
It stands out for its implementation simplicity, low computational cost and deterministic behavior.
In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP.
arXiv Detail & Related papers (2021-03-14T15:35:45Z)
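RXP's exact formulation is given in the paper above; the general idea behind residual explanation (attributing a reconstruction-based anomaly score to per-feature reconstruction residuals) can be sketched as follows. The use of PCA as a stand-in for the autoencoder, and the normalization of residuals into attributions, are simplifying assumptions for this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA

# Correlated 3D data: feature 2 is a noisy copy of feature 0.
rng = np.random.default_rng(1)
base = rng.normal(size=(300, 1))
X = np.hstack([base,
               rng.normal(size=(300, 1)),
               base + 0.05 * rng.normal(size=(300, 1))])

# A 2-component PCA reconstruction stands in for an autoencoder.
pca = PCA(n_components=2).fit(X)

x = np.array([[0.0, 0.0, 5.0]])  # breaks the feature 0 / feature 2 correlation
x_hat = pca.inverse_transform(pca.transform(x))

residual = np.abs(x - x_hat)[0]      # per-feature reconstruction error
scores = residual / residual.sum()   # normalized feature attribution
print(scores)  # the features whose correlation is broken should dominate
```

Because the explanation is just arithmetic on the residual vector, it is cheap and deterministic, which matches the properties the RXP summary highlights.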
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.