AR-Pro: Counterfactual Explanations for Anomaly Repair with Formal Properties
- URL: http://arxiv.org/abs/2410.24178v1
- Date: Thu, 31 Oct 2024 17:43:53 GMT
- Title: AR-Pro: Counterfactual Explanations for Anomaly Repair with Formal Properties
- Authors: Xiayan Ji, Anton Xue, Eric Wong, Oleg Sokolsky, Insup Lee
 - Abstract summary: Anomaly detection is widely used for identifying critical errors and suspicious behaviors, but current methods lack interpretability.
We leverage common properties of existing methods to introduce counterfactual explanations for anomaly detection.
A key advantage of this approach is that it enables a domain-independent formal specification of explainability desiderata.
 - Score: 12.71326587869053
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract:   Anomaly detection is widely used for identifying critical errors and suspicious behaviors, but current methods lack interpretability. We leverage common properties of existing methods and recent advances in generative models to introduce counterfactual explanations for anomaly detection. Given an input, we generate its counterfactual as a diffusion-based repair that shows what a non-anomalous version should have looked like. A key advantage of this approach is that it enables a domain-independent formal specification of explainability desiderata, offering a unified framework for generating and evaluating explanations. We demonstrate the effectiveness of our anomaly explainability framework, AR-Pro, on vision (MVTec, VisA) and time-series (SWaT, WADI, HAI) anomaly datasets. The code used for the experiments is accessible at: https://github.com/xjiae/arpro. 
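To make the diffusion-based repair concrete, here is a minimal, self-contained sketch of guided denoising toward a lower anomaly score. The `ToyDenoiser`, `ToyScorer`, and the single-step update rule are illustrative stand-ins, not AR-Pro's actual models or specifications; the paper's real implementation is in the linked repository.

```python
# Sketch: repair an anomalous input by partially noising it, then denoising
# while steering each step toward a lower anomaly score. Toy models only.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a pretrained diffusion denoiser eps_theta(x_t, t)."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x_t, t):
        t_feat = torch.full((x_t.shape[0], 1), float(t))
        return self.net(torch.cat([x_t, t_feat], dim=-1))

class ToyScorer(nn.Module):
    """Stand-in for a differentiable anomaly scorer; lower means more normal."""
    def forward(self, x):
        return (x ** 2).sum(dim=-1)

def repair(x_anom, denoiser, scorer, steps=50, noise_level=0.5, guide_scale=0.05):
    """Partially noise the input, then denoise with anomaly-score guidance."""
    x = x_anom + noise_level * torch.randn_like(x_anom)  # forward-noise the anomaly
    for t in reversed(range(steps)):
        x = x.detach().requires_grad_(True)
        x_denoised = x - denoiser(x, t / steps)          # one (toy) denoising step
        grad = torch.autograd.grad(scorer(x_denoised).sum(), x)[0]
        x = x_denoised - guide_scale * grad              # steer toward normality
    return x.detach()

x_anom = torch.randn(4, 32) + 3.0                        # pretend-anomalous inputs
x_repaired = repair(x_anom, ToyDenoiser(), ToyScorer())
print(ToyScorer()(x_anom).mean().item(), ">", ToyScorer()(x_repaired).mean().item())
```

The property this sketch preserves is that the repair is produced by the generative model while being explicitly steered by the detector's own score, which is what lets the output be read as "what a non-anomalous version should have looked like."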
Related papers
        - Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv  Detail & Related papers  (2024-10-08T17:59:03Z)
- PARs: Predicate-based Association Rules for Efficient and Accurate Model-Agnostic Anomaly Explanation [2.280762565226767]
We present a novel approach for efficient and accurate model-agnostic anomaly explanation using Predicate-based Association Rules (PARs).
Our user study indicates that the anomaly explanation form of PARs is better comprehended and preferred by regular users of anomaly detection systems.
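Since this explanation form is just a conjunction of predicates over feature values, it can be sketched in a few lines. The rules, feature names, and thresholds below are hand-written hypotheticals; PARs mines such rules automatically, which this sketch does not attempt.

```python
# Sketch: represent anomaly-explanation rules as (feature, op, threshold)
# predicates and report which rules an anomalous record triggers.
import operator

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def satisfies(record, predicate):
    feature, op, threshold = predicate
    return OPS[op](record[feature], threshold)

def explain(record, rules):
    """Return every rule whose predicates the record fully satisfies."""
    return [rule for rule in rules if all(satisfies(record, p) for p in rule)]

# Hypothetical rules over sensor readings (feature names are made up).
rules = [
    [("pressure", ">", 8.0), ("flow", "<", 0.2)],   # blocked-pipe pattern
    [("temperature", ">", 95.0)],                   # overheating pattern
]
record = {"pressure": 9.1, "flow": 0.05, "temperature": 70.0}
for rule in explain(record, rules):
    print("anomalous because:", " AND ".join(f"{f} {op} {t}" for f, op, t in rule))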
arXiv  Detail & Related papers  (2023-12-18T06:45:31Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the one-class classification (OCC) setting.
Our method performs on par with existing state-of-the-art PA-generation and reconstruction-based methods under the OCC setting.
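A rough sketch of the pseudo-anomaly recipe: mask a region of a normal frame and fill it with foreign content. Here a patch from a donor image stands in for the paper's learned inpainting; the shapes and patch logic are assumptions for illustration.

```python
# Sketch: build a pseudo-anomalous frame by pasting donor content into a
# randomly chosen masked region of a normal frame.
import numpy as np

def make_pseudo_anomaly(image, donor, size, rng):
    """Paste a donor patch into a random region of `image` (H, W, C arrays)."""
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - size))
    left = int(rng.integers(0, w - size))
    pa = image.copy()
    pa[top:top + size, left:left + size] = donor[top:top + size, left:left + size]
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + size, left:left + size] = True
    return pa, mask  # pseudo-anomalous frame plus its ground-truth mask

rng = np.random.default_rng(0)
normal_frame = rng.random((64, 64, 3))   # stand-in for a normal video frame
donor_frame = rng.random((64, 64, 3))    # source of "anomalous" content
pa, mask = make_pseudo_anomaly(normal_frame, donor_frame, size=12, rng=rng)
print("pseudo-anomalous pixels:", int(mask.sum()))
```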
arXiv  Detail & Related papers  (2023-11-27T13:14:06Z)
- Don't Miss Out on Novelty: Importance of Novel Features for Deep Anomaly Detection [64.21963650519312]
Anomaly Detection (AD) is a critical task that involves identifying observations that do not conform to a learned model of normality.
We propose a novel approach to AD that uses explainability to capture novel features as unexplained observations in the input space.
Our approach establishes a new state-of-the-art across multiple benchmarks, handling diverse anomaly types.
arXiv  Detail & Related papers  (2023-10-01T21:24:05Z)
- Explanation Method for Anomaly Detection on Mixed Numerical and Categorical Spaces [0.9543943371833464]
We present EADMNC (Explainable Anomaly Detection on Mixed Numerical and Categorical spaces).
It adds explainability to the predictions obtained with the original model.
We report experimental results on extensive real-world data, particularly in the domain of network intrusion detection.
arXiv  Detail & Related papers  (2022-09-09T08:20:13Z)
- Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning [64.78972193105443]
This paper presents a novel adversarial example (AE) detection framework for trustworthy predictions.
The framework performs detection by distinguishing an AE's abnormal relations with its augmented versions.
An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the label.
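A rough sketch of that detection recipe, with a toy encoder and noise-jitter augmentation standing in for the off-the-shelf SSL model and its augmentation pipeline; the 0.9 threshold is an arbitrary assumption.

```python
# Sketch: a benign input should stay representation-close to its own augmented
# views, while an adversarial example tends to drift away from them.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 16))  # stand-in SSL encoder

def augment(x, strength=0.1):
    return x + strength * torch.randn_like(x)  # toy augmentation (noise jitter)

def neighborhood_score(x, n_views=8):
    """Mean cosine similarity between x's embedding and its augmented views."""
    z = F.normalize(encoder(x), dim=-1)
    views = torch.stack(
        [F.normalize(encoder(augment(x)), dim=-1) for _ in range(n_views)]
    )
    return (views * z).sum(dim=-1).mean(dim=0)

x = torch.randn(4, 32)
flagged = neighborhood_score(x) < 0.9   # low similarity -> suspected AE
print(flagged)
```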
arXiv  Detail & Related papers  (2022-08-31T08:18:44Z)
- Framing Algorithmic Recourse for Anomaly Detection [18.347886926848563]
We present Context-preserving Algorithmic Recourse for Anomalies in Tabular data (CARAT).
CARAT uses a transformer-based encoder-decoder model to explain an anomaly by finding features with low likelihood.
Semantically coherent counterfactuals are generated by modifying the highlighted features, using the overall context of features in the anomalous instance(s).
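A minimal sketch of that recourse pattern, with a per-feature Gaussian standing in for the transformer encoder-decoder: score each feature's likelihood, flag the least plausible ones, and edit only those.

```python
# Sketch: explain an anomalous record by its least-likely features, then form
# a counterfactual by editing only those features back toward normality.
import numpy as np

def counterfactual(x, train, k=2):
    """Edit the k least-likely features of x back toward the training mean."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0) + 1e-8
    loglik = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)  # per-feature Gaussian
    worst = np.argsort(loglik)[:k]          # indices of least plausible features
    x_cf = x.copy()
    x_cf[worst] = mu[worst]                 # minimal edit to the flagged features
    return x_cf, worst

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 5))           # hypothetical normal training data
x = train[0].copy()
x[3] = 8.0                                  # inject an implausible feature value
x_cf, edited = counterfactual(x, train)
print("edited features:", edited)
```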
arXiv  Detail & Related papers  (2022-06-29T03:30:51Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
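As a flavor of gradient-free counterfactual search (a plain random local search, not MACE's RL formulation), consider the sketch below; the toy model and step size are made up.

```python
# Sketch: propose random edits, keep those that flip the model's prediction
# (validity), and greedily shrink the distance to the input (proximity).
import numpy as np

def gradient_free_cf(x, predict, steps=2000, step_size=1.0, seed=0):
    """Random local search: keep label-flipping proposals that are closer to x."""
    rng = np.random.default_rng(seed)
    best, cand = None, x.copy()
    for _ in range(steps):
        proposal = cand + step_size * rng.normal(size=x.shape)
        if predict(proposal) != predict(x):                      # validity
            if best is None or np.linalg.norm(proposal - x) < np.linalg.norm(best - x):
                best = proposal                                  # proximity
            cand = (best + x) / 2        # contract toward the input and retry
    return best

predict = lambda v: int(v.sum() > 3.0)   # toy binary model (an assumption)
x = np.zeros(4)
cf = gradient_free_cf(x, predict)
print(cf, None if cf is None else predict(cf))
```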
arXiv  Detail & Related papers  (2022-05-31T04:57:06Z)
- Diverse Counterfactual Explanations for Anomaly Detection in Time Series [26.88575131193757]
We propose a model-agnostic algorithm that generates counterfactual ensemble explanations for time series anomaly detection models.
Our method generates a set of diverse counterfactual examples, i.e., multiple versions of the original time series that are not considered anomalous by the detection model.
Our algorithm is applicable to any differentiable anomaly detection model.
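Since the method only requires a differentiable detector, the core optimization can be sketched directly: descend on an input perturbation that lowers the anomaly score while staying close to the original, with diversity coming from random restarts. The toy detector below is an assumption, not any paper's model.

```python
# Sketch: gradient-based counterfactuals for a differentiable time-series
# anomaly detector, with random restarts for diversity.
import torch

def detector(x):
    """Toy differentiable detector: magnitudes above 1.0 look anomalous."""
    return torch.relu(x.abs() - 1.0).sum()

def counterfactuals(x, n=3, steps=200, lr=0.05, lam=0.1, seed=0):
    """Optimize a perturbation to be normal yet close; restart n times."""
    torch.manual_seed(seed)
    results = []
    for _ in range(n):
        delta = (0.1 * torch.randn_like(x)).requires_grad_(True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            loss = detector(x + delta) + lam * delta.norm()  # score + proximity
            opt.zero_grad()
            loss.backward()
            opt.step()
        results.append((x + delta).detach())
    return results

series = torch.zeros(50)
series[20] = 4.0                       # a single anomalous spike
for cf in counterfactuals(series):
    print("counterfactual score:", detector(cf).item())
```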
arXiv  Detail & Related papers  (2022-03-21T16:30:34Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv  Detail & Related papers  (2020-08-02T11:19:36Z)
- Interpreting Rate-Distortion of Variational Autoencoder and Using Model Uncertainty for Anomaly Detection [5.491655566898372]
We build a scalable machine learning system for unsupervised anomaly detection via representation learning.
We revisit the VAE from the perspective of information theory to provide some theoretical foundation for using the reconstruction error.
We show empirically the competitive performance of our approach on benchmark datasets.
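The reconstruction-error scoring this summary refers to can be sketched in a few lines; the tiny untrained VAE below is illustrative only and would be fit on normal data in practice.

```python
# Sketch: score inputs by how poorly a VAE reconstructs them; high
# reconstruction error suggests an anomaly.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, dim=32, latent=8):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)   # outputs mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(z)

def anomaly_score(vae, x):
    return ((vae(x) - x) ** 2).mean(dim=-1)     # per-sample reconstruction error

vae = TinyVAE()
x = torch.randn(4, 32)
print(anomaly_score(vae, x))
```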
arXiv  Detail & Related papers  (2020-05-05T00:03:48Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.