Transparent Anomaly Detection via Concept-based Explanations
- URL: http://arxiv.org/abs/2310.10702v2
- Date: Wed, 1 Nov 2023 09:56:52 GMT
- Title: Transparent Anomaly Detection via Concept-based Explanations
- Authors: Laya Rafiee Sevyeri, Ivaxi Sheth, Farhood Farahnak, Samira Ebrahimi Kahou, Shirin Abbasinejad Enger
- Abstract summary: We propose Transparent Anomaly Detection Concept Explanations (ACE) for anomaly detection.
ACE provides human-interpretable explanations in the form of concepts along with the anomaly prediction.
Our proposed model achieves results higher than or comparable to black-box, uninterpretable models.
- Score: 4.3900160011634055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancements in deep learning techniques have given a boost to the
performance of anomaly detection. However, real-world and safety-critical
applications demand a level of transparency and reasoning beyond accuracy. The
task of anomaly detection (AD) focuses on finding whether a given sample
follows the learned distribution. Existing methods lack the ability to reason
with clear explanations for their outcomes. To overcome this challenge, we
propose Transparent Anomaly Detection Concept Explanations (ACE). ACE
provides human-interpretable explanations in the form of concepts
along with anomaly prediction. To the best of our knowledge, this is the first
paper that proposes interpretable by-design anomaly detection. In addition to
promoting transparency in AD, it allows for effective human-model interaction.
Our proposed model achieves results that are higher than or comparable to those
of black-box, uninterpretable models. We validate the performance of ACE across three
realistic datasets - bird classification on CUB-200-2011, challenging
histopathology slide image classification on TIL-WSI-TCGA, and gender
classification on CelebA. We further demonstrate that our concept learning
paradigm can be seamlessly integrated with other classification-based AD
methods.
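
As a rough illustration of the concept-bottleneck style design the abstract describes, the sketch below shows a detector that first predicts interpretable concept scores and then derives the anomaly score from those concepts alone. This is only a minimal sketch under assumed details: the module names, dimensions, and toy backbone are hypothetical and do not reflect the authors' actual ACE implementation.

```python
import torch
import torch.nn as nn

class ConceptBottleneckAD(nn.Module):
    """Hypothetical sketch of a concept-based anomaly detector:
    a backbone maps the input to features, a concept head maps features
    to human-readable concept scores, and a small anomaly head scores
    the sample from the concepts only, so the prediction can be traced
    back to the intermediate concept activations."""

    def __init__(self, backbone: nn.Module, feat_dim: int, n_concepts: int):
        super().__init__()
        self.backbone = backbone                      # any feature extractor
        self.concept_head = nn.Linear(feat_dim, n_concepts)
        self.anomaly_head = nn.Linear(n_concepts, 1)  # score computed from concepts only

    def forward(self, x):
        feats = self.backbone(x)
        concepts = torch.sigmoid(self.concept_head(feats))  # interpretable concept scores in [0, 1]
        anomaly_logit = self.anomaly_head(concepts)
        return concepts, anomaly_logit

# Example usage with a toy backbone (assumed input shapes, for illustration only)
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())
model = ConceptBottleneckAD(backbone, feat_dim=128, n_concepts=10)
x = torch.randn(4, 3, 64, 64)
concepts, anomaly_logit = model(x)
print(concepts.shape, anomaly_logit.shape)  # torch.Size([4, 10]) torch.Size([4, 1])
```

In such a design, training would jointly supervise the concept scores (where concept annotations exist) and the anomaly output; because the concept layer sits between an arbitrary backbone and the final score, it is also the kind of component that could, in principle, be attached to other classification-based AD methods, as the abstract suggests.
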
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z) - Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), by fully utilizing the potential of the pre-trained stable diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
arXiv Detail & Related papers (2024-10-04T01:52:23Z) - Explainable Image Recognition via Enhanced Slot-attention Based Classifier [28.259040737540797]
We introduce ESCOUTER, a visually explainable classifier based on the modified slot attention mechanism.
ESCOUTER distinguishes itself by not only delivering high classification accuracy but also offering more transparent insights into the reasoning behind its decisions.
A novel loss function specifically for ESCOUTER is designed to fine-tune the model's behavior, enabling it to toggle between positive and negative explanations.
arXiv Detail & Related papers (2024-07-08T05:05:43Z) - Bridging Generative and Discriminative Models for Unified Visual
Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil potential characteristics of Vermouth, such as varying granularity of perception concealed in latent variables at distinct time steps and various U-net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z) - Unsupervised Discovery of Interpretable Directions in h-space of
Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - CamoDiffusion: Camouflaged Object Detection via Conditional Diffusion
Models [72.93652777646233]
Camouflaged Object Detection (COD) is a challenging task in computer vision due to the high similarity between camouflaged objects and their surroundings.
We propose a new paradigm that treats COD as a conditional mask-generation task leveraging diffusion models.
Our method, dubbed CamoDiffusion, employs the denoising process of diffusion models to iteratively reduce the noise of the mask.
arXiv Detail & Related papers (2023-05-29T07:49:44Z) - Explanation Method for Anomaly Detection on Mixed Numerical and
Categorical Spaces [0.9543943371833464]
We present EADMNC (Explainable Anomaly Detection on Mixed Numerical and Categorical spaces).
It adds explainability to the predictions obtained with the original model.
We report experimental results on extensive real-world data, particularly in the domain of network intrusion detection.
arXiv Detail & Related papers (2022-09-09T08:20:13Z) - TDLS: A Top-Down Layer Searching Algorithm for Generating Counterfactual
Visual Explanation [4.4553061479339995]
We adapt counterfactual explanation to the fine-grained image classification problem.
We show that our TDLS algorithm can provide more flexible counterfactual visual explanations.
Finally, we discuss several applicable scenarios of counterfactual visual explanations.
arXiv Detail & Related papers (2021-08-08T15:27:14Z) - DISSECT: Disentangled Simultaneous Explanations via Concept Traversals [33.65478845353047]
DISSECT is a novel approach to explaining deep learning model inferences.
By training a generative model from a classifier's signal, DISSECT offers a way to discover a classifier's inherent "notion" of distinct concepts.
We show that DISSECT produces CTs that disentangle several concepts and are coupled to its reasoning due to joint training.
arXiv Detail & Related papers (2021-05-31T17:11:56Z) - On Generating Plausible Counterfactual and Semi-Factual Explanations for
Deep Learning [15.965337956587373]
PlausIble Exceptionality-based Contrastive Explanations (PIECE) modifies all exceptional features in a test image to be normal from the perspective of the counterfactual class.
Two controlled experiments compare PIECE to others in the literature, showing that PIECE not only generates the most plausible counterfactuals on several measures, but also the best semi-factuals.
arXiv Detail & Related papers (2020-09-10T14:48:12Z)