Coalitional Bayesian Autoencoders -- Towards explainable unsupervised
deep learning
- URL: http://arxiv.org/abs/2110.10038v1
- Date: Tue, 19 Oct 2021 15:07:09 GMT
- Authors: Bang Xiang Yong and Alexandra Brintrup
- Abstract summary: We show that explanations of BAE's predictions suffer from high correlation resulting in misleading explanations.
To alleviate this, a "Coalitional BAE" is proposed, which is inspired by agent-based system theory.
Our experiments on publicly available condition monitoring datasets demonstrate the improved quality of explanations using the Coalitional BAE.
- Score: 78.60415450507706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper aims to improve the explainability of the Autoencoder's (AE)
predictions by proposing two explanation methods based on the mean and
epistemic uncertainty of the log-likelihood estimate, which naturally arise from
the probabilistic formulation of the AE called the Bayesian Autoencoder (BAE). To
quantitatively evaluate the performance of the explanation methods, we test them in
sensor network applications and propose three metrics based on covariate shift
of sensors: (1) G-mean of Spearman drift coefficients, (2) G-mean of
sensitivity-specificity of explanation ranking, and (3) a sensor explanation
quality index (SEQI), which combines the two aforementioned metrics.
Surprisingly, we find that explanations of BAE's predictions suffer from high
correlation resulting in misleading explanations. To alleviate this, a
"Coalitional BAE" is proposed, which is inspired by agent-based system theory.
Our comprehensive experiments on publicly available condition monitoring
datasets demonstrate the improved quality of explanations using the Coalitional
BAE.
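As a rough illustration of metric (1), the sketch below gives one possible reading of the abstract: per-sensor explanations are taken as the mean and variance (epistemic uncertainty) of the negative log-likelihood across posterior samples of a BAE ensemble, and the G-mean metric is computed as the geometric mean of absolute Spearman correlations between each sensor's explanation and a known drift magnitude. This is a hypothetical sketch, not the authors' code; all function names and the exact metric definition are assumptions.

```python
import numpy as np

def _ranks(x):
    # Rank transform (ties broken arbitrarily, fine for continuous data).
    order = np.argsort(x)
    ranks = np.empty(len(x), dtype=float)
    ranks[order] = np.arange(len(x))
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank-transformed data.
    rx, ry = _ranks(x), _ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def bae_explanations(nll_samples):
    """nll_samples: (n_ensemble, n_points, n_sensors) per-sensor negative
    log-likelihoods from posterior samples of a hypothetical BAE ensemble.
    Returns the two proposed explanations: the mean NLL and its variance
    across samples (a proxy for epistemic uncertainty)."""
    return nll_samples.mean(axis=0), nll_samples.var(axis=0)

def gmean_spearman_drift(explanations, drift):
    """Geometric mean of |Spearman rho| between each sensor's explanation
    and the drift magnitude -- one plausible reading of metric (1)."""
    rhos = [abs(spearman(explanations[:, s], drift))
            for s in range(explanations.shape[1])]
    # Geometric mean via log-space averaging; clamp to avoid log(0).
    return float(np.exp(np.mean(np.log(np.maximum(rhos, 1e-12)))))
```

Under this reading, a G-mean near 1 means every drifting sensor's explanation rises monotonically with the injected drift, i.e. the explanation correctly attributes the covariate shift.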
Related papers
- Uncertainty Quantification for Gradient-based Explanations in Neural Networks [6.9060054915724]
We propose a pipeline to ascertain the explanation uncertainty of neural networks.
We use this pipeline to produce explanation distributions for the CIFAR-10, FER+, and California Housing datasets.
We compute modified pixel insertion/deletion metrics to evaluate the quality of the generated explanations.
arXiv Detail & Related papers (2024-03-25T21:56:02Z) - Anchoring Path for Inductive Relation Prediction in Knowledge Graphs [69.81600732388182]
APST takes both APs and CPs as the inputs of a unified Sentence Transformer architecture.
We evaluate APST on three public datasets and achieve state-of-the-art (SOTA) performance in 30 of 36 transductive, inductive, and few-shot experimental settings.
arXiv Detail & Related papers (2023-12-21T06:02:25Z) - RDR: the Recap, Deliberate, and Respond Method for Enhanced Language
Understanding [6.738409533239947]
The Recap, Deliberate, and Respond (RDR) paradigm addresses this issue by incorporating three distinct objectives within the neural network pipeline.
By cascading these three models, we mitigate the potential for gaming the benchmark and establish a robust method for capturing the underlying semantic patterns.
Our results demonstrate improved performance compared to competitive baselines, with an enhancement of up to 2% on standard metrics.
arXiv Detail & Related papers (2023-12-15T16:41:48Z) - Do Bayesian Variational Autoencoders Know What They Don't Know? [0.6091702876917279]
The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks.
It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable.
This paper investigates three approaches to inference: Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian (SWAG).
arXiv Detail & Related papers (2022-12-29T11:48:01Z) - Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be
Effective for Detecting Unknown Spurious Correlations [4.223964614888875]
Post-hoc explainers might be ineffective for detecting spurious correlations in Deep Neural Networks (DNNs)
We show there are serious weaknesses with the existing evaluation frameworks for this setting.
We propose a new evaluation methodology, Explainer Divergence Scores (EDS), grounded in an information theory approach to evaluate explainers.
arXiv Detail & Related papers (2022-11-14T15:52:21Z) - The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans--those that are logically consistent with the input--usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z) - ESAD: End-to-end Deep Semi-supervised Anomaly Detection [85.81138474858197]
We propose a new objective function that measures the KL-divergence between normal and anomalous data.
The proposed method significantly outperforms several state-of-the-art methods on multiple benchmark datasets.
arXiv Detail & Related papers (2020-12-09T08:16:35Z) - Toward Scalable and Unified Example-based Explanation and Outlier
Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks, going beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z) - Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and
Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.