Sampling - Variational Auto Encoder - Ensemble: In the Quest of
Explainable Artificial Intelligence
- URL: http://arxiv.org/abs/2309.14385v1
- Date: Mon, 25 Sep 2023 02:46:19 GMT
- Title: Sampling - Variational Auto Encoder - Ensemble: In the Quest of
Explainable Artificial Intelligence
- Authors: Sarit Maitra, Vivek Mishra, Pratima Verma, Manav Chopra, Priyanka Nath
- Abstract summary: This paper contributes to the discourse on XAI by presenting an empirical evaluation based on a novel framework.
It is a hybrid architecture in which a VAE is combined with ensemble stacking and SHapley Additive exPlanations (SHAP) for imbalanced classification.
The findings reveal that combining ensemble stacking, VAE, and SHAP not only leads to better model performance but also provides an easily explainable framework.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) models have recently attracted a
great deal of interest from a variety of application sectors. Despite
significant developments in this area, there are still no standardized methods
or approaches for understanding AI model outputs. A systematic and cohesive
framework is also increasingly necessary to incorporate new techniques like
discriminative and generative models to close the gap. This paper contributes
to the discourse on XAI by presenting an empirical evaluation based on a novel
framework: Sampling - Variational Auto Encoder (VAE) - Ensemble Anomaly
Detection (SVEAD). It is a hybrid architecture in which a VAE is combined with
ensemble stacking and SHapley Additive exPlanations (SHAP) for imbalanced
classification. The findings reveal that combining ensemble stacking, VAE, and
SHAP not only leads to better model performance but also provides an easily
explainable framework. This work combines SHAP with Permutation Importance and
Individual Conditional Expectations to make the model strongly interpretable.
The findings have an important implication for the real world, where the need
for XAI is paramount to boosting confidence in AI applications.
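To make the described pipeline concrete, below is a minimal, illustrative sketch of an SVEAD-style workflow: a small VAE generates synthetic minority-class samples, a stacked ensemble is trained on the rebalanced data, and SHAP, Permutation Importance, and ICE plots are used for interpretation. The architecture sizes, base learners, and helper names (fit_vae, oversample) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of an SVEAD-style pipeline (assumption: the paper's exact VAE
# architecture, base learners, and sampling strategy are not reproduced here).
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance, PartialDependenceDisplay
import shap

class VAE(nn.Module):
    """Small fully connected VAE used to generate synthetic minority samples."""
    def __init__(self, d_in, d_lat=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, d_lat), nn.Linear(64, d_lat)
        self.dec = nn.Sequential(nn.Linear(d_lat, 64), nn.ReLU(), nn.Linear(64, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def fit_vae(x_minority, epochs=200):
    vae = VAE(x_minority.shape[1])
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    x = torch.tensor(x_minority, dtype=torch.float32)
    for _ in range(epochs):
        recon, mu, logvar = vae(x)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(recon, x) + kl
        opt.zero_grad(); loss.backward(); opt.step()
    return vae

def oversample(vae, n, d_lat=8):
    with torch.no_grad():
        return vae.dec(torch.randn(n, d_lat)).numpy()

# X, y: an imbalanced tabular dataset (not included here)
# vae = fit_vae(X[y == 1]); X_syn = oversample(vae, n=needed)
# X_bal = np.vstack([X, X_syn]); y_bal = np.concatenate([y, np.ones(len(X_syn))])

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
# stack.fit(X_bal, y_bal)

# Interpretability: SHAP values, Permutation Importance, and ICE curves.
# explainer = shap.Explainer(stack.predict_proba, X_bal)
# shap_values = explainer(X_test)
# pi = permutation_importance(stack, X_test, y_test, n_repeats=10)
# PartialDependenceDisplay.from_estimator(stack, X_test, features=[0], kind="individual")
```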
Related papers
- XAI-based Feature Ensemble for Enhanced Anomaly Detection in Autonomous Driving Systems [1.3022753212679383]
This paper proposes a novel feature ensemble framework that integrates multiple Explainable AI (XAI) methods.
By fusing top features identified by these XAI methods across six diverse AI models, the framework creates a robust and comprehensive set of features critical for detecting anomalies.
Our technique demonstrates improved accuracy, robustness, and transparency of AI models, contributing to safer and more trustworthy autonomous driving systems.
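As a rough illustration of the fusion idea above (not the paper's actual rule), the sketch below keeps features that appear in the top-k of several XAI rankings; the voting threshold and feature names are assumptions.

```python
# Hedged sketch of fusing top features from multiple XAI methods and models:
# a simple frequency vote over the top-k features of each (model, method) pair.
from collections import Counter

def fuse_top_features(rankings, k=10, min_votes=2):
    """rankings: list of per-(model, XAI-method) feature rankings,
    each a list of feature names ordered by importance."""
    votes = Counter(f for ranking in rankings for f in ranking[:k])
    return [f for f, v in votes.most_common() if v >= min_votes]

# Example: three rankings (e.g., SHAP on model A, permutation importance on
# model B, ...); features seen in at least two top-k lists are kept.
fused = fuse_top_features([
    ["f3", "f1", "f7", "f2"],
    ["f1", "f3", "f9"],
    ["f2", "f1", "f3"],
], k=10, min_votes=2)
print(fused)  # ['f3', 'f1', 'f2']
```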
arXiv Detail & Related papers (2024-10-20T14:34:48Z) - SegXAL: Explainable Active Learning for Semantic Segmentation in Driving Scene Scenarios [1.2172320168050466]
We propose a novel Explainable Active Learning (XAL)-based semantic segmentation model, "SegXAL".
SegXAL can (i) effectively utilize the unlabeled data, (ii) facilitate the "Human-in-the-loop" paradigm, and (iii) augment the model decisions in an interpretable way.
In particular, we investigate the application of the SegXAL model for semantic segmentation in driving scene scenarios.
arXiv Detail & Related papers (2024-08-08T14:19:11Z) - LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
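A minimal sketch of the parameter-sharing pattern described above, assuming a single frozen linear projection shared by all members and per-member low-rank updates; ranks, initialization, and placement are illustrative, not the paper's exact settings.

```python
# Hedged sketch of the LoRA-Ensemble idea: one frozen, shared projection plus
# per-member low-rank updates of rank r (illustrative placement and scaling).
import torch
import torch.nn as nn

class LoRAEnsembleLinear(nn.Module):
    def __init__(self, d_in, d_out, n_members=4, r=4):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)        # pre-trained, shared weights
        self.shared.weight.requires_grad_(False)
        self.shared.bias.requires_grad_(False)
        # Member-specific low-rank factors: delta_W_i = B_i @ A_i
        self.A = nn.Parameter(torch.randn(n_members, r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_members, d_out, r))

    def forward(self, x, member):
        delta = self.B[member] @ self.A[member]     # (d_out, d_in)
        return self.shared(x) + x @ delta.T

# Usage: run the same input through every member and average the predictions.
layer = LoRAEnsembleLinear(d_in=64, d_out=64, n_members=4)
x = torch.randn(8, 64)
outputs = torch.stack([layer(x, m) for m in range(4)]).mean(dim=0)
```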
arXiv Detail & Related papers (2024-05-23T11:10:32Z) - Symmetric Equilibrium Learning of VAEs [56.56929742714685]
We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa.
We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
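One way to picture the symmetric, sampling-based scheme sketched above (not the paper's exact losses): the decoder is updated on latents sampled from the encoder, and the encoder on data sampled from the decoder, so each update only requires samples from the other side. MSE stands in for the negative log-likelihood terms.

```python
# Hedged, illustrative sketch of one symmetric training step for an
# encoder-decoder pair; enc and dec are nn.Modules returning sample tensors.
import torch
import torch.nn.functional as F

def symmetric_step(enc, dec, x_real, opt_enc, opt_dec, d_lat):
    # Decoder step: latents sampled from the (frozen) encoder, targets are real data.
    with torch.no_grad():
        z_q = enc(x_real)                        # z ~ q(z|x), sampling only
    loss_dec = F.mse_loss(dec(z_q), x_real)      # stand-in for -log p(x|z)
    opt_dec.zero_grad(); loss_dec.backward(); opt_dec.step()

    # Encoder step: data sampled from the (frozen) decoder, targets are the latents.
    z_p = torch.randn(x_real.shape[0], d_lat)    # z ~ prior
    with torch.no_grad():
        x_gen = dec(z_p)                         # x ~ p(x|z), sampling only
    loss_enc = F.mse_loss(enc(x_gen), z_p)       # stand-in for -log q(z|x)
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
```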
arXiv Detail & Related papers (2023-07-19T10:27:34Z) - Optimizing Explanations by Network Canonization and Hyperparameter
Search [74.76732413972005]
Rule-based and modified backpropagation XAI approaches often face challenges when being applied to modern model architectures.
Model canonization is the process of re-structuring the model to disregard problematic components without changing the underlying function.
In this work, we propose canonizations for currently relevant model blocks applicable to popular deep neural network architectures.
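As a concrete example of what canonization can look like, the sketch below fuses a BatchNorm layer into the preceding convolution so the network computes the same function without the BatchNorm component; the paper's canonizations cover more model blocks and architectures than this single, illustrative step.

```python
# Hedged sketch of one common canonization step: fuse Conv2d -> BatchNorm2d
# into a single, functionally equivalent Conv2d (groups/dilation omitted).
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias
    return fused

# In eval mode the fused layer matches conv -> bn, so rule-based attribution
# methods no longer have to handle the BatchNorm component.
```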
arXiv Detail & Related papers (2022-11-30T17:17:55Z) - TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT
Security [0.0]
We propose a universal XAI model named Transparency Relying Upon Statistical Theory (TRUST).
We show how TRUST XAI provides explanations for new random samples with an average success rate of 98%.
In the end, we also show how TRUST is explained to the user.
arXiv Detail & Related papers (2022-05-02T21:44:27Z) - Interpretable pipelines with evolutionarily optimized modules for RL
tasks with visual inputs [5.254093731341154]
We propose end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms.
We test our approach in reinforcement learning environments from the Atari benchmark.
arXiv Detail & Related papers (2022-02-10T10:33:44Z) - Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a Hierarchical VAE designed for this approach that can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z) - A Generic and Model-Agnostic Exemplar Synthetization Framework for
Explainable AI [29.243901669124515]
We focus on explainable AI and propose a novel generic and model-agnostic framework for synthesizing input exemplars.
We use a generative model, which acts as a prior for generating data, and traverse its latent space using a novel evolutionary strategy.
Our framework is model-agnostic, in the sense that the machine learning model that we aim to explain is a black-box.
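A hedged sketch of the general recipe described above: evolve latent codes of a generative prior so that the decoded exemplars maximize the black-box model's confidence for a target class. The decode and classify callables, population size, and mutation scale are placeholders, not the paper's actual evolutionary strategy.

```python
# Illustrative (mu+lambda)-style evolution over a generator's latent space to
# synthesize an exemplar for a target class of a black-box classifier.
import numpy as np

def synthesize_exemplar(decode, classify, target, d_lat=64,
                        pop=64, elite=8, sigma=0.5, steps=200):
    z = np.random.randn(pop, d_lat)
    for _ in range(steps):
        scores = np.array([classify(decode(zi))[target] for zi in z])
        parents = z[np.argsort(scores)[-elite:]]             # keep the best codes
        children = parents[np.random.randint(elite, size=pop - elite)]
        children = children + sigma * np.random.randn(*children.shape)
        z = np.vstack([parents, children])                   # elitist replacement
    best = z[np.argmax([classify(decode(zi))[target] for zi in z])]
    return decode(best)

# decode(z) -> input sample; classify(x) -> class-probability vector (black box)
```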
arXiv Detail & Related papers (2020-06-06T15:46:48Z) - Improve Variational Autoencoder for Text Generation with Discrete Latent
Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
VAEs tend to ignore latent variables when paired with a strong auto-regressive decoder.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z) - AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses [97.50616524350123]
We build dialogue models that are dynamically aware of what utterances or tokens are dull without any feature-engineering.
The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch.
The second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level.
The third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal.
arXiv Detail & Related papers (2020-01-15T18:32:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.