Explainable Deep Belief Network based Auto encoder using novel Extended
Garson Algorithm
- URL: http://arxiv.org/abs/2207.08501v1
- Date: Mon, 18 Jul 2022 10:44:02 GMT
- Title: Explainable Deep Belief Network based Auto encoder using novel Extended
Garson Algorithm
- Authors: Satyam Kumar and Vadlamani Ravi
- Abstract summary: We develop an algorithm to explain a Deep Belief Network based Auto-encoder (DBNA).
It is used to determine the contribution of each input feature in the DBN.
Important features identified by this method are compared against those obtained by the Wald chi-square (χ²) test.
- Score: 6.228766191647919
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The most difficult task in machine learning is to interpret trained
shallow neural networks. Deep neural networks (DNNs) provide impressive results
on a large number of tasks, but it is generally still unclear how decisions are
made by such a trained deep neural network. Providing feature importance is the
most important and popular interpretation technique used in shallow and deep
neural networks. In this paper, we develop an algorithm that extends the idea of
the Garson Algorithm to explain a Deep Belief Network based Auto-encoder (DBNA).
It is used to determine the contribution of each input feature in the DBN, and
it can be used for any kind of neural network with many hidden layers. The
effectiveness of this method is tested on both classification and regression
datasets taken from the literature. Important features identified by this method
are compared against those obtained by the Wald chi-square (χ²) test. For 2 out
of 4 classification datasets and 2 out of 5 regression datasets, our proposed
methodology identified better-quality features, leading to statistically more
significant results vis-à-vis the Wald χ² test.
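The proposed method extends Garson's weight-based importance, originally defined for a single hidden layer, to a network with many hidden layers of the DBN auto-encoder. Below is a minimal sketch of one plausible multi-layer extension of that idea; the function name, the toy layer sizes, and the per-column normalization are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def extended_garson_importance(weights):
    """Garson-style feature importance propagated through many hidden layers.

    weights: list of 2-D arrays; weights[l] has shape (units_l, units_{l+1}),
             i.e. input-to-hidden-1, hidden-1-to-hidden-2, ..., last-to-output.
    Returns one relative importance score per input feature (sums to 1).
    """
    # Classic Garson step: normalise the absolute input-to-first-hidden
    # weights per hidden unit, so each column says how much of that unit
    # is "owned" by every input feature.
    contrib = np.abs(weights[0])
    contrib /= contrib.sum(axis=0, keepdims=True)

    # Extension: chain the same normalisation through every later layer,
    # propagating the ownership shares all the way to the output layer.
    for W in weights[1:]:
        A = np.abs(W)
        A /= A.sum(axis=0, keepdims=True)
        contrib = contrib @ A

    # Aggregate over output units and renormalise to one score per input.
    importance = contrib.sum(axis=1)
    return importance / importance.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for a trained DBN auto-encoder: 8 inputs, two hidden
    # layers, and a reconstruction layer back to 8 outputs.
    toy_weights = [rng.normal(size=(8, 5)),
                   rng.normal(size=(5, 3)),
                   rng.normal(size=(3, 8))]
    print(extended_garson_importance(toy_weights))
```

A feature ranking produced this way can then be compared, feature by feature, against the ranking given by the Wald χ² test, which is how the paper evaluates the quality of the selected features.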
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than a 50% reduction in memory and a 55% reduction in FLOPs.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - A Hierarchical Fused Quantum Fuzzy Neural Network for Image Classification [8.7057403071943]
We propose a novel hierarchical fused quantum fuzzy neural network (HQFNN).
HQFNN uses quantum neural networks to learn the fuzzy membership functions of the fuzzy neural network.
Results show that the proposed model can outperform several existing methods.
arXiv Detail & Related papers (2024-03-14T12:09:36Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic
Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - Wide and Deep Neural Networks Achieve Optimality for Classification [23.738242876364865]
We identify and construct an explicit set of neural network classifiers that achieve optimality.
In particular, we provide explicit activation functions that can be used to construct networks that achieve optimality.
Our results highlight the benefit of using deep networks for classification tasks, in contrast to regression tasks, where excessive depth is harmful.
arXiv Detail & Related papers (2022-04-29T14:27:42Z) - Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z) - End-to-End Learning of Deep Kernel Acquisition Functions for Bayesian
Optimization [39.56814839510978]
We propose a meta-learning method for Bayesian optimization with neural network-based kernels.
Our model is trained within a reinforcement learning framework across multiple tasks.
In experiments using three text document datasets, we demonstrate that the proposed method achieves better BO performance than the existing methods.
arXiv Detail & Related papers (2021-11-01T00:42:31Z) - Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z) - The Yin-Yang dataset [0.0]
The Yin-Yang dataset was developed for research on biologically plausible error backpropagation and deep learning in spiking neural networks.
It serves as an alternative to classic deep learning datasets by providing several advantages.
arXiv Detail & Related papers (2021-02-16T15:18:05Z) - Theoretical Analysis of the Advantage of Deepening Neural Networks [0.0]
It is important to know the expressivity of the functions computable by deep neural networks.
Using two criteria, we show that increasing the number of layers is more effective than increasing the number of units per layer at improving the expressivity of deep neural networks.
arXiv Detail & Related papers (2020-09-24T04:10:50Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, categorized into native solutions that directly conduct binarization and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error (a minimal binarization sketch follows this list).
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
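To make the optimization difficulty mentioned in the binary-network entry concrete: the sign function used for binarization has zero gradient almost everywhere, so binary networks are typically trained with a surrogate gradient. The sketch below shows the generic sign plus straight-through-estimator pattern in PyTorch; it is a standard illustration, not code from the survey, and the class name and clipping threshold are arbitrary choices.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Forward: values collapse to {-1, 0, +1}; torch.sign maps 0 to 0,
        # which real implementations usually round up to +1.
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # sign() has zero gradient almost everywhere, so training would stall.
        # The straight-through estimator passes the gradient unchanged inside
        # the clipping range |x| <= 1 and blocks it outside.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Usage: binarize a full-precision "latent" weight tensor in the forward pass
# while letting the optimizer keep updating the full-precision copy.
w = torch.randn(4, 4, requires_grad=True)
w_bin = BinarizeSTE.apply(w)
loss = (w_bin.sum() - 1.0) ** 2
loss.backward()
print(w.grad)
```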