DeepCSHAP: Utilizing Shapley Values to Explain Deep Complex-Valued
Neural Networks
- URL: http://arxiv.org/abs/2403.08428v1
- Date: Wed, 13 Mar 2024 11:26:43 GMT
- Title: DeepCSHAP: Utilizing Shapley Values to Explain Deep Complex-Valued
Neural Networks
- Authors: Florian Eilers and Xiaoyi Jiang
- Abstract summary: Deep Neural Networks are widely used in academia as well as in corporate and public applications.
The ability to explain their output is critical for safety reasons as well as for acceptance among applicants.
We present four gradient-based explanation methods suitable for use in complex-valued neural networks.
- Score: 7.4841568561701095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks are widely used in academia as well as corporate and
public applications, including safety critical applications such as health care
and autonomous driving. The ability to explain their output is critical for
safety reasons as well as acceptance among applicants. A multitude of methods
have been proposed to explain real-valued neural networks. Recently,
complex-valued neural networks have emerged as a new class of neural networks
dealing with complex-valued input data without the necessity of projecting them
onto $\mathbb{R}^2$. This brings up the need to develop explanation algorithms
for this kind of neural networks. In this paper we provide these developments.
While we focus on adapting the widely used DeepSHAP algorithm to the complex
domain, we also present versions of four gradient-based explanation methods
suitable for use in complex-valued neural networks. We evaluate the explanation
quality of all presented algorithms and provide all of them as an open source
library adaptable to most recent complex-valued neural network architectures.
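As a toy illustration of the kind of gradient-based attribution the abstract refers to (our own minimal sketch, not the paper's DeepCSHAP implementation or API): for a real-valued output of a complex input, the Wirtinger derivative plays the role of the ordinary gradient, and a gradient-times-input relevance can be computed directly on the complex features without projecting them onto $\mathbb{R}^2$.

```python
import numpy as np

# Hypothetical toy model f(z) = |w @ z|^2 with complex weights w and input z.
# Its Wirtinger derivative is df/dz_k = w_k * conj(w @ z).

def wirtinger_grad(w, z):
    """Wirtinger derivative df/dz of f(z) = |w @ z|^2."""
    s = w @ z
    return w * np.conj(s)

def grad_times_input(w, z):
    """Per-feature relevance: real part of z_k * (df/dz_k)."""
    return np.real(z * wirtinger_grad(w, z))

w = np.array([1.0 + 1.0j, 2.0 - 1.0j])
z = np.array([0.5 + 0.5j, -1.0 + 0.0j])
attr = grad_times_input(w, z)

# For this quadratic model the attributions sum exactly to the output |w @ z|^2,
# a completeness-style property that Shapley-value methods aim for in general.
print(attr, attr.sum(), abs(w @ z) ** 2)
```

For deeper complex-valued networks this exactness no longer holds for plain gradient×input, which is one motivation for adapting DeepSHAP-style backpropagation rules to the complex domain.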
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- When Deep Learning Meets Polyhedral Theory: A Survey [6.899761345257773]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks.
Meanwhile, the structure of neural networks converged back to simpler piecewise and linear functions.
arXiv Detail & Related papers (2023-04-29T11:46:53Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- How and what to learn: The modes of machine learning [7.085027463060304]
We propose a new approach, namely the weight pathway analysis (WPA), to study the mechanism of multilayer neural networks.
WPA shows that a neural network stores and utilizes information in a "holographic" way, that is, the network encodes all training samples in a coherent structure.
It is found that hidden-layer neurons self-organize into different classes in the later stages of the learning process.
arXiv Detail & Related papers (2022-02-28T14:39:06Z)
- Information Flow in Deep Neural Networks [0.6922389632860545]
There is no comprehensive theoretical understanding of how deep neural networks work or are structured.
Deep networks are often seen as black boxes with unclear interpretations and reliability.
This work aims to apply principles and techniques from information theory to deep learning models to increase our theoretical understanding and design better algorithms.
arXiv Detail & Related papers (2022-02-10T23:32:26Z)
- Neural Network Quantization for Efficient Inference: A Survey [0.0]
Neural network quantization has recently arisen to meet the demand for reducing the size and complexity of neural networks.
This paper surveys the many neural network quantization techniques that have been developed in the last decade.
arXiv Detail & Related papers (2021-12-08T22:49:39Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural
Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- Neural Rule Ensembles: Encoding Sparse Feature Interactions into Neural
Networks [3.7277730514654555]
We use decision trees to capture relevant features and their interactions and define a mapping to encode extracted relationships into a neural network.
At the same time, through feature selection it enables learning of compact representations compared to state-of-the-art tree-based approaches.
arXiv Detail & Related papers (2020-02-11T11:22:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.