On Interpretability of Artificial Neural Networks: A Survey
- URL: http://arxiv.org/abs/2001.02522v4
- Date: Mon, 27 Sep 2021 19:44:24 GMT
- Title: On Interpretability of Artificial Neural Networks: A Survey
- Authors: Fenglei Fan, Jinjun Xiong, Mengzhou Li, and Ge Wang
- Abstract summary: We systematically review recent studies on the mechanisms of neural networks, describe applications of interpretability, especially in medicine, and discuss future directions of interpretability research, such as its relation to fuzzy logic and brain science.
- Score: 21.905647127437685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning, as represented by deep artificial neural networks (DNNs),
has achieved great success in many important areas that deal with text, images,
videos, graphs, and so on. However, the black-box nature of DNNs has become one
of the primary obstacles for their wide acceptance in mission-critical
applications such as medical diagnosis and therapy. Due to the huge potential
of deep learning, interpreting neural networks has recently attracted much
research attention. In this paper, based on our comprehensive taxonomy, we
systematically review recent studies in understanding the mechanism of neural
networks, describe applications of interpretability especially in medicine, and
discuss future directions of interpretability research, such as in relation to
fuzzy logic and brain science.
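As a concrete illustration of the post-hoc interpretability methods this survey covers, the sketch below computes a gradient saliency map for a tiny two-layer network. The network, its random weights, and the saliency formulation are generic illustrative assumptions, not the survey's own method.

```python
import numpy as np

# A minimal sketch of gradient-based saliency, one common post-hoc
# interpretability technique. The two-layer tanh network below is
# illustrative only.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)        # hidden activations
    return h, (h @ W2).item()  # scalar output

def saliency(x):
    """Gradient of the output w.r.t. the input: large |grad| marks
    input features the prediction is most sensitive to."""
    h, _ = forward(x)
    dh = (1.0 - h ** 2) * W2[:, 0]  # backprop through tanh
    return dh @ W1.T                # chain rule back to the input

x = rng.normal(size=4)
grad = saliency(x)
print(np.argsort(-np.abs(grad)))  # features ranked by influence
```

Ranking input features by gradient magnitude is the simplest member of the saliency family; many of the attribution methods surveyed refine this basic idea.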
Related papers
- Explaining Deep Neural Networks by Leveraging Intrinsic Methods [0.9790236766474201]
This thesis contributes to the field of eXplainable AI, focusing on enhancing the interpretability of deep neural networks.
The core contributions lie in introducing novel techniques aimed at making these networks more interpretable by leveraging an analysis of their inner workings.
The research also presents novel investigations of neurons within trained deep neural networks, shedding light on overlooked phenomena related to their activation values.
arXiv Detail & Related papers (2024-07-17T01:20:17Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
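The credit-assignment schemes surveyed in that entry replace global error backpropagation with local updates. As one generic example of such a local rule (not any specific scheme from the review), the sketch below applies Oja's Hebbian rule, where each synapse changes using only its own pre- and post-synaptic activity.

```python
import numpy as np

# A minimal sketch of local, biologically-motivated credit assignment:
# Oja's rule, a Hebbian update with a decay term that keeps the weights
# bounded. The layer size and learning rate are illustrative.

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(5, 3))  # one layer of synapses
lr = 0.01

def hebbian_step(x, W):
    """Oja's rule: dW_ij = lr * (x_i * y_j - y_j^2 * W_ij),
    using only pre-synaptic x and post-synaptic y = x @ W."""
    y = x @ W                               # post-synaptic activity
    return W + lr * (np.outer(x, y) - (y ** 2) * W)

for _ in range(100):
    x = rng.normal(size=5)
    W = hebbian_step(x, W)
```

Unlike backpropagation, no error signal travels backward through the network; this locality is the common thread across the six families the review organizes.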
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Automated Natural Language Explanation of Deep Visual Neurons with Large
Models [43.178568768100305]
This paper proposes a novel post-hoc framework for generating semantic explanations of neurons with large foundation models.
Our framework is designed to be compatible with various model architectures and datasets, enabling automated and scalable neuron interpretation.
arXiv Detail & Related papers (2023-10-16T17:04:51Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
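The temporal and neuronal sparsity mentioned above comes from the spiking dynamics themselves. The sketch below implements a leaky integrate-and-fire (LIF) neuron, the basic unit behind most spiking approaches; the time constant and threshold are illustrative defaults, not the paper's settings.

```python
# A minimal sketch of a leaky integrate-and-fire (LIF) neuron.
# The membrane potential leaks toward zero, integrates input current,
# and emits a spike (then resets) on crossing the threshold.

def lif_spikes(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """Return a binary spike train for a sequence of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v += dt / tau * (-v + i)     # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = 0.0                  # reset after spiking
        else:
            spikes.append(0)
    return spikes

train = lif_spikes([1.5] * 100)      # constant drive
print(sum(train))                    # only a few spikes in 100 steps
```

Because the neuron communicates only at spike times, most time steps carry no activity, which is the sparsity that neuromorphic hardware exploits.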
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Neuro-Symbolic Learning: Principles and Applications in Ophthalmology [20.693460748187906]
Neuro-symbolic learning (NeSyL) incorporates aspects of symbolic representation to bring common sense into neural networks.
NeSyL has shown promising outcomes in domains where interpretability, reasoning, and explainability are crucial, such as video and image captioning, question-answering and reasoning, health informatics, and genomics.
This review presents a comprehensive survey on the state-of-the-art NeSyL approaches, their principles, advances in machine and deep learning algorithms, applications such as ophthalmology, and, most importantly, future perspectives of this emerging field.
arXiv Detail & Related papers (2022-07-31T06:48:19Z) - Interpretability of Neural Network With Physiological Mechanisms [5.1971653175509145]
Deep learning remains a powerful state-of-the-art technique that has achieved extraordinary accuracy in various regression and classification tasks.
The original goal of the neural network model was to improve understanding of the complex human brain through mathematical expression.
Recent deep learning techniques, however, have largely lost this interpretability, being treated mostly as black-box approximators.
arXiv Detail & Related papers (2022-03-24T21:40:04Z) - Deep Reinforcement Learning Guided Graph Neural Networks for Brain
Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z) - Mathematical Models of Overparameterized Neural Networks [25.329225766892126]
We will focus on the analysis of two-layer neural networks, and explain the key mathematical models.
We will then discuss challenges in understanding deep neural networks and some current research directions.
arXiv Detail & Related papers (2020-12-27T17:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.