Under the Hood of Neural Networks: Characterizing Learned
Representations by Functional Neuron Populations and Network Ablations
- URL: http://arxiv.org/abs/2004.01254v2
- Date: Mon, 11 May 2020 09:09:15 GMT
- Title: Under the Hood of Neural Networks: Characterizing Learned
Representations by Functional Neuron Populations and Network Ablations
- Authors: Richard Meyes, Constantin Waubert de Puiseau, Andres Posada-Moreno,
Tobias Meisen
- Abstract summary: We shed light on the roles of single neurons and groups of neurons within the network fulfilling a learned task.
We find that neither a neuron's magnitude nor its selectivity of activation, nor its impact on network performance, is a sufficient stand-alone indicator.
- Score: 0.3441021278275805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need for greater transparency of the decision-making processes
in artificial neural networks is steadily increasing, driven by their
application in safety-critical and ethically challenging domains such as
autonomous driving or medical diagnostics. We address today's lack of
transparency of neural networks and shed light on the roles of single neurons
and groups of neurons within the network in fulfilling a learned task. Inspired
by research in the field of neuroscience, we characterize the learned
representations by activation patterns and network ablations, revealing
functional neuron populations that a) act jointly in response to specific
stimuli or b) have a similar impact on the network's performance after being
ablated. We find that neither a neuron's magnitude nor its selectivity of
activation, nor its impact on network performance, is a sufficient stand-alone
indicator of its importance for the overall task.
We argue that such indicators are essential for future advances in transfer
learning and modern neuroscience.
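To make the approach concrete, here is a minimal illustrative sketch (not the authors' implementation): it groups the hidden neurons of a toy, untrained PyTorch MLP by the similarity of their activation patterns over random stand-in stimuli, then ablates one of the resulting neuron populations and measures the change in accuracy. The network sizes, the KMeans clustering step, and the random data are assumptions made purely for illustration.

```python
# Minimal sketch; assumes PyTorch and scikit-learn are available. The toy untrained
# MLP and random stimuli/labels are placeholders for a trained network and real data.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)

# Hypothetical network: 20-dim stimuli -> 64 hidden neurons -> 5 classes.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
stimuli = torch.randn(256, 20)           # stand-in stimulus set
labels = torch.randint(0, 5, (256,))     # stand-in ground truth

hidden = model[1]  # ReLU layer: its output holds the hidden-neuron activations
acts = {}

def record(module, inputs, output):
    acts["hidden"] = output.detach()

handle = hidden.register_forward_hook(record)
with torch.no_grad():
    baseline_acc = (model(stimuli).argmax(1) == labels).float().mean().item()
handle.remove()

# a) Functional populations: describe each neuron by its activation across all
#    stimuli and cluster neurons with similar response profiles.
neuron_profiles = acts["hidden"].T.numpy()            # (64 neurons, 256 stimuli)
populations = KMeans(n_clusters=4, n_init=10).fit_predict(neuron_profiles)

# b) Ablation: silence one population and measure the impact on performance.
ablate_ids = torch.tensor([i for i, p in enumerate(populations) if p == 0])

def ablate(module, inputs, output):
    output = output.clone()
    output[:, ablate_ids] = 0.0          # zero out the selected neuron population
    return output

handle = hidden.register_forward_hook(ablate)
with torch.no_grad():
    ablated_acc = (model(stimuli).argmax(1) == labels).float().mean().item()
handle.remove()

print(f"accuracy before ablation: {baseline_acc:.3f}, after: {ablated_acc:.3f}")
```

Repeating the ablation for each population (and for individual neurons) yields the kind of importance comparison the abstract refers to, where neither activation magnitude nor ablation impact alone identifies the task-relevant neurons.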
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs)
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Automated Natural Language Explanation of Deep Visual Neurons with Large Models [43.178568768100305]
This paper proposes a novel post-hoc framework for generating semantic explanations of neurons with large foundation models.
Our framework is designed to be compatible with various model architectures and datasets, enabling automated and scalable neuron interpretation.
arXiv Detail & Related papers (2023-10-16T17:04:51Z)
- Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
We prove that, in contrast to ReLU networks, spiking neural networks can realize both continuous and discontinuous functions.
arXiv Detail & Related papers (2023-08-16T08:45:53Z)
- Mitigating Communication Costs in Neural Networks: The Role of Dendritic Nonlinearity [28.243134476634125]
In this study, we scrutinized the importance of nonlinear dendrites within neural networks.
Our findings reveal that integrating dendritic structures can substantially enhance model capacity and performance.
arXiv Detail & Related papers (2023-06-21T00:28:20Z)
- Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors of fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- How Do You Act? An Empirical Study to Understand Behavior of Deep Reinforcement Learning Agents [2.3268634502937937]
The demand for more transparency of decision-making processes of deep reinforcement learning agents is greater than ever.
In this study, we characterize the learned representations of an agent's policy network through its activation space.
We show that the healthy agent's behavior is characterized by a distinct correlation pattern between the network's layer activation and the performed actions.
arXiv Detail & Related papers (2020-04-07T10:08:55Z)
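The last entry above, by the same authors, characterizes a policy network through the correlation between its layer activations and the agent's actions. A rough sketch of that kind of analysis is shown below; the random arrays stand in for rollout data from a real agent, and the per-neuron, per-action Pearson correlation is an illustrative choice rather than the paper's exact metric.

```python
# Rough sketch; random arrays replace activations and actions collected from a
# policy network during rollouts. Sizes and the correlation measure are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_neurons, n_actions = 1000, 32, 4
activations = rng.random((n_steps, n_neurons))       # per-step layer activations
actions = rng.integers(0, n_actions, size=n_steps)   # per-step chosen actions

# Correlate each neuron's activation with the indicator of each action, giving a
# neuron-by-action pattern that can be compared between healthy and impaired agents.
one_hot = np.eye(n_actions)[actions]                 # (n_steps, n_actions)
pattern = np.empty((n_neurons, n_actions))
for j in range(n_neurons):
    for a in range(n_actions):
        pattern[j, a] = np.corrcoef(activations[:, j], one_hot[:, a])[0, 1]

print("activation-action correlation pattern shape:", pattern.shape)
```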
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.