Learning Neural Activations
- URL: http://arxiv.org/abs/1912.12187v1
- Date: Fri, 27 Dec 2019 15:52:07 GMT
- Title: Learning Neural Activations
- Authors: Fayyaz ul Amir Afsar Minhas and Amina Asif
- Abstract summary: We explore what happens when the activation function of each neuron in an artificial neural network is learned from data alone.
This is achieved by modelling the activation function of each neuron as a small neural network whose weights are shared by all neurons in the original network.
- Score: 2.842794675894731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An artificial neuron is modelled as a weighted summation followed by an
activation function which determines its output. A wide variety of activation
functions such as rectified linear units (ReLU), leaky-ReLU, Swish, Mish, etc.
have been explored in the literature. In this short paper, we explore what
happens when the activation function of each neuron in an artificial neural
network is learned natively from data alone. This is achieved by modelling the
activation function of each neuron as a small neural network whose weights are
shared by all neurons in the original network. We list our primary findings in
the conclusions section. The code for our analysis is available at:
https://github.com/amina01/Learning-Neural-Activations.
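As a concrete illustration of the idea described in the abstract, here is a minimal sketch, assuming PyTorch, of an activation function modelled as a small sub-network whose weights are shared by every neuron it is applied to. The layer sizes, the inner tanh non-linearity, and the host network are illustrative assumptions, not the authors' exact implementation (see the repository linked above for that).

```python
# Minimal sketch (not the authors' code) of a learned activation function:
# each scalar pre-activation is passed through a tiny neural network whose
# parameters are shared across all neurons in the layer.
import torch
import torch.nn as nn

class LearnedActivation(nn.Module):
    """Scalar activation g: R -> R realised as a small shared sub-network."""
    def __init__(self, hidden: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.Tanh(),              # assumed inner non-linearity; the paper may use another
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shape = x.shape
        # Apply the shared sub-network element-wise to every pre-activation.
        return self.net(x.reshape(-1, 1)).reshape(shape)

# Host network whose activation shape is learned from data along with its weights.
model = nn.Sequential(
    nn.Linear(784, 128),
    LearnedActivation(),
    nn.Linear(128, 10),
)

x = torch.randn(32, 784)
print(model(x).shape)  # torch.Size([32, 10])
```

Because the sub-network acts element-wise on scalars and its parameters are trained jointly with the host network, the shape of the activation function itself is learned from data rather than fixed in advance.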
Related papers
- Interpreting the Second-Order Effects of Neurons in CLIP [73.54377859089801]
We interpret the function of individual neurons in CLIP by automatically describing them using text.
We present the "second-order lens", analyzing the effect flowing from a neuron through the later attention heads, directly to the output.
Our results indicate that a scalable understanding of neurons can be used for model deception and for introducing new model capabilities.
arXiv Detail & Related papers (2024-06-06T17:59:52Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- A Hybrid Training Algorithm for Continuum Deep Learning Neuro-Skin Neural Network [0.0]
The Deep Learning Neuro-Skin Neural Network is a new type of neural network recently presented by the authors.
A neuroskin is modelled using finite elements. Each element of the finite element mesh represents a cell.
It is shown that while the neuroskin initially cannot produce the desired response, it gradually improves to the desired level.
arXiv Detail & Related papers (2023-02-03T15:54:06Z)
- A survey on recently proposed activation functions for Deep Learning [0.0]
This survey discusses the main concepts of activation functions in neural networks.
It includes a brief introduction to deep neural networks, a summary of what activation functions are and how they are used in neural networks, their most common properties, the different types of activation functions, and some of the challenges, limitations, and alternative solutions associated with them.
arXiv Detail & Related papers (2022-04-06T16:21:52Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors of fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Activation Functions in Artificial Neural Networks: A Systematic Overview [0.3553493344868413]
Activation functions shape the outputs of artificial neurons.
This paper provides an analytic yet up-to-date overview of popular activation functions and their properties.
arXiv Detail & Related papers (2021-01-25T08:55:26Z)
- Training of Deep Learning Neuro-Skin Neural Network [0.0]
The Deep Learning Neuro-Skin Neural Network is a new type of neural network recently presented by the authors.
A neuroskin is modelled using finite elements. Each element of the finite element mesh represents a cell.
It is shown that while the neuroskin initially cannot produce the desired response, it gradually improves to the desired level.
arXiv Detail & Related papers (2020-07-03T18:51:45Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.