Biologically Inspired Oscillating Activation Functions Can Bridge the
Performance Gap between Biological and Artificial Neurons
- URL: http://arxiv.org/abs/2111.04020v4
- Date: Wed, 10 May 2023 04:38:25 GMT
- Title: Biologically Inspired Oscillating Activation Functions Can Bridge the
Performance Gap between Biological and Artificial Neurons
- Authors: Matthew Mithra Noel, Shubham Bharadwaj, Venkataraman
Muthiah-Nakarajan, Praneet Dutta, Geraldine Bessie Amali
- Abstract summary: This paper proposes four new oscillating activation functions inspired by human pyramidal neurons.
Oscillating activation functions are non-saturating for all inputs unlike popular activation functions.
Using oscillating activation functions instead of popular monotonic or non-monotonic single-zero activation functions enables neural networks to train faster and solve classification problems with fewer layers.
- Score: 2.362412515574206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent discovery of special human neocortical pyramidal neurons that can
individually learn the XOR function highlights the significant performance gap
between biological and artificial neurons. The output of these pyramidal
neurons first increases to a maximum with input and then decreases. Artificial
neurons with similar characteristics can be designed with oscillating
activation functions. Oscillating activation functions have multiple zeros
allowing single neurons to have multiple hyper-planes in their decision
boundary. This enables even single neurons to learn the XOR function. This
paper proposes four new oscillating activation functions inspired by human
pyramidal neurons that can also individually learn the XOR function.
Oscillating activation functions are non-saturating for all inputs unlike
popular activation functions, leading to improved gradient flow and faster
convergence. Using oscillating activation functions instead of popular
monotonic or non-monotonic single-zero activation functions enables neural
networks to train faster and solve classification problems with fewer layers.
An extensive comparison of 23 activation functions on CIFAR 10, CIFAR 100, and
Imagenette benchmarks is presented, and the oscillating activation functions
proposed in this paper are shown to outperform all known popular activation
functions.
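To make the multiple-hyper-plane claim concrete, here is a minimal, self-contained sketch (not the authors' code) of a single neuron reproducing XOR with an oscillating activation. It uses the Growing Cosine Unit C(z) = z cos(z) from the related GCU paper listed below; the weights w = (2, 2) and bias b = -1 are hand-picked for illustration and are not taken from the paper. Because C(z) crosses zero at z = 0 and z = pi/2, this one neuron effectively has two parallel hyper-planes in input space, which is exactly what XOR requires.

```python
import numpy as np

def gcu(z):
    """Growing Cosine Unit C(z) = z * cos(z): oscillating, with multiple zeros."""
    return z * np.cos(z)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
xor_labels = np.array([0, 1, 1, 0])

w = np.array([2.0, 2.0])    # hand-picked illustrative weights
b = -1.0

z = X @ w + b               # pre-activations: [-1.  1.  1.  3.]
out = gcu(z)                # approx [-0.54  0.54  0.54 -2.97]
pred = (out > 0).astype(int)

print(pred)                                  # [0 1 1 0]
assert np.array_equal(pred, xor_labels)      # a single neuron reproduces XOR
```

The same check cannot succeed for any single neuron with a monotonic, single-zero activation such as ReLU or sigmoid, since one hyper-plane cannot separate the XOR classes.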
Related papers
- A Significantly Better Class of Activation Functions Than ReLU Like Activation Functions [0.8287206589886881]
This paper introduces a significantly better class of activation functions than the almost universally used ReLU like and Sigmoidal class of activation functions.
Two new activation functions, referred to as the Cone and the Parabolic-Cone, differ drastically from popular activation functions.
The results presented in this paper indicate that many nonlinear real-world datasets may be separated with fewer hyper-strips than half-spaces (see the sketch below).
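As a rough illustration of the hyper-strip idea, the sketch below uses cone-shaped activations that are positive only on a bounded interval of pre-activation values, so a single neuron fires inside a region bounded by two parallel hyper-planes rather than a half-space. The formulas used here are assumed shapes for illustration only; the exact Cone and Parabolic-Cone definitions should be taken from the cited paper.

```python
# Illustrative cone-shaped activations (assumed forms, not necessarily the
# paper's exact definitions): both are positive only on the strip 0 < z < 2.
import numpy as np

def cone(z):
    return 1.0 - np.abs(z - 1.0)    # peak 1 at z = 1, zeros at z = 0 and z = 2

def parabolic_cone(z):
    return z * (2.0 - z)            # smooth variant with the same two zeros

z = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
print(cone(z) > 0)             # [False False False  True False False False]
print(parabolic_cone(z) > 0)   # same pattern: positive only inside the strip
```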
arXiv Detail & Related papers (2024-05-07T16:24:03Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - STL: A Signed and Truncated Logarithm Activation Function for Neural
Networks [5.9622541907827875]
Activation functions play an essential role in neural networks.
We present a novel signed and truncated logarithm function as activation function.
The suggested activation function can be applied in a large range of neural networks.
arXiv Detail & Related papers (2023-07-31T03:41:14Z) - Emergent Modularity in Pre-trained Transformers [127.08792763817496]
We consider two main characteristics of modularity: functional specialization of neurons and function-based neuron grouping.
We study how modularity emerges during pre-training, and find that the modular structure is stabilized at the early stage.
It suggests that Transformers first construct the modular structure and then learn fine-grained neuron functions.
arXiv Detail & Related papers (2023-05-28T11:02:32Z) - Nish: A Novel Negative Stimulated Hybrid Activation Function [5.482532589225552]
We propose a novel non-monotonic activation function called Negative Stimulated Hybrid Activation Function (Nish)
It behaves like a Rectified Linear Unit (ReLU) function for values greater than zero, and like a sinusoidal-sigmoidal function for values less than zero.
The proposed function incorporates the sigmoid and sine wave, allowing new dynamics over traditional ReLU activations.
arXiv Detail & Related papers (2022-10-17T13:32:52Z) - An Adiabatic Capacitive Artificial Neuron with RRAM-based Threshold
Detection for Energy-Efficient Neuromorphic Computing [62.997667081978825]
We present an artificial neuron featuring adiabatic synapse capacitors to produce membrane potentials for the somas of neurons.
Our initial 4-bit adiabatic capacitive neuron proof-of-concept example shows 90% synaptic energy saving.
arXiv Detail & Related papers (2022-02-02T17:12:22Z) - Growing Cosine Unit: A Novel Oscillatory Activation Function That Can
Speedup Training and Reduce Parameters in Convolutional Neural Networks [0.1529342790344802]
Convolutional neural networks have been successful in solving many socially important and economically significant problems.
A key discovery that made training deep networks feasible was the adoption of the Rectified Linear Unit (ReLU) activation function.
The new activation function C(z) = z cos(z) outperforms Sigmoid, Swish, Mish, and ReLU on a variety of architectures; a short usage sketch follows.
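A hedged sketch of what drop-in use of C(z) = z cos(z) looks like in practice (PyTorch; the layer sizes below are illustrative and not taken from the paper):

```python
import torch
import torch.nn as nn

class GCU(nn.Module):
    """Growing Cosine Unit C(z) = z * cos(z): oscillating and non-saturating."""
    def forward(self, z):
        return z * torch.cos(z)

# A tiny convolutional block where GCU simply replaces ReLU/Swish/Mish.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    GCU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

x = torch.randn(2, 3, 32, 32)   # a CIFAR-10-sized dummy batch
print(model(x).shape)           # torch.Size([2, 10])
```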
arXiv Detail & Related papers (2021-08-30T01:07:05Z) - Evolution of Activation Functions: An Empirical Investigation [0.30458514384586394]
This work presents an evolutionary algorithm to automate the search for completely new activation functions.
We compare these new evolved activation functions to other existing and commonly used activation functions.
arXiv Detail & Related papers (2021-05-30T20:08:20Z) - Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.