Integration of Leaky-Integrate-and-Fire-Neurons in Deep Learning Architectures
- URL: http://arxiv.org/abs/2004.13532v2
- Date: Fri, 11 Sep 2020 13:27:06 GMT
- Title: Integration of Leaky-Integrate-and-Fire-Neurons in Deep Learning Architectures
- Authors: Richard C. Gerum, Achim Schilling
- Abstract summary: We show that biologically inspired neuron models provide novel and efficient ways of information encoding.
We derived simple update rules for the LIF units from the underlying differential equations, which are easy to integrate numerically.
We apply our method to the IRIS blossoms image data set and show that the training technique can be used to train LIF neurons on image classification tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Up to now, modern Machine Learning is mainly based on fitting high
dimensional functions to enormous data sets, taking advantage of huge hardware
resources. We show that biologically inspired neuron models such as the
Leaky-Integrate-and-Fire (LIF) neurons provide novel and efficient ways of
information encoding. They can be integrated into Machine Learning models and
are a promising avenue for improving Machine Learning performance.
Thus, we derived simple update rules for the LIF units from the underlying
differential equations, which are easy to integrate numerically. We apply a
novel approach to train the LIF units in a supervised fashion via
backpropagation, by assigning a constant value to the derivative of the neuron
activation function exclusively for the backpropagation step. This simple
mathematical trick helps to distribute the error among the neurons of the
preceding layer. We apply our method to the IRIS blossoms image data set and
show that the training technique can be used to train LIF neurons on image
classification tasks. Furthermore, we show how to integrate our method into
the Keras (TensorFlow) framework and run it efficiently on GPUs. To generate a
deeper understanding of the mechanisms at work during training, we developed
interactive illustrations, which we provide online.
With this study we want to contribute to the current efforts to enhance
Machine Intelligence by integrating principles from biology.
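As a rough illustration of the update rule described in the abstract, the sketch below implements a single forward-Euler step of the standard LIF equation tau * dV/dt = -(V - V_rest) + I, followed by threshold-and-reset. All parameter names and values (tau, v_thresh, v_reset, dt, the input current) are illustrative placeholders, not the constants used in the paper.

```python
import numpy as np

# Minimal sketch (not the paper's reference implementation): one forward-Euler
# step of the LIF equation  tau * dV/dt = -(V - v_rest) + I.
# All parameter values here are illustrative placeholders.
def lif_step(v, input_current, tau=20.0, v_rest=0.0, v_thresh=1.0,
             v_reset=0.0, dt=1.0):
    v = v + (dt / tau) * (-(v - v_rest) + input_current)   # leaky integration
    spikes = (v >= v_thresh).astype(v.dtype)                # threshold crossing -> spike
    v = np.where(spikes > 0, v_reset, v)                    # reset spiking units
    return v, spikes

# Example: a constant supra-threshold input drives regular spiking.
v = np.zeros(1)
for t in range(100):
    v, s = lif_step(v, input_current=1.2)
    if s[0]:
        print(f"spike at step {t}")
```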
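The constant-derivative trick for backpropagation can be reproduced in TensorFlow with a custom gradient. The snippet below is only a sketch of that idea under stated assumptions: the constant value 1.0, the threshold, and the toy layer and model wiring are illustrative choices, not taken from the paper.

```python
import tensorflow as tf

@tf.custom_gradient
def spike_with_constant_grad(v):
    """Hard threshold in the forward pass; constant surrogate derivative in backprop."""
    spikes = tf.cast(v >= 1.0, tf.float32)      # non-differentiable spike nonlinearity
    def grad(upstream):
        return upstream * 1.0                   # pretend d(spike)/dv == 1 (illustrative constant)
    return spikes, grad

class ToySpikingLayer(tf.keras.layers.Layer):
    """Illustrative stateless layer: dense projection followed by the thresholded 'spike'."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(units)

    def call(self, x):
        return spike_with_constant_grad(self.dense(x))

# Usage sketch: a small classifier that trains end-to-end with the surrogate gradient.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                 # 4 input features, chosen arbitrarily
    ToySpikingLayer(16),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

In effect the constant acts like a straight-through estimator: the forward pass keeps the binary spike behaviour, while the backward pass still propagates a usable error signal to the preceding layer.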
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Learning-to-learn enables rapid learning with phase-change memory-based in-memory computing [38.34244217803562]
There is a growing demand for low-power, autonomously learning artificial intelligence (AI) systems that can be applied at the edge and rapidly adapt to the specific situation at the deployment site.
In this work, we pair L2L with in-memory computing neuromorphic hardware to build efficient AI models that can rapidly adapt to new tasks.
We demonstrate the versatility of our approach in two scenarios: a convolutional neural network performing image classification and a biologically-inspired spiking neural network generating motor commands for a real robotic arm.
arXiv Detail & Related papers (2024-04-22T15:03:46Z) - Demolition and Reinforcement of Memories in Spin-Glass-like Neural Networks [0.0]
The aim of this thesis is to understand the effectiveness of Unlearning in both associative memory models and generative models.
The selection of structured data enables an associative memory model to retrieve concepts as attractors of a neural dynamics with considerable basins of attraction.
A novel regularization technique for Boltzmann Machines is presented, proving to outperform previously developed methods in learning hidden probability distributions from data-sets.
arXiv Detail & Related papers (2024-03-04T23:12:42Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Gradient Descent in Materio [3.756477173839499]
We show an efficient and accurate homodyne gradient extraction method for performing gradient descent on the loss function directly in the material system.
This shows that gradient descent can in principle be fully implemented in materio using simple electronics.
arXiv Detail & Related papers (2021-05-15T12:18:31Z) - AutoInt: Automatic Integration for Fast Neural Volume Rendering [51.46232518888791]
We propose a new framework for learning efficient, closed-form solutions to integrals using implicit neural representation networks.
We demonstrate a greater than 10x improvement in computational requirements, enabling fast neural volume rendering.
arXiv Detail & Related papers (2020-12-03T05:46:10Z) - Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z) - Reward Propagation Using Graph Convolutional Networks [61.32891095232801]
We propose a new framework for learning potential functions by leveraging ideas from graph representation learning.
Our approach relies on Graph Convolutional Networks which we use as a key ingredient in combination with the probabilistic inference view of reinforcement learning.
arXiv Detail & Related papers (2020-10-06T04:38:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.