Training of Deep Learning Neuro-Skin Neural Network
- URL: http://arxiv.org/abs/2007.04796v1
- Date: Fri, 3 Jul 2020 18:51:45 GMT
- Title: Training of Deep Learning Neuro-Skin Neural Network
- Authors: Mehrdad Shafiei Dizaji
- Abstract summary: The Deep Learning Neuro-Skin Neural Network is a new type of neural network recently presented by the authors.
A neuroskin is modelled using finite elements; each element of the finite element mesh represents a cell.
It is shown that while the neuroskin initially cannot produce the desired response, it improves gradually to the desired level.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this brief paper, a learning algorithm is developed for the Deep Learning Neuro-Skin Neural Network to improve its learning properties. The neuroskin is a new type of neural network recently presented by the authors. It consists of a cellular membrane with a neuron attached to each cell; the neuron acts as the cell's nucleus. A neuroskin is modelled using finite elements, and each element of the finite element mesh represents a cell. Each cell's neuron has dendritic fibers that connect it to the nodes of its cell, while its axon connects to the nodes of a number of different neurons. The neuroskin is trained to contract upon receiving an input. Learning takes place over updating iterations using sensitivity analysis. It is shown that although the neuroskin initially cannot produce the desired response, it improves gradually to the desired level.
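The abstract does not give the update equations, so the sketch below is a hypothetical illustration of the kind of sensitivity-driven iteration it describes: a small membrane of cells whose contraction response, modelled here as a tanh map through a coupling matrix W, is nudged toward a target level using finite-difference sensitivities of the squared error. The response model, the coupling matrix, and all parameter names are assumptions made for illustration, not the authors' formulation.

```python
import numpy as np

# A minimal sketch, not the authors' algorithm: a 1-D "membrane" of cells whose
# contraction response is a hypothetical tanh map of nodal inputs through a
# coupling matrix W. W is nudged toward a target contraction level using
# finite-difference sensitivities of the squared error, mirroring the
# sensitivity-analysis-driven updating iterations described in the abstract.

rng = np.random.default_rng(0)
n_cells = 8
W = rng.normal(scale=0.1, size=(n_cells, n_cells))  # hypothetical couplings

def contraction(W, u):
    """Cell contractions in response to nodal excitation u (placeholder model)."""
    return np.tanh(W @ u)

u = rng.normal(size=n_cells)       # input excitation at the nodes
target = 0.5 * np.ones(n_cells)    # desired contraction level

eta, eps = 0.5, 1e-6
for _ in range(200):               # updating iterations
    base_err = contraction(W, u) - target
    base_loss = 0.5 * float(base_err @ base_err)
    grad = np.zeros_like(W)
    for i in range(n_cells):       # finite-difference sensitivity of the loss
        for j in range(n_cells):
            Wp = W.copy()
            Wp[i, j] += eps
            err = contraction(Wp, u) - target
            grad[i, j] = (0.5 * float(err @ err) - base_loss) / eps
    W -= eta * grad

final_err = contraction(W, u) - target
print("final squared error:", float(final_err @ final_err))
```

In a real finite element setting the O(n²) finite-difference loop would be replaced by an analytical or adjoint sensitivity computation, but the iterate-measure-update structure would be the same.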
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
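To make the principal-subspace claim above concrete, here is a minimal sketch using Oja's subspace rule, a classical Hebbian-plus-anti-Hebbian-style update; it stands in for, and is not identical to, the paper's recurrent lateral-connection mechanism for spiking networks.

```python
import numpy as np

# Sketch of the core claim only: a Hebbian term (y x^T) balanced by an
# anti-Hebbian-style decay (y y^T W) converges to the principal subspace of
# the input statistics. Oja's subspace rule is used as a stand-in; the paper's
# recurrent lateral-connection circuit for spiking networks is not reproduced.

rng = np.random.default_rng(1)
d, k, eta = 10, 3, 0.005

# Build a correlated input distribution whose top-k subspace we want
A = rng.normal(size=(d, d))
C = A @ A.T / d                          # input covariance
L = np.linalg.cholesky(C + 1e-9 * np.eye(d))

W = rng.normal(scale=0.1, size=(k, d))   # k output neurons
for _ in range(20000):
    x = L @ rng.normal(size=d)           # sample with covariance C
    y = W @ x
    W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Rows of W should now span the top-k eigenvector subspace of C
V = np.linalg.eigh(C)[1][:, -k:]
alignment = np.linalg.norm(W @ (V @ V.T)) / np.linalg.norm(W)
print("subspace alignment (1.0 = perfect):", round(float(alignment), 3))
```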
- Towards a Foundation Model for Brain Age Prediction using coVariance Neural Networks [102.75954614946258]
Increasing brain age with respect to chronological age can reflect increased vulnerability to neurodegeneration and cognitive decline.
NeuroVNN is pre-trained as a regression model on a healthy population to predict chronological age.
NeuroVNN adds anatomical interpretability to brain age and has a 'scale-free' characteristic that allows its transfer to datasets curated according to any arbitrary brain atlas.
arXiv Detail & Related papers (2024-02-12T14:46:31Z)
- Joint Learning Neuronal Skeleton and Brain Circuit Topology with Permutation Invariant Encoders for Neuron Classification [33.47541392305739]
We propose the NeuNet framework, which combines morphological information of neurons obtained from skeletons with topological information between neurons obtained from neural circuits.
We reprocess and release two new datasets for the neuron classification task from volume electron microscopy (VEM) images of the human brain cortex and the Drosophila brain.
arXiv Detail & Related papers (2023-12-22T08:31:11Z)
- A Hybrid Training Algorithm for Continuum Deep Learning Neuro-Skin Neural Network [0.0]
The Deep Learning NeuroSkin Neural Network is a new type of neural network recently presented by the authors.
A neuroskin is modelled using finite elements; each element of the finite element mesh represents a cell.
It is shown that while the neuroskin initially cannot produce the desired response, it improves gradually to the desired level.
arXiv Detail & Related papers (2023-02-03T15:54:06Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
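The regression framework itself is not detailed in the summary above, so the sketch below only illustrates the generic leaky integrate-and-fire (LIF) dynamics behind the claimed temporal and neuronal sparsity: most time steps emit no spike, and the spike rate encodes a continuous value. All constants are hypothetical.

```python
import numpy as np

# Generic leaky integrate-and-fire (LIF) dynamics with hypothetical constants;
# the paper's actual regression framework is more involved. The point shown:
# most time steps emit no spike (temporal sparsity), and the spike rate codes
# a continuous input value, which is what makes regression possible.

def lif_spikes(current, steps=100, tau=20.0, v_th=1.0):
    """Simulate one LIF neuron with constant input; return its spike train."""
    v = 0.0
    spikes = np.zeros(steps, dtype=int)
    for t in range(steps):
        v += (-v + current) / tau    # leaky integration (explicit Euler step)
        if v >= v_th:                # threshold crossing: spike, then reset
            spikes[t] = 1
            v = 0.0
    return spikes

for current in (0.5, 1.5, 3.0):
    s = lif_spikes(current)
    print(f"input {current}: {int(s.sum())} spikes in 100 steps")
```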
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
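As a rough illustration of an integer quadratic integrate-and-fire (QIF) neuron of the kind the processor above implements, the sketch below uses pure integer arithmetic with a right-shift as a hypothetical fixed-point scaling; the actual hardware update rule and parameters are not specified in the summary.

```python
# Pure-integer sketch of a quadratic integrate-and-fire (QIF) neuron, in the
# spirit of the processor's integer QIF units. The right-shift stands in for a
# fixed-point scaling of the v^2 term; the threshold, reset, and shift values
# are hypothetical, not taken from the paper.

V_TH, V_RESET, SHIFT = 256, 0, 6

def int_qif_step(v, i_in):
    """One integer QIF update: v grows by a scaled v^2 term plus the input."""
    v = v + (v * v >> SHIFT) + i_in
    if v >= V_TH:          # threshold crossing: emit a spike and reset
        return V_RESET, 1
    return v, 0

v, n_spikes = 0, 0
for _ in range(100):
    v, s = int_qif_step(v, i_in=8)
    n_spikes += s
print("spikes in 100 steps:", n_spikes)
```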
- Deep neural networks as nested dynamical systems [0.0]
An analogy is often made between deep neural networks and actual brains, suggested by the nomenclature itself.
This article makes the case that the analogy should be different.
Since the "neurons" in deep neural networks are managing the changing weights, they are more akin to the synapses in the brain.
arXiv Detail & Related papers (2021-11-01T23:37:54Z)
- SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks [13.60023740064471]
SeReNe is a method for learning sparse topologies with a structure.
We define the sensitivity of a neuron as the variation of the network output with respect to variations of the neuron's output.
By including the neuron sensitivity in the cost function as a regularization term, we are able to prune neurons with low sensitivity.
arXiv Detail & Related papers (2021-02-07T10:53:30Z)
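A minimal sketch of the sensitivity idea follows. For a one-hidden-layer network with a linear read-out, the variation of the output with respect to hidden neuron j is simply the read-out weight w2[j], which makes the regularization term and the pruning rule easy to write down; SeReNe's full training procedure is not reproduced here, and all names are illustrative.

```python
import numpy as np

# Illustrative sketch only. For a one-hidden-layer net with a linear read-out
# w2, the variation of the output w.r.t. hidden neuron j is exactly w2[j], so
# neuron sensitivity, the regularization term, and the pruning rule are all
# one-liners. SeReNe's actual training loop with the regularized cost is not
# reproduced; every name here is illustrative.

rng = np.random.default_rng(2)
d, hdim = 4, 6
W1 = rng.normal(size=(hdim, d))
w2 = rng.normal(size=hdim)

x = rng.normal(size=d)
h = np.tanh(W1 @ x)                 # hidden neuron outputs
out = w2 @ h                        # scalar network output

sensitivity = np.abs(w2)            # |d out / d h_j| for each hidden neuron
lam = 0.1
reg_term = lam * sensitivity.sum()  # added to the task loss during training

keep = sensitivity > np.median(sensitivity)   # prune low-sensitivity neurons
print("kept neurons:", np.flatnonzero(keep).tolist())
print("regularization term:", round(float(reg_term), 3))
```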
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
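The exact ADA formula is not reproduced here; the sketch below only demonstrates the underlying point that a single neuron with a non-monotonic activation (a Gaussian bump is used as a stand-in) can realize XOR, which no single neuron with a monotonic activation can.

```python
import numpy as np

# The exact ADA formula is not reproduced here. This only demonstrates the
# underlying point: one neuron with a *non-monotonic* activation can realize
# XOR. A Gaussian bump stands in for the apical dendrite activation.

def bump(z):
    """Non-monotonic activation: peaks at z = 0, decays on both sides."""
    return np.exp(-z * z)

w = np.array([1.0, 1.0])   # hand-picked weights: z = x1 + x2 - 1
b = -1.0
for x1, x2 in ((0, 0), (0, 1), (1, 0), (1, 1)):
    z = w @ np.array([x1, x2], dtype=float) + b
    y = int(bump(z) > 0.5)  # bump(z) is high only when exactly one input is 1
    print(f"XOR({x1}, {x2}) = {y}")
```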
- Learning Neural Activations [2.842794675894731]
We explore what happens when the activation function of each neuron in an artificial neural network is learned from data alone.
This is achieved by modelling the activation function of each neuron as a small neural network whose weights are shared by all neurons in the original network.
arXiv Detail & Related papers (2019-12-27T15:52:07Z)
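A minimal sketch of that idea, assuming nothing beyond the summary above: the fixed nonlinearity of a layer is replaced by a tiny one-input, one-output subnetwork whose parameters are shared by every neuron. Only the forward structure is shown; joint training of the activation parameters with the main network is omitted.

```python
import numpy as np

# Forward structure only: the fixed nonlinearity of a layer is replaced by a
# tiny 1-input/1-output subnetwork (scalar -> 5 hidden tanh units -> scalar)
# whose parameters (a1, b1, a2, b2) are shared by every neuron in the layer.
# Joint training of these parameters with the main network is omitted.

rng = np.random.default_rng(3)

a1, b1 = rng.normal(size=5), rng.normal(size=5)   # shared activation weights
a2, b2 = rng.normal(size=5), 0.0

def learned_act(z):
    """Apply the shared mini-network element-wise to the pre-activations z."""
    return np.tanh(np.multiply.outer(z, a1) + b1) @ a2 + b2

W = rng.normal(size=(6, 4))    # an ordinary dense layer
x = rng.normal(size=4)
print("layer output:", learned_act(W @ x))
```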