Deep neural networks as nested dynamical systems
- URL: http://arxiv.org/abs/2111.01297v1
- Date: Mon, 1 Nov 2021 23:37:54 GMT
- Title: Deep neural networks as nested dynamical systems
- Authors: David I. Spivak, Timothy Hosgood
- Abstract summary: An analogy is often made between deep neural networks and actual brains, suggested by the nomenclature itself.
This article makes the case that the analogy should be different.
Since the "neurons" in deep neural networks are managing the changing weights, they are more akin to the synapses in the brain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is an analogy that is often made between deep neural networks and
actual brains, suggested by the nomenclature itself: the "neurons" in deep
neural networks should correspond to neurons (or nerve cells, to avoid
confusion) in the brain. We claim, however, that this analogy doesn't even type
check: it is structurally flawed. In agreement with the slightly glib summary
of Hebbian learning as "cells that fire together wire together", this article
makes the case that the analogy should be different. Since the "neurons" in
deep neural networks are managing the changing weights, they are more akin to
the synapses in the brain; instead, it is the wires in deep neural networks
that are more like nerve cells, in that they are what cause the information to
flow. An intuition that nerve cells seem like more than mere wires is exactly
right, and is justified by a precise category-theoretic analogy which we will
explore in this article. Throughout, we will continue to highlight the error in
equating artificial neurons with nerve cells by leaving "neuron" in quotes or
by calling them artificial neurons.
We will first explain how to view deep neural networks as nested dynamical
systems with a very restricted sort of interaction pattern, and then explain a
more general sort of interaction for dynamical systems that is useful
throughout engineering, but which fails to adapt to changing circumstances. As
mentioned, an analogy is then forced upon us by the mathematical formalism in
which they are both embedded. We call the resulting encompassing generalization
deeply interacting learning systems: they have complex interaction as in
control theory, but adaptation to circumstances as in deep neural networks.
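As a rough illustration of the nesting described above, the sketch below treats an artificial "neuron" as a small discrete-time dynamical system whose state is its weights, and a layer as a dynamical system built from such units exposing the same interface. This is not the paper's category-theoretic construction; the class names and the learning rule are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not the paper's formalism): a weight-holding unit is a
# dynamical system whose *state* is its weights; information flows along the wires, and
# the state transition is the learning rule. A layer nests several such systems.
from dataclasses import dataclass
from typing import List


@dataclass
class WeightedUnit:
    """A discrete-time dynamical system: state = weights, readout = weighted sum."""
    weights: List[float]
    lr: float = 0.1

    def readout(self, inputs: List[float]) -> float:
        # The unit only combines what arrives along its incoming wires.
        return sum(w * x for w, x in zip(self.weights, inputs))

    def update(self, inputs: List[float], error: float) -> None:
        # The state transition is the (here, gradient-style) learning rule.
        self.weights = [w - self.lr * error * x for w, x in zip(self.weights, inputs)]


@dataclass
class Layer:
    """Nesting: a layer is itself a dynamical system built from unit-level systems."""
    units: List[WeightedUnit]

    def readout(self, inputs: List[float]) -> List[float]:
        return [u.readout(inputs) for u in self.units]

    def update(self, inputs: List[float], errors: List[float]) -> None:
        for u, e in zip(self.units, errors):
            u.update(inputs, e)


# Usage: two nested levels, each exposing the same (readout, update) interface.
layer = Layer(units=[WeightedUnit([0.0, 0.0]), WeightedUnit([0.5, -0.5])])
x = [1.0, 2.0]
y = layer.readout(x)
layer.update(x, errors=[y[0] - 1.0, y[1] - 0.0])
```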
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves for spiking neural networks with nearly zero forgetting.
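A classical member of this family of rules, shown here purely as a generic illustration and not as the method of the paper above, is Oja's subspace rule: the Hebbian outer product strengthens weights for co-active input/output pairs, while the decorrelating term plays the anti-Hebbian role, so the weight rows converge to the principal subspace of the inputs.

```python
# Generic illustration of Hebbian-style principal-subspace extraction (Oja's subspace
# rule), not the specific algorithm of the paper above.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with most variance in the first two coordinates.
X = rng.normal(size=(5000, 4)) * np.array([3.0, 2.0, 0.3, 0.1])

k, d = 2, X.shape[1]
W = rng.normal(scale=0.1, size=(k, d))
eta = 1e-3

for x in X:
    y = W @ x                                          # output activities
    W += eta * (np.outer(y, x) - np.outer(y, y) @ W)   # Hebbian term + decorrelation

# The learned rows should span roughly the same subspace as the top-2 PCA directions.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
overlap = np.linalg.norm(W @ Vt[:2].T) / np.linalg.norm(W)
print(f"fraction of W lying in the top-2 principal subspace: {overlap:.3f}")
```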
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Sparse Quantized Hopfield Network for Online-Continual Memory [0.0]
Nervous systems learn online, where a stream of noisy data points is presented in a non-independent, identically distributed (non-i.i.d.) way.
Deep networks, on the other hand, typically use non-local learning algorithms and are trained in an offline, non-noisy, i.i.d. setting.
We implement this kind of model in a novel neural network called the Sparse Quantized Hopfield Network (SQHN).
arXiv Detail & Related papers (2023-07-27T17:46:17Z)
- A Hybrid Training Algorithm for Continuum Deep Learning Neuro-Skin Neural Network [0.0]
Deep Learning Neuro-Skin Neural Network is a new type of neural network recently presented by the authors.
A neuroskin is modelled using finite elements. Each finite element represents a cell.
It is shown that while the neuroskin initially cannot produce the desired response, it gradually improves to the desired level.
arXiv Detail & Related papers (2023-02-03T15:54:06Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
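To make the term concrete, here is a textbook leaky integrate-and-fire neuron, the basic building block of spiking networks; it is a generic sketch with made-up constants, not the regression framework of the paper above.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: generic textbook model, with
# parameters chosen only for illustration.
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns the membrane trace and spike time indices."""
    v, trace, spikes = v_rest, [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: the membrane decays toward rest and accumulates input.
        v += dt / tau * (v_rest - v) + dt * i_t
        if v >= v_thresh:          # a threshold crossing emits a spike ...
            spikes.append(t)
            v = v_reset            # ... and the membrane potential resets
        trace.append(v)
    return np.array(trace), spikes

# Constant drive produces a regular spike train whose rate encodes the input strength.
trace, spikes = lif_simulate(np.full(1000, 80.0))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```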
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Using noise to probe recurrent neural network structure and prune synapses [8.37609145576126]
Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning.
Noise is ubiquitous in neural systems, and often considered an irritant to be overcome.
Here we suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant.
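A toy way to see the idea, with a deliberately simplified stand-in for the paper's mechanism: drive a single linear unit with random (noisy) probe inputs and rank each synapse by how little silencing it changes the responses; low-impact synapses are candidates for pruning. The numbers and the ranking criterion here are assumptions for illustration only.

```python
# Toy illustration (not the paper's algorithm) of noise probing for redundant synapses.
import numpy as np

rng = np.random.default_rng(1)
w = np.array([1.5, -0.02, 0.8, 0.01, -1.1])   # synaptic weights; two are nearly redundant

probes = rng.normal(size=(2000, w.size))       # noisy probe inputs
baseline = probes @ w

impact = []
for j in range(w.size):
    w_pruned = w.copy()
    w_pruned[j] = 0.0
    # Mean squared change of the response when synapse j is silenced.
    impact.append(np.mean((probes @ w_pruned - baseline) ** 2))

order = np.argsort(impact)
print("synapses ranked from most to least prunable:", order)
```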
arXiv Detail & Related papers (2020-11-14T16:51:05Z)
- Continual Learning with Deep Artificial Neurons [0.0]
We introduce Deep Artificial Neurons (DANs), which are themselves realized as deep neural networks.
We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, shared by all DANs in the network.
We show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting.
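A minimal sketch of that idea, with shapes and names chosen for illustration rather than taken from the paper: every neuron applies the same small MLP (the shared "phenotype"), while its synaptic weights stay local to the neuron.

```python
# Sketch of deep-artificial-neuron style sharing: one tiny MLP (the "phenotype") serves
# as the nonlinearity of every neuron; shapes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Shared phenotype: a 1-hidden-layer MLP applied by every neuron.
phenotype = {
    "W1": rng.normal(scale=0.5, size=(1, 8)),
    "b1": np.zeros(8),
    "W2": rng.normal(scale=0.5, size=(8, 1)),
    "b2": np.zeros(1),
}

def dan_activation(pre, p):
    """Apply the shared phenotype MLP element-wise to a vector of pre-activations."""
    h = np.tanh(pre[:, None] @ p["W1"] + p["b1"])   # (n_neurons, 8)
    return (h @ p["W2"] + p["b2"]).ravel()          # (n_neurons,)

# Per-neuron synaptic weights are local; the nonlinearity is the shared phenotype.
W_layer = rng.normal(scale=0.3, size=(4, 3))        # 4 neurons, 3 inputs each
x = np.array([0.2, -1.0, 0.5])
print(dan_activation(W_layer @ x, phenotype))
```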
arXiv Detail & Related papers (2020-11-13T17:50:10Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
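One simple way to give a network this kind of variability, sketched here under the assumption that PyTorch is available and without claiming it matches the paper's exact ANV formulation, is to add Gaussian noise to hidden activations at training time only.

```python
# Generic activation-noise sketch (an assumption-level stand-in for "neural variability",
# not the paper's exact ANV method).
import torch
import torch.nn as nn

class NoisyReLU(nn.Module):
    """ReLU whose output is perturbed by Gaussian noise at training time only."""
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(x)
        if self.training and self.sigma > 0:
            x = x + self.sigma * torch.randn_like(x)   # variability acts like a regularizer
        return x

model = nn.Sequential(nn.Linear(20, 64), NoisyReLU(0.1), nn.Linear(64, 2))
```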
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Training of Deep Learning Neuro-Skin Neural Network [0.0]
Deep Learning Neuro-Skin Neural Network is a new type of neural network recently presented by the authors.
A neuroskin is modelled using finite elements. Each finite element represents a cell.
It is shown that while the neuroskin initially cannot produce the desired response, it gradually improves to the desired level.
arXiv Detail & Related papers (2020-07-03T18:51:45Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
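The single-neuron XOR claim is easy to check with a toy non-monotonic activation; the Gaussian bump below is only a stand-in, not the ADA function defined in the paper.

```python
# Verifying that one linear neuron followed by a non-monotonic activation can compute
# XOR; the "bump" activation is an illustrative substitute for the paper's ADA.
import numpy as np

def bump(z, center=1.0, width=0.25):
    """Non-monotonic activation: high near `center`, low elsewhere."""
    return np.exp(-((z - center) ** 2) / width)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w, b = np.array([1.0, 1.0]), 0.0          # a single linear neuron: z = w.x + b
y = bump(X @ w + b) > 0.5                 # fires only when exactly one input is on
print(y.astype(int))                      # -> [0 1 1 0], i.e. XOR
```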
arXiv Detail & Related papers (2020-02-02T21:09:39Z)