Continual Learning with Deep Artificial Neurons
- URL: http://arxiv.org/abs/2011.07035v1
- Date: Fri, 13 Nov 2020 17:50:10 GMT
- Title: Continual Learning with Deep Artificial Neurons
- Authors: Blake Camp, Jaya Krishna Mandivarapu, Rolando Estrada
- Abstract summary: We introduce Deep Artificial Neurons (DANs), which are themselves realized as deep neural networks.
We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, shared by all DANs in the network.
We show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Neurons in real brains are enormously complex computational units. Among
other things, they are responsible for transforming inbound electro-chemical
vectors into outbound action potentials, updating the strengths of intermediate
synapses, regulating their own internal states, and modulating the behavior of
other nearby neurons. One could argue that these cells are the only things
exhibiting any semblance of real intelligence. It is odd, therefore, that the
machine learning community has, for so long, relied upon the assumption that
this complexity can be reduced to a simple sum-and-fire operation. We ask:
might there be some benefit to substantially increasing the computational power
of individual neurons in artificial systems? To answer this question, we
introduce Deep Artificial Neurons (DANs), which are themselves realized as deep
neural networks. Conceptually, we embed DANs inside each node of a traditional
neural network, and we connect these neurons at multiple synaptic sites,
thereby vectorizing the connections between pairs of cells. We demonstrate that
it is possible to meta-learn a single parameter vector, which we dub a neuronal
phenotype, shared by all DANs in the network, which facilitates a
meta-objective during deployment. Here, we isolate continual learning as our
meta-objective, and we show that a suitable neuronal phenotype can endow a
single network with an innate ability to update its synapses with minimal
forgetting, using standard backpropagation, without experience replay or
separate wake/sleep phases. We demonstrate this ability on sequential
non-linear regression tasks.
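The architecture in the abstract is concrete enough to sketch. Below is a minimal, hedged PyTorch rendering of the core idea: a small shared-parameter network (the phenotype) inside each node, with vector-valued synapses joining pairs of cells at multiple sites. All sizes, the einsum mixing rule, and the scalar readout are illustrative assumptions; the abstract does not specify the exact wiring.

```python
# A minimal sketch of the DAN idea as the abstract describes it: every node is
# itself a small deep network, all nodes share one "phenotype" parameter
# vector, and cells are joined at multiple synaptic sites by vector-valued
# connections. Sizes, the mixing rule, and the scalar readout are assumptions
# for illustration, not the authors' reported configuration.
import torch
import torch.nn as nn

SYNAPSE_DIM = 4  # assumed width of each vectorized connection


class Phenotype(nn.Module):
    """The single small MLP whose parameters are shared by every DAN."""

    def __init__(self, dim=SYNAPSE_DIM, hidden=16):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))

    def forward(self, x):
        return self.f(x)


class DANLayer(nn.Module):
    """A layer of DANs: per-edge synapse vectors feed the shared phenotype."""

    def __init__(self, n_in, n_out, phenotype):
        super().__init__()
        # One learnable vector per (post, pre) pair: the vectorized synapses.
        self.synapses = nn.Parameter(0.1 * torch.randn(n_out, n_in, SYNAPSE_DIM))
        self.phenotype = phenotype  # shared module, not a copy

    def forward(self, x):
        # x: (batch, n_in). Gate each synapse vector by its presynaptic
        # activity, sum over presynaptic cells, then run the shared DAN.
        pre = torch.einsum("bi,oid->bod", x, self.synapses)
        return self.phenotype(pre).mean(dim=-1)  # collapse to scalar activity


phenotype = Phenotype()
net = nn.Sequential(DANLayer(1, 32, phenotype), DANLayer(32, 1, phenotype))
```

At deployment, the abstract's recipe is to hold the phenotype fixed and let standard backpropagation touch only the synapse vectors; the meta-training sketch that follows shows one hedged way to produce such a phenotype.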
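The abstract also describes a two-phase procedure: meta-learn the phenotype across sequences of tasks, then deploy with the phenotype frozen and only the synapses updating. The abstract does not name the meta-learning algorithm, so the loop below is a generic first-order scheme built on the sketch above, and the sample_tasks() callable is a hypothetical stand-in for the sequential non-linear regression tasks.

```python
# Sketch of the meta-train / deploy split described in the abstract, built on
# the DANLayer/Phenotype sketch above. The paper's actual meta-learning
# algorithm is not given in the abstract; this first-order loop and the
# hypothetical sample_tasks() callable are stand-ins for illustration.
import torch
import torch.nn.functional as F


def meta_train(net, phenotype, sample_tasks, meta_steps=1000, inner_lr=1e-2):
    meta_opt = torch.optim.Adam(phenotype.parameters(), lr=1e-3)
    synapses = [layer.synapses for layer in net]
    for _ in range(meta_steps):
        with torch.no_grad():  # fresh fast weights per simulated deployment
            for s in synapses:
                s.normal_(0.0, 0.1)
        tasks = sample_tasks()  # a list of (x, y) sequential regression tasks
        inner_opt = torch.optim.SGD(synapses, lr=inner_lr)
        for x, y in tasks:  # sequential exposure: no replay, no sleep phase
            inner_opt.zero_grad()
            F.mse_loss(net(x), y).backward()
            inner_opt.step()
        # Meta-objective: after the whole sequence, loss on *all* tasks, so
        # the phenotype is shaped to make synaptic updates forget minimally.
        meta_opt.zero_grad()
        sum(F.mse_loss(net(x), y) for x, y in tasks).backward()
        meta_opt.step()


# Deployment: freeze the shared phenotype; continual learning is then plain
# backprop on the synapse vectors.
for p in phenotype.parameters():
    p.requires_grad_(False)
```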
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low-conductance regime, which enables stable and energy-efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning for spiking neural networks with nearly zero forgetting. (A toy sketch of this Hebbian/anti-Hebbian subspace principle appears after this list.)
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning
We focus on tasks where the agent must learn multi-dimensional deterministic policies to control continuous action spaces.
Most existing spike-based RL methods take the firing rate as the output of the SNN and convert it into a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Learning to Act through Evolution of Neural Diversity in Random Neural Networks
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z)
- Spiking neural network for nonlinear regression
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Deep neural networks as nested dynamical systems
An analogy is often made between deep neural networks and actual brains, suggested by the nomenclature itself.
This article makes the case that the analogy should be different.
Since the "neurons" in deep neural networks are managing the changing weights, they are more akin to the synapses in the brain.
arXiv Detail & Related papers (2021-11-01T23:37:54Z)
- IC Neuron: An Efficient Unit to Construct Neural Networks
We propose a new neuron model that can represent more complex distributions.
The Inter-layer collision (IC) neuron divides the input space into multiple subspaces used to represent different linear transformations.
We build IC networks by integrating IC neurons into fully-connected (FC), convolutional, and recurrent structures.
arXiv Detail & Related papers (2020-11-23T08:36:48Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
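Referenced from the Hebbian orthogonal-projection entry above: the claim that Hebbian feedforward updates combined with anti-Hebbian lateral (decorrelating) connections extract a principal subspace is a classic result, and a toy numpy version of that general principle is sketched below. It illustrates the principle only; it is not the cited paper's spiking method, and the dimensions and learning rates are arbitrary choices.

```python
# Toy Foldiak-style network: Hebbian feedforward weights plus anti-Hebbian
# lateral inhibition, which together extract a principal subspace of the
# inputs. Illustrates the principle named in the entry above, not the cited
# paper's SNN algorithm.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 8)) @ rng.standard_normal((8, 8))  # correlated inputs
X -= X.mean(axis=0)

k, lr = 3, 1e-3
W = 0.1 * rng.standard_normal((k, 8))  # feedforward (Hebbian) weights
L = np.zeros((k, k))                   # lateral (anti-Hebbian) weights

for x in X:
    c = W @ x
    y = c - L @ c                                    # lateral inhibition decorrelates outputs
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)  # Oja-style Hebbian update
    L += lr * np.outer(y, y)                         # anti-Hebbian growth
    np.fill_diagonal(L, 0.0)                         # no self-inhibition

# Rows of W now roughly span the top-k principal subspace of X; the lateral
# updates vanish on average once the outputs become decorrelated.
```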