Synaptic metaplasticity in binarized neural networks
- URL: http://arxiv.org/abs/2101.07592v1
- Date: Tue, 19 Jan 2021 12:32:07 GMT
- Title: Synaptic metaplasticity in binarized neural networks
- Authors: Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin and Damien Querlioz
- Abstract summary: Neuroscience suggests that biological synapses avoid catastrophic forgetting through the processes of synaptic consolidation and metaplasticity.
In this work, we show that this concept of metaplasticity can be transferred to a particular type of deep neural networks, binarized neural networks, to reduce catastrophic forgetting.
- Score: 4.243926243206826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unlike the brain, artificial neural networks, including state-of-the-art deep
neural networks for computer vision, are subject to "catastrophic forgetting":
they rapidly forget the previous task when trained on a new one. Neuroscience
suggests that biological synapses avoid this issue through the process of
synaptic consolidation and metaplasticity: the plasticity itself changes upon
repeated synaptic events. In this work, we show that this concept of
metaplasticity can be transferred to a particular type of deep neural networks,
binarized neural networks, to reduce catastrophic forgetting.
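The abstract's mechanism can be illustrated with a small sketch. In a binarized network, each binary weight is the sign of a real-valued hidden weight; the metaplastic idea is to attenuate updates that push a hidden weight back toward zero (i.e. toward a sign flip) by a factor that decays as the hidden weight grows in magnitude, so frequently reinforced weights become harder to flip. The function `f_meta` below follows the general tanh-based form described in the paper, but the function name, the parameter `m`, and the learning rate are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def metaplastic_update(hidden_w, grad, lr=0.01, m=1.3):
    """Metaplastic update for a binarized layer (illustrative sketch).

    The binary weight used at inference is sign(hidden_w). An update
    that moves a hidden weight toward zero (toward a sign flip) is
    scaled by f_meta, which decays with |hidden_w|, so weights that
    have grown large become progressively harder to flip.
    """
    update = -lr * grad
    # An update "weakens" a weight when it opposes the hidden weight's sign.
    weakening = np.sign(update) != np.sign(hidden_w)
    f_meta = 1.0 - np.tanh(m * np.abs(hidden_w)) ** 2  # -> 0 as |w| grows
    scale = np.where(weakening, f_meta, 1.0)
    return hidden_w + scale * update

# A weight near zero moves almost freely; a consolidated weight barely moves.
w = np.array([0.05, -0.80, 1.50])
g = np.array([0.50, -0.50, 0.50])   # all gradients push the weights toward zero
w_new = metaplastic_update(w, g)
```

Note that consolidation here is one-sided: updates that strengthen a weight (increase its magnitude) pass through unscaled, so often-reinforced synapses stabilize while rarely used ones stay plastic, mirroring the consolidation behavior the abstract attributes to biological synapses.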
Related papers
- Evolving Self-Assembling Neural Networks: From Spontaneous Activity to Experience-Dependent Learning [7.479827648985631]
We propose a class of self-organizing neural networks capable of synaptic and structural plasticity in an activity and reward-dependent manner.
Our results demonstrate the ability of the model to learn from experiences in different control tasks starting from randomly connected or empty networks.
arXiv Detail & Related papers (2024-06-14T07:36:21Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Connected Hidden Neurons (CHNNet): An Artificial Neural Network for Rapid Convergence [0.6218519716921521]
We propose a more robust model of artificial neural networks in which the hidden neurons residing in the same hidden layer are interconnected, which leads to rapid convergence.
With the experimental study of our proposed model in deep networks, we demonstrate that the model results in a noticeable increase in convergence rate compared to the conventional feed-forward neural network.
arXiv Detail & Related papers (2023-05-17T14:00:38Z) - Emergent Computations in Trained Artificial Neural Networks and Real Brains [0.0]
How do cortical circuits use plasticity to acquire functions such as decision-making or working memory?
Here we describe how to train recurrent neural networks in tasks like those used to train animals in neuroscience laboratories.
Surprisingly, artificial networks and real brains can use similar computational strategies.
arXiv Detail & Related papers (2022-12-09T15:46:10Z) - Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z) - Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, and that this information is the key factor responsible for a neural network's erroneous predictions.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z) - Condition Integration Memory Network: An Interpretation of the Meaning of the Neuronal Design [10.421465303670638]
This document introduces a hypothetical framework for the functional nature of primitive neural networks.
It analyzes the idea that the activity of neurons and synapses can symbolically reenact dynamic changes in the world, without participating in an algorithmic structure.
arXiv Detail & Related papers (2021-05-21T05:59:27Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Synaptic Metaplasticity in Binarized Neural Networks [4.243926243206826]
Deep neural networks are prone to catastrophic forgetting upon training a new task.
We propose and demonstrate experimentally, in situations of multitask and stream learning, a training technique that reduces catastrophic forgetting without needing previously presented data.
This work bridges computational neuroscience and deep learning, and presents significant assets for future embedded and neuromorphic systems.
arXiv Detail & Related papers (2020-03-07T08:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.