Bayesian Continual Learning via Spiking Neural Networks
- URL: http://arxiv.org/abs/2208.13723v1
- Date: Mon, 29 Aug 2022 17:11:14 GMT
- Title: Bayesian Continual Learning via Spiking Neural Networks
- Authors: Nicolas Skatchkovsky, Hyeryung Jang, Osvaldo Simeone
- Abstract summary: We take steps towards the design of neuromorphic systems that are capable of adaptation to changing learning tasks.
We derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework.
We instantiate the proposed approach for both real-valued and binary synaptic weights.
- Score: 38.518936229794214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Among the main features of biological intelligence are energy efficiency,
capacity for continual adaptation, and risk management via uncertainty
quantification. Neuromorphic engineering has been thus far mostly driven by the
goal of implementing energy-efficient machines that take inspiration from the
time-based computing paradigm of biological brains. In this paper, we take
steps towards the design of neuromorphic systems that are capable of adaptation
to changing learning tasks, while producing well-calibrated uncertainty
quantification estimates. To this end, we derive online learning rules for
spiking neural networks (SNNs) within a Bayesian continual learning framework.
In it, each synaptic weight is represented by parameters that quantify the
current epistemic uncertainty resulting from prior knowledge and observed data.
The proposed online rules update the distribution parameters in a streaming
fashion as data are observed. We instantiate the proposed approach for both
real-valued and binary synaptic weights. Experimental results using Intel's
Lava platform show the merits of Bayesian over frequentist learning in terms of
capacity for adaptation and uncertainty quantification.
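As a rough illustration of the streaming update the abstract describes (a minimal sketch assuming a mean-field Gaussian posterior and a logistic-likelihood surrogate; the class name, hyperparameters, and update form are illustrative, not the paper's derived rules):

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.log1p(np.exp(x))

class GaussianSynapses:
    """Each weight carries a mean and scale: w ~ N(mu, softplus(rho)^2)."""

    def __init__(self, dim, lr=0.05, kl_scale=0.01):
        self.mu = np.zeros(dim)
        self.rho = np.full(dim, -2.0)     # sigma = softplus(rho)
        self.prior_mu = self.mu.copy()    # previous posterior = next prior
        self.lr, self.kl_scale = lr, kl_scale

    def step(self, x, y):
        """One streaming update from a single (x, y) observation."""
        sigma = softplus(self.rho)
        eps = rng.standard_normal(self.mu.shape)
        w = self.mu + sigma * eps                 # reparameterized sample
        p = 1.0 / (1.0 + np.exp(-(x @ w)))        # surrogate likelihood
        g = (p - y) * x                           # d(-log lik)/dw
        # Gradient step on mu, with a pull toward the (previous) prior.
        self.mu -= self.lr * (g + self.kl_scale * (self.mu - self.prior_mu))
        # Chain rule through sigma = softplus(rho): d sigma/d rho = sigmoid(rho).
        sig = 1.0 / (1.0 + np.exp(-self.rho))
        self.rho -= self.lr * g * eps * sig

    def new_task(self):
        """At a task boundary, the posterior becomes the next prior."""
        self.prior_mu = self.mu.copy()
```

The variance parameters make each weight's epistemic uncertainty explicit, which is what the paper exploits for calibration and adaptation.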
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
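CSDP itself is specified in the cited paper; purely for orientation, here is a minimal sketch of a forward-forward-style local update, the backpropagation-free family CSDP belongs to (the goodness criterion and all names are illustrative assumptions):

```python
import numpy as np

def local_ff_step(W, x, is_positive, theta=2.0, lr=0.01):
    """One forward-forward-style local update for a single layer.

    Goodness = squared activity; pushed above theta for positive samples
    and below for negative ones. No backward pass through other layers.
    """
    h = np.maximum(W @ x, 0.0)                   # ReLU layer activity
    p = 1.0 / (1.0 + np.exp(-(h @ h - theta)))   # P(sample is "positive")
    t = 1.0 if is_positive else 0.0
    W -= lr * (p - t) * 2.0 * np.outer(h, x)     # purely local gradient
    return W
```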
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Enhanced quantum state preparation via stochastic prediction of neural network [0.8287206589886881]
In this paper, we explore an intriguing avenue for enhancing algorithm effectiveness by exploiting the knowledge blindness of neural networks.
Our approach centers around a machine learning algorithm utilized for preparing arbitrary quantum states in a semiconductor double quantum dot system.
By leveraging predictions generated by the neural network, we are able to guide the optimization process to escape local optima.
arXiv Detail & Related papers (2023-07-27T09:11:53Z)
- Statistical mechanics of continual learning: variational principle and mean-field potential [1.559929646151698]
We focus on continual learning in single-layered and multi-layered neural networks of binary weights.
A variational Bayesian learning setting is proposed, where the neural networks are trained in a field-space.
Weight uncertainty is naturally incorporated, and modulates synaptic resources among tasks.
Our proposed frameworks also connect to elastic weight consolidation, weight-uncertainty learning, and neuroscience-inspired metaplasticity.
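As a loose illustration of how weight uncertainty can modulate synaptic resources for binary weights (a sketch under an assumed mean-field Bernoulli form with a logistic surrogate, not the paper's variational principle):

```python
import numpy as np

def binary_bayes_step(m, x, y, lr=0.1):
    """Variational-style update for binary weights w in {-1, +1}.

    Each weight has a "field" m; the posterior mean weight is tanh(m).
    The gradient acts on m, so weights with a large field (low
    uncertainty) barely move, an EWC/metaplasticity-like effect.
    """
    w_mean = np.tanh(m)                        # expected binary weight
    p = 1.0 / (1.0 + np.exp(-(x @ w_mean)))    # surrogate likelihood
    g = (p - y) * x                            # gradient w.r.t. w_mean
    m -= lr * g * (1.0 - w_mean**2)            # chain rule: d tanh/dm
    return m
```

The (1 - tanh(m)^2) factor is exactly the weight's variance, so plasticity shrinks as certainty grows.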
arXiv Detail & Related papers (2022-12-06T09:32:45Z)
- Efficient Bayes Inference in Neural Networks through Adaptive Importance Sampling [19.518237361775533]
In Bayesian neural networks (BNNs), a complete posterior distribution over the unknown weight and bias parameters of the network is produced during the training stage.
This feature is useful in countless machine learning applications.
It is particularly appealing in areas where decision-making has a crucial impact, such as medical healthcare or autonomous driving.
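A minimal sketch of adaptive importance sampling on a toy one-dimensional posterior (the moment-matching adaptation shown is a generic choice, not necessarily the paper's scheme; the toy posterior is assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(w):  # toy unnormalized log-posterior over a 1-D "weight"
    return -0.5 * (w - 1.5) ** 2 / 0.3

# Draw from a Gaussian proposal, weight by posterior/proposal, then
# moment-match the proposal to the weighted sample and repeat.
mu, sigma = 0.0, 2.0
for it in range(20):
    w = rng.normal(mu, sigma, size=500)
    log_q = -0.5 * ((w - mu) / sigma) ** 2 - np.log(sigma)
    lw = log_post(w) - log_q
    lw -= lw.max()                        # stabilize before exponentiating
    iw = np.exp(lw); iw /= iw.sum()       # self-normalized weights
    mu = np.sum(iw * w)                   # adapt proposal mean
    sigma = np.sqrt(np.sum(iw * (w - mu) ** 2)) + 1e-6

print(f"posterior mean ~ {mu:.2f}, std ~ {sigma:.2f}")  # ~1.50, ~0.55
```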
arXiv Detail & Related papers (2022-10-03T14:59:23Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Analytic Mutual Information in Bayesian Neural Networks [0.8122270502556371]
Mutual information is a commonly used measure for quantifying uncertainty in a Bayesian neural network.
We derive the analytical formula of the mutual information between model parameters and the predictive output by leveraging the notion of the point process entropy.
As an application, we discuss the estimation of the Dirichlet parameters and show its practical application to uncertainty measures in active learning.
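The paper's formula is analytic; for orientation, the standard Monte Carlo decomposition of this mutual information (total entropy minus expected entropy) can be sketched as:

```python
import numpy as np

def mutual_information(probs):
    """MI between model parameters and the predictive output.

    probs: (num_posterior_samples, num_classes), one predictive
    distribution per sampled parameter set.
    MI = H(E[p]) - E[H(p)]: total minus expected (aleatoric) entropy.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return total - aleatoric

# Disagreeing posterior samples -> high MI (epistemic uncertainty):
print(mutual_information(np.array([[0.9, 0.1], [0.1, 0.9]])))  # ~0.37 nats
```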
arXiv Detail & Related papers (2022-01-24T17:30:54Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
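One simple way to quantify such hidden-layer diversity, sketched here as an assumed cosine-dissimilarity measure rather than the paper's exact metric:

```python
import numpy as np

def hidden_diversity(H):
    """Mean pairwise cosine dissimilarity of hidden-neuron activations.

    H: (num_neurons, num_samples) activation matrix. Returns a value in
    [0, 2]; higher means the neurons respond more differently across
    the data, i.e. greater diversity.
    """
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    C = Hn @ Hn.T                                  # cosine similarities
    n = len(C)
    off_diag = (C.sum() - np.trace(C)) / (n * (n - 1))
    return 1.0 - off_diag
```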
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- EqSpike: Spike-driven Equilibrium Propagation for Neuromorphic Implementations [9.952561670370804]
We develop a spiking neural network algorithm called EqSpike, compatible with neuromorphic systems.
We show that EqSpike implemented in silicon neuromorphic technology could reduce the energy consumption of both inference and training.
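For orientation, a rate-based equilibrium-propagation sketch (two relaxation phases and a contrastive local update; EqSpike's spike-driven implementation differs, and all names and dynamics here are illustrative simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):                        # hard-sigmoid activation used in EP
    return np.clip(s, 0.0, 1.0)

def settle(W, x, s, y=None, beta=0.0, steps=100, dt=0.1):
    """Relax the state s toward an energy minimum (inputs x clamped)."""
    for _ in range(steps):
        v = np.concatenate([x, rho(s)])
        ds = W @ v - s                 # leaky dynamics toward the drive
        if y is not None:
            ds += beta * (y - s)       # weak nudge toward the target
        s = s + dt * ds
    return s

def eqprop_update(W, x, y, s0, beta=0.5, lr=0.05):
    """Two-phase equilibrium-propagation weight update."""
    s_free = settle(W, x, s0.copy())                 # free phase
    s_nudge = settle(W, x, s_free.copy(), y, beta)   # nudged phase
    v_free = np.concatenate([x, rho(s_free)])
    v_nudge = np.concatenate([x, rho(s_nudge)])
    # Contrast local pre/post correlations between the two equilibria.
    W += (lr / beta) * (np.outer(rho(s_nudge), v_nudge)
                        - np.outer(rho(s_free), v_free))
    return W

W = rng.normal(0, 0.1, size=(3, 5))   # 2 inputs, 3 state units
W = eqprop_update(W, np.array([1.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                  np.zeros(3))
```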
arXiv Detail & Related papers (2020-10-15T16:25:29Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
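The paper derives an exact evolution via the Mori-Zwanzig formalism; the first-order mean-field closure such frameworks refine can be sketched as follows (a standard textbook update, not the paper's method):

```python
import numpy as np

def mean_field_diffusion(A, p0, beta=0.3, steps=10):
    """Mean-field evolution of node infection probabilities.

    A: adjacency matrix, p0: initial infection probabilities. Each step,
    node i stays uninfected only if every infected neighbor fails to
    transmit: p_i <- 1 - (1 - p_i) * prod_j (1 - beta * A_ij * p_j).
    """
    p = p0.astype(float).copy()
    for _ in range(steps):
        no_transmit = np.prod(1.0 - beta * A * p[None, :], axis=1)
        p = 1.0 - (1.0 - p) * no_transmit
    return p

# Line graph seeded at node 0: infection probability decays with distance.
A = np.eye(5, k=1) + np.eye(5, k=-1)
print(mean_field_diffusion(A, np.array([1.0, 0, 0, 0, 0])))
```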
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
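A minimal sketch of the rectified-linear PSP idea: each input spike contributes a potential that grows linearly after its arrival, so the first threshold crossing can be solved segment by segment (simplified; function and variable names are assumptions):

```python
import numpy as np

def rel_psp_spike_time(t_in, w, theta=1.0):
    """First spike time of a neuron with rectified-linear PSPs.

    Input spike j at time t_in[j] contributes w[j] * max(t - t_in[j], 0)
    to the membrane potential; the neuron fires when the potential
    reaches theta. Between input spikes the potential is linear in t.
    """
    order = np.argsort(t_in)
    slope, intercept = 0.0, 0.0
    for j in order:
        slope += w[j]
        intercept -= w[j] * t_in[j]
        if slope > 0:
            t_fire = (theta - intercept) / slope
            later = t_in[t_in > t_in[j]]
            # Valid only if the crossing falls inside this segment.
            if t_fire >= t_in[j] and (later.size == 0 or t_fire <= later.min()):
                return t_fire
    return np.inf  # never reaches threshold

print(rel_psp_spike_time(np.array([0.0, 1.0]), np.array([0.5, 1.0])))  # ~1.33
```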
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.