MT-SNN: Spiking Neural Network that Enables Single-Tasking of Multiple
Tasks
- URL: http://arxiv.org/abs/2208.01522v1
- Date: Tue, 2 Aug 2022 15:17:07 GMT
- Title: MT-SNN: Spiking Neural Network that Enables Single-Tasking of Multiple
Tasks
- Authors: Paolo G. Cachi, Sebastian Ventura, Krzysztof J. Cios
- Abstract summary: We implement a multi-task spiking neural network (MT-SNN) that can learn two or more classification tasks while performing one task at a time.
The network is implemented using Intel's Lava platform for the Loihi2 neuromorphic chip.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper we explore the capabilities of spiking neural networks
in solving multi-task classification problems using the approach of
single-tasking of multiple tasks. We designed and implemented a multi-task
spiking neural network (MT-SNN) that can learn two or more classification tasks
while performing one task at a time. The task to perform is selected by
modulating the firing threshold of the leaky integrate-and-fire (LIF) neurons
used in this work. The network is implemented using Intel's Lava platform for
the Loihi2 neuromorphic chip. Tests are performed on dynamic multi-task
classification using the NMNIST dataset. The results show that MT-SNN
effectively learns multiple tasks by modifying its dynamics, namely, the
spiking neurons' firing threshold.
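The threshold-modulation mechanism can be illustrated with a minimal NumPy sketch. This is not the authors' Lava/Loihi2 implementation; the `LIFNeuron` class, the decay value, and the per-task thresholds below are illustrative assumptions.

```python
import numpy as np

class LIFNeuron:
    """Minimal discrete-time leaky integrate-and-fire neuron (illustrative)."""

    def __init__(self, decay: float = 0.9, threshold: float = 1.0):
        self.decay = decay          # leak factor applied to the membrane potential
        self.threshold = threshold  # firing threshold, modulated to select the task
        self.v = 0.0                # membrane potential

    def step(self, input_current: float) -> int:
        # Leaky integration of the input current.
        self.v = self.decay * self.v + input_current
        if self.v >= self.threshold:
            self.v = 0.0            # reset after emitting a spike
            return 1
        return 0

# Hypothetical per-task thresholds: the same weights and inputs yield
# different spiking dynamics depending on which task is selected.
task_thresholds = {"task_A": 1.0, "task_B": 0.5}

neuron = LIFNeuron()
inputs = np.random.rand(100) * 0.3

for task, theta in task_thresholds.items():
    neuron.threshold = theta        # task selection = threshold modulation
    neuron.v = 0.0
    spikes = sum(neuron.step(float(x)) for x in inputs)
    print(f"{task}: threshold={theta}, spikes={spikes}")
```

Lowering the threshold makes the same input stream produce more spikes, which is the dynamical knob the abstract describes for switching between tasks.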
Related papers
- Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it to represent the continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
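For context, below is a minimal sketch of the firing-rate readout that this entry contrasts against: spike counts are averaged over a time window and mapped to a continuous action through a fully-connected layer. The spike trains, layer sizes, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SNN output: spike trains of 64 output neurons over 100 timesteps.
spike_trains = rng.integers(0, 2, size=(100, 64))

# Firing-rate decoding: average spikes per neuron over the simulation window.
firing_rates = spike_trains.mean(axis=0)          # shape (64,)

# Fully-connected readout mapping firing rates to a 4-dimensional continuous
# action (the deterministic policy); weights here are random placeholders.
W = rng.normal(scale=0.1, size=(4, 64))
b = np.zeros(4)
action = np.tanh(W @ firing_rates + b)            # bounded continuous action
print(action)
```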
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z) - Forget-free Continual Learning with Soft-Winning SubNetworks [67.0373924836107]
We investigate two proposed continual learning methods which sequentially learn and select adaptive binary (WSN) and non-binary soft (SoftNet) subnetworks for each task.
WSN and SoftNet jointly learn the regularized model weights and task-adaptive non-binary masks of subnetworks associated with each task.
In Task Incremental Learning (TIL), binary masks spawned per winning ticket are encoded into one N-bit binary digit mask, then compressed using Huffman coding for a sub-linear increase in network capacity to the number of tasks.
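A hedged sketch of the per-task binary-mask idea follows; the mask density, weight shapes, and bit-packing step are assumptions rather than the WSN implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared weight matrix reused across tasks (values are illustrative).
weights = rng.normal(size=(8, 8))

# Hypothetical binary masks selecting a task-specific subnetwork
# ("winning ticket") inside the shared weights, one mask per task.
masks = {t: (rng.random(weights.shape) < 0.3).astype(np.uint8) for t in range(3)}

def forward(x: np.ndarray, task: int) -> np.ndarray:
    # Only the weights selected by the task's binary mask are active.
    return (weights * masks[task]) @ x

x = rng.normal(size=8)
for t in range(3):
    print(t, forward(x, t)[:3])

# Per-task binary masks can be bit-packed (and compressed further, e.g. with
# Huffman coding) so stored capacity grows sub-linearly with the number of tasks.
packed = {t: np.packbits(m) for t, m in masks.items()}
```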
arXiv Detail & Related papers (2023-03-27T07:53:23Z) - L-HYDRA: Multi-Head Physics-Informed Neural Networks [0.0]
We construct multi-head physics-informed neural networks (MH-PINNs) as a potent tool for multi-task learning (MTL), generative modeling, and few-shot learning.
MH-PINNs connect multiple functions/tasks via a shared body that serves as the basis functions, as well as a shared distribution for the heads.
We demonstrate the effectiveness of MH-PINNs in five benchmarks, also investigating the possibility of synergistic learning in regression analysis.
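The shared-body/multi-head structure can be sketched in plain NumPy as below; the dimensions are assumed, and the shared distribution over heads that MH-PINNs also learn is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def shared_body(x: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    # Shared body producing basis functions reused by every task head.
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

# Illustrative dimensions: 1-D input, 32 hidden units, 16 basis functions.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 16))

# One lightweight linear head per task; in MH-PINNs the heads are additionally
# tied together through a shared distribution (not modeled here).
heads = {f"task_{k}": rng.normal(size=(16, 1)) for k in range(3)}

x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
basis = shared_body(x, W1, W2)
outputs = {name: basis @ head for name, head in heads.items()}
print({name: out.shape for name, out in outputs.items()})
```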
arXiv Detail & Related papers (2023-01-05T16:54:01Z) - Optical multi-task learning using multi-wavelength diffractive deep
neural networks [8.543496127018567]
Photonic neural networks are brain-inspired information processing technology using photons instead of electrons to perform AI tasks.
Existing architectures are designed for a single task but fail to multiplex different tasks in parallel within a single monolithic system.
This paper proposes a novel optical multi-task learning system by designing multi-wavelength diffractive deep neural networks (D2NNs) with a joint optimization method.
arXiv Detail & Related papers (2022-11-30T14:27:14Z) - Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as that of a dense model.
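A minimal sketch of task-aware gating over experts, assuming toy dimensions and a top-1 routing rule; the gating function and expert shapes are illustrative, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(3)
num_experts, d_model, top_k = 4, 16, 1

# Illustrative expert weights and per-task gating parameters.
experts = [rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(num_experts)]
task_gates = {t: rng.normal(size=(d_model, num_experts)) for t in range(2)}

def moe_forward(x: np.ndarray, task: int) -> np.ndarray:
    # Task-aware gating: the routing decision depends on the task id and the input.
    logits = x @ task_gates[task]
    probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
    chosen = np.argsort(probs)[-top_k:]     # sparse activation: only top-k experts run
    out = np.zeros_like(x)
    for e in chosen:
        out += probs[e] * (x @ experts[e])
    return out

x = rng.normal(size=d_model)
print(moe_forward(x, task=0)[:4])
print(moe_forward(x, task=1)[:4])
```

Because only the selected experts are evaluated per example, the parameter count grows with the number of experts while per-example compute stays close to that of a dense model the size of one expert.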
arXiv Detail & Related papers (2022-04-16T00:56:12Z) - Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
arXiv Detail & Related papers (2021-11-10T17:27:46Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade the single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - Continuous Learning in a Single-Incremental-Task Scenario with Spike
Features [0.0]
Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to perform sequential learning.
Here, we use bio-inspired Spike Timing Dependent Plasticity (STDP) in the feature extraction layers of the network, with instantaneous neurons, to extract meaningful features.
In the classification sections of the network we use a modified synaptic intelligence, which we refer to as the cost-per-synapse metric, as a regularizer to immunize the network against catastrophic forgetting.
arXiv Detail & Related papers (2020-05-03T16:18:20Z)
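A minimal sketch of a synaptic-intelligence-style quadratic penalty is given below; the paper's modified cost-per-synapse metric differs in how per-synapse importance is computed, so the exact form here is an assumption.

```python
import numpy as np

def si_penalty(params: np.ndarray, old_params: np.ndarray,
               importance: np.ndarray, strength: float = 0.1) -> float:
    # Penalize moving important synapses away from the values they held
    # after the previous task, mitigating catastrophic forgetting.
    return strength * float(np.sum(importance * (params - old_params) ** 2))

rng = np.random.default_rng(4)
theta, theta_old = rng.normal(size=100), rng.normal(size=100)
omega = rng.random(100)                     # per-synapse importance estimates
print(si_penalty(theta, theta_old, omega))
```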