Deep Reinforcement Learning for Neural Control
- URL: http://arxiv.org/abs/2006.07352v1
- Date: Fri, 12 Jun 2020 17:41:12 GMT
- Title: Deep Reinforcement Learning for Neural Control
- Authors: Jimin Kim, Eli Shlizerman
- Abstract summary: We present a novel methodology for control of neural circuits based on deep reinforcement learning.
We map neural circuits and their connectome into a grid-world-like setting and infer the actions needed to achieve the aimed behavior.
Our framework successfully infers neuropeptidic currents and synaptic architectures for control of chemotaxis.
- Score: 4.822598110892847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel methodology for control of neural circuits based on deep
reinforcement learning. Our approach achieves the aimed behavior by generating
external continuous stimulation of existing neural circuits (neuromodulation
control) or by modulating the neural circuit architecture (connectome control).
Both forms of control are challenging due to the nonlinear and recurrent
complexity of neural activity. To infer candidate control policies, our
approach maps neural circuits and their connectome into a grid-world-like
setting and infers the actions needed to achieve the aimed behavior. The
actions are inferred by adapting deep Q-learning methods, known for their
robust performance in navigating grid-worlds. We apply our approach to the
model of C. elegans, which simulates the full somatic nervous system along
with muscles and body.
Our framework successfully infers neuropeptidic currents and synaptic
architectures for control of chemotaxis. Our findings are consistent with in
vivo measurements and provide additional insights into neural control of
chemotaxis. We further demonstrate the generality and scalability of our
methods by inferring chemotactic neural circuits from scratch.
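To make the grid-world formulation concrete, here is a minimal sketch (not the authors' code). It casts chemotaxis as a one-dimensional toy: states are discretized positions along a chemoattractant gradient, actions are hypothetical discretized stimulation levels, and the reward is the change in concentration. The paper adapts deep Q-learning over the simulated nervous system; plain tabular Q-learning stands in here so the example stays self-contained.

```python
# Minimal, hypothetical sketch of the grid-world formulation (not the authors'
# code). A "worm" moves along a 1-D chemoattractant gradient; actions are
# discretized stimulation levels; reward is the gain in concentration.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 21                  # discretized positions along the gradient
ACTIONS = (-1, 0, +1)          # hypothetical stimulation levels: down / none / up
PEAK = N_STATES - 1            # position of maximal concentration

def concentration(s):
    """Toy chemoattractant profile, increasing linearly toward the peak."""
    return s / PEAK

def step(s, a):
    """Apply one stimulation action; return (next_state, reward, done)."""
    s_next = int(np.clip(s + ACTIONS[a], 0, PEAK))
    reward = concentration(s_next) - concentration(s)   # chemotaxis reward
    return s_next, reward, s_next == PEAK

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s = int(rng.integers(N_STATES))
    for _ in range(100):
        # Epsilon-greedy choice over the discretized stimulation levels.
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Standard Q-learning update toward the temporal-difference target.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next
        if done:
            break

# The greedy policy should climb the gradient: action +1 (index 2) below the peak.
print(np.argmax(Q, axis=1))
```

Replacing the Q-table with a neural network over the circuit state gives the deep variant; connectome control corresponds to redefining the actions as edits to synaptic weights rather than stimulation currents.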
Related papers
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with and scalable within deep learning frameworks.
We show that end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities (a minimal illustration of such a rule appears after this list).
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Neural Co-Processors for Restoring Brain Function: Results from a Cortical Model of Grasping [0.0]
We propose "neural co-processors" which use artificial neural networks and deep learning to learn optimal closed-loop stimulation policies.
The co-processor adapts the stimulation policy as the biological circuit itself adapts to the stimulation, achieving a form of brain-device co-adaptation.
arXiv Detail & Related papers (2022-10-19T04:13:33Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Learning to Modulate Random Weights: Neuromodulation-inspired Neural Networks For Efficient Continual Learning [1.9580473532948401]
We introduce a novel neural network architecture inspired by neuromodulation in biological nervous systems.
We show that this approach has strong learning performance per task despite the very small number of learnable parameters.
arXiv Detail & Related papers (2022-04-08T21:12:13Z) - Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models called CycleGAN to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy (a toy demonstration appears after this list).
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z) - Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
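As referenced in the Hebbian orthogonal-projection entry above, the claim that Hebbian plus anti-Hebbian learning on lateral connections extracts the principal subspace can be illustrated with the classical Oja subspace rule. The sketch below is a stand-in on synthetic data, not that paper's spiking implementation; the dimensions and variable names are assumptions made for the example.

```python
# Illustration of Hebbian subspace extraction via Oja's subspace rule
# (a classical stand-in, not the spiking method of the paper above).
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "neural activity": 20-D signals lying mostly in a 3-D subspace.
n, k, T = 20, 3, 5000
basis, _ = np.linalg.qr(rng.normal(size=(n, k)))     # ground-truth subspace
X = basis @ rng.normal(size=(k, T)) + 0.05 * rng.normal(size=(n, T))

W = 0.1 * rng.normal(size=(k, n))                    # weights to k output units
eta = 0.01

for _ in range(3):                                   # a few passes over the data
    for t in range(T):
        x = X[:, t]
        y = W @ x                                    # output activity
        # Hebbian term (y x^T) plus anti-Hebbian decorrelation (-y y^T W),
        # the latter playing the role of recurrent lateral connections.
        W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Rows of W should span the same subspace as `basis`: singular values of
# W @ basis close to 1 indicate alignment with the principal subspace.
print(np.linalg.svd(W @ basis, compute_uv=False))
```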
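Similarly, the apical dendrite activation entry above states that a single neuron with a non-monotonic activation can solve XOR. The toy below demonstrates representability only: a Gaussian bump stands in for the paper's ADA function, and the weights are hand-set rather than learned; both are assumptions for illustration.

```python
# Toy XOR demonstration: one linear unit plus a *non-monotonic* activation.
# The Gaussian bump here is a stand-in, not the paper's ADA function.
import numpy as np

def bump(z):
    """Hypothetical non-monotonic activation peaking at z = 0."""
    return np.exp(-z ** 2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # XOR inputs
targets = np.array([0, 1, 1, 0])

# Hand-set weights: z = x1 + x2 - 1 is 0 exactly on the XOR-true inputs
# and +/-1 on the XOR-false ones, so the bump separates the two classes.
w, b = np.array([1.0, 1.0]), -1.0
pred = (bump(X @ w + b) > 0.5).astype(int)
print(pred, bool((pred == targets).all()))       # [0 1 1 0] True
```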
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.