Reinforcement Learning Framework for Deep Brain Stimulation Study
- URL: http://arxiv.org/abs/2002.10948v1
- Date: Sat, 22 Feb 2020 16:48:43 GMT
- Title: Reinforcement Learning Framework for Deep Brain Stimulation Study
- Authors: Dmitrii Krylov, Remi Tachet, Romain Laroche, Michael Rosenblum, Dmitry V. Dylov
- Abstract summary: Malfunctioning neurons in the brain sometimes operate synchronously, reportedly causing many neurological diseases.
We present the first Reinforcement Learning gym framework that emulates this collective behavior of neurons.
We successfully suppress synchrony via RL for three pathological signaling regimes, characterize the framework's stability to noise, and further remove the unwanted oscillations by engaging multiple PPO agents.
- Score: 10.505656411009388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malfunctioning neurons in the brain sometimes operate synchronously,
reportedly causing many neurological diseases, e.g., Parkinson's disease. Suppression
and control of this collective synchronous activity are therefore of great
importance for neuroscience, and can only rely on limited engineering trials
due to the need to experiment with live human brains. We present the first
Reinforcement Learning gym framework that emulates this collective behavior of
neurons and allows us to find suppression parameters for the environment of
synthetic degenerate models of neurons. We successfully suppress synchrony via
RL for three pathological signaling regimes, characterize the framework's
stability to noise, and further remove the unwanted oscillations by engaging
multiple PPO agents.
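The suppression setup described in the abstract can be sketched as a minimal gym-style environment. The code below is an illustrative approximation only: it replaces the paper's synthetic degenerate neuron models with simple Kuramoto phase oscillators, uses the mean-field amplitude as the synchrony measure, and all class names and parameters are hypothetical.

```python
import numpy as np

class SynchronySuppressionEnv:
    """Toy gym-style environment: N coupled Kuramoto phase oscillators.

    The agent's scalar action is a common stimulation current; the
    reward penalizes the mean-field amplitude R (degree of synchrony).
    Illustrative sketch, not the paper's actual environment.
    """

    def __init__(self, n=100, coupling=1.5, dt=0.01, seed=0):
        self.n, self.k, self.dt = n, coupling, dt
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.theta = self.rng.uniform(0, 2 * np.pi, self.n)
        self.omega = self.rng.normal(1.0, 0.1, self.n)  # natural frequencies
        return self._observe()

    def _order_parameter(self):
        z = np.exp(1j * self.theta).mean()  # complex mean field
        return np.abs(z), np.angle(z)

    def _observe(self):
        r, psi = self._order_parameter()
        return np.array([r * np.cos(psi), r * np.sin(psi)])

    def step(self, action):
        r, psi = self._order_parameter()
        # Kuramoto mean-field coupling plus the agent's stimulation term.
        dtheta = (self.omega + self.k * r * np.sin(psi - self.theta)
                  + float(action) * np.sin(self.theta))
        self.theta = (self.theta + self.dt * dtheta) % (2 * np.pi)
        r_new, _ = self._order_parameter()
        reward = -r_new  # low synchrony -> high reward
        return self._observe(), reward, False, {}

env = SynchronySuppressionEnv()
obs = env.reset()
obs, reward, done, info = env.step(0.0)
```

An off-the-shelf PPO implementation could then be trained against `step`/`reset` directly; the reward sign choice simply makes desynchronized states the optimum.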
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
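The classical dynamics behind such oscillatory units can be sketched with a standard Kuramoto update. This is a toy illustration with hypothetical parameters; AKOrN itself generalizes these dynamics to learned, higher-dimensional rotational units.

```python
import numpy as np

def kuramoto_step(theta, omega, adjacency, coupling, dt=0.1):
    """One Euler step of Kuramoto dynamics on a coupling graph.

    theta: current phases; omega: natural frequencies;
    adjacency: (n, n) weight matrix. Illustrative only.
    """
    # Pairwise phase differences pull each oscillator toward its neighbors.
    diffs = np.sin(theta[None, :] - theta[:, None])  # diffs[i, j] = sin(theta_j - theta_i)
    dtheta = omega + coupling * (adjacency * diffs).sum(axis=1)
    return theta + dt * dtheta

rng = np.random.default_rng(1)
n = 8
theta = rng.uniform(0, 2 * np.pi, n)
omega = np.zeros(n)          # identical natural frequencies
adj = np.ones((n, n)) / n    # all-to-all, uniform coupling
for _ in range(500):
    theta = kuramoto_step(theta, omega, adj, coupling=2.0)
# With identical frequencies and all-to-all coupling, phases synchronize,
# so the order parameter r approaches 1.
r = np.abs(np.exp(1j * theta).mean())
```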
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with and scalable to deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
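The inhibitory role of spike frequency adaptation mentioned above can be illustrated with a toy adaptive leaky integrate-and-fire neuron. Parameters and names are hypothetical; the paper's architecture trains such dynamics end-to-end with surrogate gradients rather than hand-setting them.

```python
import numpy as np

def simulate_adaptive_lif(i_input, dt=1.0, tau_m=20.0, tau_a=100.0,
                          v_th=1.0, beta=0.2):
    """Leaky integrate-and-fire neuron with spike-frequency adaptation.

    Each spike raises an adaptation variable `a` that is added to the
    firing threshold, so sustained input produces progressively
    sparser spiking: a simple inhibitory feedback mechanism.
    """
    v, a = 0.0, 0.0
    spikes = []
    for t, i_t in enumerate(i_input):
        v += dt / tau_m * (-v + i_t)  # leaky membrane integration
        a += dt / tau_a * (-a)        # adaptation decays slowly
        if v > v_th + beta * a:       # adaptive threshold
            spikes.append(t)
            v = 0.0                   # reset membrane potential
            a += 1.0                  # strengthen adaptation
    return spikes

# Constant drive: inter-spike intervals lengthen as adaptation builds up.
spikes = simulate_adaptive_lif(np.full(400, 1.5))
intervals = np.diff(spikes)
```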
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- The Neuron as a Direct Data-Driven Controller [43.8450722109081]
This study extends the current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers.
We model neurons as biologically feasible controllers which implicitly identify loop dynamics, infer latent states and optimize control.
Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a novel and biologically-informed fundamental unit for constructing neural networks.
arXiv Detail & Related papers (2024-01-03T01:24:10Z)
- Personalized identification, prediction, and stimulation of neural oscillations via data-driven models of epileptic network dynamics [0.0]
We develop a framework to extract predictive models of epileptic network dynamics directly from EEG data.
We show that it is possible to build a direct correspondence between the models of brain-network dynamics under periodic driving.
This suggests that periodic brain stimulation can drive pathological states of epileptic network dynamics towards a healthy functional brain state.
arXiv Detail & Related papers (2023-10-20T13:21:31Z)
- Novel Reinforcement Learning Algorithm for Suppressing Synchronization in Closed Loop Deep Brain Stimulators [0.6294759639481188]
Parkinson's disease is marked by altered and increased firing characteristics of pathological oscillations in the brain.
Deep brain stimulators (DBS) are used to examine and regulate the synchronization and pathological oscillations in motor circuits.
This research proposes a novel reinforcement learning framework for suppressing the synchronization in neuronal activity during episodes of neurological disorders with less power consumption.
arXiv Detail & Related papers (2022-12-25T11:29:55Z)
- Modeling Associative Plasticity between Synapses to Enhance Learning of Spiking Neural Networks [4.736525128377909]
Spiking Neural Networks (SNNs) are the third generation of artificial neural networks that enable energy-efficient implementation on neuromorphic hardware.
We propose a robust and effective learning mechanism by modeling the associative plasticity between synapses.
Our approaches achieve superior performance on static and state-of-the-art neuromorphic datasets.
arXiv Detail & Related papers (2022-07-24T06:12:23Z)
- When, where, and how to add new neurons to ANNs [3.0969191504482243]
Neurogenesis in ANNs is an understudied and difficult problem, even compared to other forms of structural learning like pruning.
We introduce a framework for studying the various facets of neurogenesis: when, where, and how to add neurons during the learning process.
arXiv Detail & Related papers (2022-02-17T09:32:08Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- The distribution of inhibitory neurons in the C. elegans connectome facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.