Learning Control Policies of Hodgkin-Huxley Neuronal Dynamics
- URL: http://arxiv.org/abs/2311.07563v1
- Date: Mon, 13 Nov 2023 18:53:50 GMT
- Title: Learning Control Policies of Hodgkin-Huxley Neuronal Dynamics
- Authors: Malvern Madondo, Deepanshu Verma, Lars Ruthotto, Nicholas Au Yong
- Abstract summary: We approximate the value function offline using a neural network to enable generating controls (stimuli) in real time via the feedback form.
Our numerical experiments illustrate the accuracy of our approach for out-of-distribution samples and the robustness to moderate shocks and disturbances in the system.
- Score: 1.629803445577911
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a neural network approach for closed-loop deep brain stimulation
(DBS). We cast the problem of finding an optimal neurostimulation strategy as a
control problem. In this setting, control policies aim to optimize therapeutic
outcomes by tailoring the parameters of a DBS system, typically via electrical
stimulation, in real time based on the patient's ongoing neuronal activity. We
approximate the value function offline using a neural network to enable
generating controls (stimuli) in real time via the feedback form. The neuronal
activity is characterized by a nonlinear, stiff system of differential
equations as dictated by the Hodgkin-Huxley model. Our training process
leverages the relationship between Pontryagin's maximum principle and
Hamilton-Jacobi-Bellman equations to update the value function estimates
simultaneously. Our numerical experiments illustrate the accuracy of our
approach for out-of-distribution samples and the robustness to moderate shocks
and disturbances in the system.
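The abstract couples a stiff Hodgkin-Huxley plant with an offline-trained value function whose gradient yields the stimulus in real time via the feedback form u*(x) = argmin_u { L(x,u) + ∇V(x)·f(x,u) }. The sketch below simulates only the plant side under standard squid-axon parameters; the value network, the PMP/HJB training loop, and the learned feedback policy are not reproduced here, and `policy` is a hypothetical placeholder that any such controller could fill.

```python
import math

# Standard Hodgkin-Huxley parameters (squid giant axon, Hodgkin & Huxley 1952).
C_M = 1.0                               # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials, mV

def gating_rates(v):
    """Voltage-dependent opening/closing rates for m, h, n (1/ms)."""
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return (a_m, b_m), (a_h, b_h), (a_n, b_n)

def hh_rhs(state, i_ext):
    """Right-hand side of the HH ODE system; i_ext is the stimulus (uA/cm^2)."""
    v, m, h, n = state
    (a_m, b_m), (a_h, b_h), (a_n, b_n) = gating_rates(v)
    i_na = G_NA * m**3 * h * (v - E_NA)
    i_k = G_K * n**4 * (v - E_K)
    i_l = G_L * (v - E_L)
    return ((i_ext - i_na - i_k - i_l) / C_M,
            a_m * (1.0 - m) - b_m * m,
            a_h * (1.0 - h) - b_h * h,
            a_n * (1.0 - n) - b_n * n)

def simulate(policy, t_end=50.0, dt=0.01):
    """Integrate with forward Euler at a small step; the system is stiff,
    so an implicit or adaptive solver is the more robust choice in practice.
    `policy` maps the current state to a stimulus current (the closed-loop
    feedback form would evaluate a learned value-function gradient here)."""
    state = (-65.0, 0.053, 0.596, 0.317)   # approximate resting state
    spikes, above, t = 0, False, 0.0
    while t < t_end:
        d = hh_rhs(state, policy(state))
        state = tuple(s + dt * ds for s, ds in zip(state, d))
        if state[0] > 0.0 and not above:   # count upward crossings of 0 mV
            spikes += 1
        above = state[0] > 0.0
        t += dt
    return spikes

# Open-loop baseline: a constant 10 uA/cm^2 drive produces repetitive firing,
# while zero input leaves the membrane at rest.
n_spikes = simulate(lambda s: 10.0)
```

A learned controller would replace the constant-current lambda with a state-feedback rule derived from ∇V, which is what makes the offline value-function approximation usable in real time.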
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - The Neuron as a Direct Data-Driven Controller [43.8450722109081]
This study extends the current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers.
We model neurons as biologically feasible controllers that implicitly identify loop dynamics, infer latent states, and optimize control.
Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a novel and biologically-informed fundamental unit for constructing neural networks.
arXiv Detail & Related papers (2024-01-03T01:24:10Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match such a neuron's input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Near-optimal control of dynamical systems with neural ordinary differential equations [0.0]
Recent advances in deep learning and neural network-based optimization have contributed to the development of methods that can help solve control problems involving high-dimensional dynamical systems.
We first analyze how truncated and non-truncated backpropagation through time affect runtime performance and the ability of neural networks to learn optimal control functions.
arXiv Detail & Related papers (2022-06-22T14:11:11Z) - Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
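The MPATH summary describes neurons that hold a dynamic equilibrium by regulating their own activity. Below is a minimal sketch of that general idea, a leaky integrator whose firing threshold rises with each spike and decays back toward a baseline; the class, parameter values, and update rules are illustrative assumptions, not the paper's actual MPATH equations.

```python
class HomeostaticNeuron:
    """Leaky integrate-and-fire unit with an activity-dependent threshold."""

    def __init__(self, tau_v=10.0, tau_theta=100.0, theta0=1.0, dtheta=0.2):
        self.v = 0.0          # membrane potential (leaky integrator)
        self.theta = theta0   # adaptive firing threshold
        self.theta0 = theta0  # threshold baseline
        self.tau_v, self.tau_theta, self.dtheta = tau_v, tau_theta, dtheta

    def step(self, i_in, dt=1.0):
        # Leaky integration of the input current.
        self.v += dt * (-self.v / self.tau_v + i_in)
        # Threshold relaxes toward its baseline between spikes...
        self.theta += dt * (self.theta0 - self.theta) / self.tau_theta
        spike = self.v >= self.theta
        if spike:
            self.v = 0.0               # reset after a spike
            self.theta += self.dtheta  # ...and rises after each spike
        return spike
```

Under sustained strong drive, the threshold climbs until the firing rate settles at a lower level, a simple form of the self-regulating equilibrium the entry describes.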
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - Constrained plasticity reserve as a natural way to control frequency and weights in spiking neural networks [0.0]
We show how cellular dynamics help neurons filter out intense signals and maintain a stable firing rate.
Such an approach might be used in the machine learning domain to improve the robustness of AI systems.
arXiv Detail & Related papers (2021-03-15T05:22:14Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z) - Estimation of the Mean Function of Functional Data via Deep Neural Networks [6.230751621285321]
We propose a deep neural network method to perform nonparametric regression for functional data.
The proposed method is applied to analyze positron emission tomography images of patients with Alzheimer disease.
arXiv Detail & Related papers (2020-12-08T17:18:16Z) - Deep Reinforcement Learning for Neural Control [4.822598110892847]
We present a novel methodology for control of neural circuits based on deep reinforcement learning.
We map neural circuits and their connectome into a grid-world-like setting and infer the actions needed to achieve the target behavior.
Our framework successfully infers neuropeptidic currents and synaptic architectures for control of chemotaxis.
arXiv Detail & Related papers (2020-06-12T17:41:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.