Offline Learning of Closed-Loop Deep Brain Stimulation Controllers for
Parkinson Disease Treatment
- URL: http://arxiv.org/abs/2302.02477v2
- Date: Thu, 9 Feb 2023 01:36:10 GMT
- Title: Offline Learning of Closed-Loop Deep Brain Stimulation Controllers for
Parkinson Disease Treatment
- Authors: Qitong Gao, Stephen L. Schmidt, Afsana Chowdhury, Guangyu Feng,
Jennifer J. Peters, Katherine Genty, Warren M. Grill, Dennis A. Turner,
Miroslav Pajic
- Abstract summary: Deep brain stimulation (DBS) has shown great promise toward treating motor symptoms caused by Parkinson's disease (PD).
DBS devices approved by the U.S. Food and Drug Administration (FDA) can only deliver continuous DBS (cDBS) stimuli at a fixed amplitude.
This energy-inefficient operation reduces the device's battery lifetime, cannot adapt treatment dynamically to activity, and may cause significant side-effects.
- Score: 6.576864734526406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep brain stimulation (DBS) has shown great promise toward treating motor
symptoms caused by Parkinson's disease (PD), by delivering electrical pulses to
the Basal Ganglia (BG) region of the brain. However, DBS devices approved by
the U.S. Food and Drug Administration (FDA) can only deliver continuous DBS
(cDBS) stimuli at a fixed amplitude; this energy-inefficient operation reduces
battery lifetime of the device, cannot adapt treatment dynamically for
activity, and may cause significant side-effects (e.g., gait impairment). In
this work, we introduce an offline reinforcement learning (RL) framework,
allowing the use of past clinical data to train an RL policy to adjust the
stimulation amplitude in real time, with the goal of reducing energy use while
maintaining the same level of treatment (i.e., control) efficacy as cDBS.
Moreover, clinical protocols require the safety and performance of such RL
controllers to be demonstrated ahead of deployments in patients. Thus, we also
introduce an offline policy evaluation (OPE) method to estimate the performance
of RL policies using historical data, before deploying them on patients. We
evaluated our framework on four PD patients equipped with the RC+S DBS system,
employing the RL controllers during monthly clinical visits, with the overall
control efficacy evaluated by severity of symptoms (i.e., bradykinesia and
tremor), changes in PD biomarkers (i.e., local field potentials), and patient
ratings. The results from clinical experiments show that our RL-based
controller maintains the same level of control efficacy as cDBS, but with
significantly reduced stimulation energy. Further, the OPE method is shown
effective in accurately estimating and ranking the expected returns of RL
controllers.
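The abstract does not spell out the learning or evaluation algorithms, so the sketch below is only a minimal, self-contained illustration of the two ingredients it names: fitting a discrete-amplitude controller from logged transitions (here via fitted Q-iteration) and estimating its expected return with fitted Q-evaluation, a standard offline policy evaluation technique, before any deployment on a patient. The synthetic dataset, the LFP-style state features, the energy-penalized reward, and the linear Q-function are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Hypothetical logged DBS dataset (names and shapes are illustrative, not from
# the paper): each transition holds the neural-state features at time t (e.g.,
# beta-band LFP power), the index of the discrete stimulation amplitude that
# was delivered, a reward trading off symptom suppression against stimulation
# energy, and the next neural state.
rng = np.random.default_rng(0)
n, d, n_amps, gamma = 5000, 4, 5, 0.95
s = rng.normal(size=(n, d))             # state features at time t
s2 = rng.normal(size=(n, d))            # state features at time t+1
a = rng.integers(0, n_amps, size=n)     # logged amplitude index
amplitudes = np.linspace(0.0, 1.0, n_amps)            # normalized stimulation levels
r = -np.abs(s[:, 0]) - 0.1 * amplitudes[a]            # assumed reward: symptom proxy + energy cost


def features(states, actions):
    """Joint (state, action) features: one state block per discrete amplitude."""
    phi = np.zeros((len(states), d * n_amps))
    for i, (x, j) in enumerate(zip(states, actions)):
        phi[i, j * d:(j + 1) * d] = x
    return phi


def greedy_actions(states, weights):
    """Amplitude indices maximizing the linear Q-function at each state."""
    q = np.stack([features(states, np.full(len(states), j)) @ weights
                  for j in range(n_amps)], axis=1)
    return q.argmax(axis=1)


# --- Offline policy learning: fitted Q-iteration with a linear Q-function. ---
w = np.zeros(d * n_amps)
for _ in range(50):
    q_next = np.stack([features(s2, np.full(n, j)) @ w for j in range(n_amps)], axis=1)
    target = r + gamma * q_next.max(axis=1)
    w, *_ = np.linalg.lstsq(features(s, a), target, rcond=None)

# --- Offline policy evaluation: fitted Q-evaluation (FQE) of the learned policy. ---
# FQE regresses the value of the *candidate* policy's actions, so its expected
# return can be estimated from historical data alone, before patient deployment.
pi_next = greedy_actions(s2, w)
w_eval = np.zeros(d * n_amps)
for _ in range(50):
    target = r + gamma * (features(s2, pi_next) @ w_eval)
    w_eval, *_ = np.linalg.lstsq(features(s, a), target, rcond=None)

start = s[:100]                          # treat the first logged states as initial states
v_hat = (features(start, greedy_actions(start, w)) @ w_eval).mean()
print("suggested amplitude for one state:", amplitudes[greedy_actions(s[:1], w)[0]])
print("OPE (FQE) estimate of expected return:", v_hat)
```

In practice one would replace the linear Q-function with a richer function approximator and derive the reward from the clinically measured symptom and biomarker signals described above; the sketch only conveys how both training and pre-deployment evaluation can be carried out purely from historical data.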
Related papers
- Offline Behavior Distillation [57.6900189406964]
Massive reinforcement learning (RL) data are typically collected to train policies offline without the need for interactions.
We formulate offline behavior distillation (OBD), which synthesizes limited expert behavioral data from sub-optimal RL data.
We propose two naive OBD objectives, DBC and PBC, which measure distillation performance via the decision difference between policies trained on distilled data and either offline data or a near-expert policy.
arXiv Detail & Related papers (2024-10-30T06:28:09Z) - Preliminary Results of Neuromorphic Controller Design and a Parkinson's Disease Dataset Building for Closed-Loop Deep Brain Stimulation [1.3044677039636754]
Closed-loop Deep Brain Stimulation (CL-DBS) aims to alleviate motor symptoms in Parkinson's Disease patients.
Current CL-DBS systems utilize energy-inefficient approaches, including reinforcement learning, fuzzy inference, and field-programmable gate arrays (FPGAs).
This research proposes a novel neuromorphic approach that builds upon Leaky Integrate and Fire neuron (LIF) controllers to adjust the magnitude of DBS electric signals according to the various severities of PD patients.
arXiv Detail & Related papers (2024-07-25T04:10:15Z) - DTR-Bench: An in silico Environment and Benchmark Platform for Reinforcement Learning Based Dynamic Treatment Regime [18.443316087890324]
Reinforcement learning (RL) has garnered increasing recognition for its potential to optimise dynamic treatment regimes (DTRs) in personalised medicine.
We introduce DTR-Bench, a benchmarking platform for simulating diverse healthcare scenarios.
We evaluate various state-of-the-art RL algorithms across these settings, particularly highlighting their performance amidst real-world challenges.
arXiv Detail & Related papers (2024-05-28T21:40:00Z) - Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective [65.10019978876863]
Diffusion-Based Purification (DBP) has emerged as an effective defense mechanism against adversarial attacks.
In this paper, we argue that the inherent stochasticity in the DBP process is the primary driver of its robustness.
arXiv Detail & Related papers (2024-04-22T16:10:38Z) - An Improved Strategy for Blood Glucose Control Using Multi-Step Deep Reinforcement Learning [3.5757761767474876]
Blood Glucose (BG) control involves keeping an individual's BG within a healthy range through extracorporeal insulin injections.
Recent research has been devoted to exploring individualized and automated BG control approaches.
Deep Reinforcement Learning (DRL) shows potential as an emerging approach.
arXiv Detail & Related papers (2024-03-12T11:53:00Z) - ε-Neural Thompson Sampling of Deep Brain Stimulation for
Parkinson Disease Treatment [15.303196613362099]
We propose a contextual multi-armed bandits (CMAB) solution for a Deep Brain Stimulation (DBS) device.
We define the context as the signals capturing irregular neuronal firing activities in the basal ganglia (BG) regions.
An epsilon-exploring strategy is introduced on top of the classic Thompson sampling method, leading to an algorithm called epsilon-NeuralTS.
arXiv Detail & Related papers (2024-03-11T15:33:40Z) - Multimodal Indoor Localisation in Parkinson's Disease for Detecting
Medication Use: Observational Pilot Study in a Free-Living Setting [2.1726452647707792]
Parkinson's disease (PD) is a slowly progressive, neurodegenerative disease which causes motor symptoms including gait dysfunction.
Motor fluctuations are alterations between periods with a positive response to levodopa therapy ("on") and periods marked by re-emergence of PD symptoms ("off") as the response to medication wears off.
These fluctuations often affect gait speed and they increase in their disabling impact as PD progresses.
A sub-objective aims to evaluate whether indoor localisation, including its in-home gait speed features, could be used to assess motor fluctuations by detecting whether the person with PD is taking levodopa medications or withholding them.
arXiv Detail & Related papers (2023-08-03T08:55:21Z) - Efficient Deep Reinforcement Learning Requires Regulating Overfitting [91.88004732618381]
We show that high temporal-difference (TD) error on the validation set of transitions is the main culprit that severely affects the performance of deep RL algorithms.
We show that a simple online model selection method that targets the validation TD error is effective across state-based DMC and Gym tasks.
arXiv Detail & Related papers (2023-04-20T17:11:05Z) - Continuous Decoding of Daily-Life Hand Movements from Forearm Muscle
Activity for Enhanced Myoelectric Control of Hand Prostheses [78.120734120667]
We introduce a novel method, based on a long short-term memory (LSTM) network, to continuously map forearm EMG activity onto hand kinematics.
Ours is the first reported work on the prediction of hand kinematics that uses this challenging dataset.
Our results suggest that the presented method is suitable for the generation of control signals for the independent and proportional actuation of the multiple DOFs of state-of-the-art hand prostheses.
arXiv Detail & Related papers (2021-04-29T00:11:32Z) - Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using
Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neuro-logical disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z) - Robust Deep Reinforcement Learning against Adversarial Perturbations on
State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)