Deep Reinforcement Learning for Closed-Loop Blood Glucose Control
- URL: http://arxiv.org/abs/2009.09051v1
- Date: Fri, 18 Sep 2020 20:15:02 GMT
- Title: Deep Reinforcement Learning for Closed-Loop Blood Glucose Control
- Authors: Ian Fox, Joyce Lee, Rodica Pop-Busui, Jenna Wiens
- Abstract summary: We develop reinforcement learning techniques for automated blood glucose control.
On over 2.1 million hours of data from 30 simulated patients, our RL approach outperforms baseline control algorithms.
- Score: 12.989855325491163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People with type 1 diabetes (T1D) lack the ability to produce the insulin
their bodies need. As a result, they must continually make decisions about how
much insulin to self-administer to adequately control their blood glucose
levels. Longitudinal data streams captured from wearables, like continuous
glucose monitors, can help these individuals manage their health, but currently
the majority of the decision burden remains on the user. To relieve this
burden, researchers are working on closed-loop solutions that combine a
continuous glucose monitor and an insulin pump with a control algorithm in an
'artificial pancreas.' Such systems aim to estimate and deliver the appropriate
amount of insulin. Here, we develop reinforcement learning (RL) techniques for
automated blood glucose control. Through a series of experiments, we compare
the performance of different deep RL approaches to non-RL approaches. We
highlight the flexibility of RL approaches, demonstrating how they can adapt to
new individuals with little additional data. On over 2.1 million hours of data
from 30 simulated patients, our RL approach outperforms baseline control
algorithms, leading to a decrease in median glycemic risk of nearly 50%, from
8.34 to 4.24 and a decrease in total time hypoglycemic of 99.8%, from 4,610
days to 6. Moreover, these approaches are able to adapt to predictable meal
times (decreasing average risk by an additional 24% as meals increase in
predictability). This work demonstrates the potential of deep RL to help people
with T1D manage their blood glucose levels without requiring expert knowledge.
All of our code is publicly available, allowing for replication and extension.
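The abstract quantifies improvement via a glycemic risk score. As a minimal illustration (using the Kovatchev-style blood glucose risk index; the paper itself reports a closely related Magni risk score), the median-risk drop from 8.34 to 4.24 works out to roughly 49%:

```python
import math

def bg_risk(glucose_mgdl):
    # Kovatchev-style blood glucose risk index (illustrative; the paper
    # reports a closely related Magni risk score, not reproduced here).
    # Risk is ~0 near 112.5 mg/dL and rises for hypo- and hyperglycemia.
    f = 1.509 * (math.log(glucose_mgdl) ** 1.084 - 5.381)
    return 10.0 * f * f

def percent_reduction(before, after):
    return 100.0 * (before - after) / before

# Hypoglycemia (50 mg/dL) is penalized far more than mild hyperglycemia.
print(bg_risk(50) > bg_risk(140))               # True
# The abstract's median-risk drop, 8.34 -> 4.24:
print(round(percent_reduction(8.34, 4.24), 1))  # 49.2, i.e. "nearly 50%"
```

The asymmetry of this risk function is what makes it a sensible RL reward signal: it penalizes dangerous hypoglycemia much more sharply than equivalent deviations above range.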
Related papers
- From Glucose Patterns to Health Outcomes: A Generalizable Foundation Model for Continuous Glucose Monitor Data Analysis [50.80532910808962]
We present GluFormer, a generative foundation model for biomedical temporal data based on a transformer architecture.
GluFormer generalizes to 15 different external datasets, including 4936 individuals across 5 different geographical regions.
It can also predict the onset of future health outcomes up to 4 years in advance.
arXiv Detail & Related papers (2024-08-20T13:19:06Z)
- Attention Networks for Personalized Mealtime Insulin Dosing in People with Type 1 Diabetes [0.30723404270319693]
We demonstrate how a reinforcement learning agent, employing a self-attention encoder network, can effectively mimic and enhance this intuitive process.
Results reveal a significant reduction in glycemic risk, from 16.5 to 9.6 in scenarios using sensor-augmented pump and from 9.1 to 6.7 in scenarios using automated insulin delivery.
arXiv Detail & Related papers (2024-06-18T17:59:32Z)
- Compressing Deep Reinforcement Learning Networks with a Dynamic Structured Pruning Method for Autonomous Driving [63.155562267383864]
Deep reinforcement learning (DRL) has shown remarkable success in complex autonomous driving scenarios.
DRL models inevitably bring high memory consumption and computation, which hinders their wide deployment in resource-limited autonomous driving devices.
We introduce a novel dynamic structured pruning approach that gradually removes a DRL model's unimportant neurons during the training stage.
arXiv Detail & Related papers (2024-02-07T09:00:30Z)
- Using Reinforcement Learning to Simplify Mealtime Insulin Dosing for People with Type 1 Diabetes: In-Silico Experiments [0.40792653193642503]
People with type 1 diabetes (T1D) struggle to calculate the optimal insulin dose at mealtime.
We propose an RL agent that recommends the optimal meal-accompanying insulin dose corresponding to a qualitative meal (QM) strategy.
arXiv Detail & Related papers (2023-09-17T01:34:02Z)
- Efficient Diffusion Policies for Offline Reinforcement Learning [85.73757789282212]
Diffusion-QL significantly boosts the performance of offline RL by representing a policy with a diffusion model.
We propose efficient diffusion policy (EDP) to overcome these two challenges.
EDP constructs actions from corrupted ones at training to avoid running the sampling chain.
arXiv Detail & Related papers (2023-05-31T17:55:21Z)
- Building Brains: Subvolume Recombination for Data Augmentation in Large Vessel Occlusion Detection [56.67577446132946]
A large training data set is required for a standard deep learning-based model to learn this strategy from data.
We propose an augmentation method that generates artificial training samples by recombining vessel tree segmentations of the hemispheres from different patients.
In line with the augmentation scheme, we use a 3D-DenseNet fed with task-specific input, fostering a side-by-side comparison between the hemispheres.
arXiv Detail & Related papers (2022-05-05T10:31:57Z)
- Offline Reinforcement Learning for Safer Blood Glucose Control in People with Type 1 Diabetes [1.1859913430860336]
Online reinforcement learning (RL) has been utilised as a method for further enhancing glucose control in diabetes devices.
This paper examines the utility of BCQ, CQL and TD3-BC in managing the blood glucose of the 30 virtual patients available within the FDA-approved UVA/Padova glucose dynamics simulator.
Offline RL can significantly increase time in the healthy blood glucose range, from 61.6 ± 0.3% to 65.3 ± 0.5%, when compared to the strongest state-of-the-art baseline.
arXiv Detail & Related papers (2022-04-07T11:52:12Z)
- Enhancing Food Intake Tracking in Long-Term Care with Automated Food Imaging and Nutrient Intake Tracking (AFINI-T) Technology [71.37011431958805]
Half of long-term care (LTC) residents are malnourished, increasing hospitalization, mortality, and morbidity, and lowering quality of life.
This paper presents the automated food imaging and nutrient intake tracking (AFINI-T) technology designed for LTC.
arXiv Detail & Related papers (2021-12-08T22:25:52Z)
- SOUL: An Energy-Efficient Unsupervised Online Learning Seizure Detection Classifier [68.8204255655161]
Implantable devices that record neural activity and detect seizures have been adopted to issue warnings or trigger neurostimulation to suppress seizures.
For an implantable seizure detection system, a low power, at-the-edge, online learning algorithm can be employed to dynamically adapt to neural signal drifts.
SOUL was fabricated in TSMC's 28 nm process, occupies 0.1 mm², and achieves 1.5 nJ/classification energy efficiency, at least 24x more efficient than the state of the art.
arXiv Detail & Related papers (2021-10-01T23:01:20Z)
- LSTMs and Deep Residual Networks for Carbohydrate and Bolus Recommendations in Type 1 Diabetes Management [4.01573226844961]
We introduce an LSTM-based approach to blood glucose level prediction aimed at "what if" scenarios.
We then derive a novel architecture for the same recommendation task.
Experimental evaluations using real patient data from the OhioT1DM dataset show that the new integrated architecture compares favorably with the previous LSTM-based approach.
arXiv Detail & Related papers (2021-03-06T19:06:14Z)
- Basal Glucose Control in Type 1 Diabetes using Deep Reinforcement Learning: An In Silico Validation [16.93692520921499]
We propose a novel deep reinforcement learning model for single-hormone (insulin) and dual-hormone (insulin and glucagon) delivery.
In the adult cohort, percentage time in target range improved from 77.6% to 80.9% with single-hormone control.
In the adolescent cohort, percentage time in target range improved from 55.5% to 65.9% with single-hormone control.
arXiv Detail & Related papers (2020-05-18T20:13:16Z)
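Several entries above report "percentage time in target range." A minimal sketch of that metric, assuming the standard clinical range of 70-180 mg/dL (the exact bounds used by each paper may differ):

```python
def time_in_range(trace_mgdl, lo=70.0, hi=180.0):
    # Percentage of CGM readings inside the healthy range. The 70-180 mg/dL
    # default is the common clinical convention; individual papers above
    # may use slightly different bounds.
    in_range = sum(1 for g in trace_mgdl if lo <= g <= hi)
    return 100.0 * in_range / len(trace_mgdl)

# Toy 8-sample CGM trace (mg/dL): one hypo (65) and one hyper (190) reading.
trace = [65, 90, 110, 150, 175, 190, 160, 120]
print(time_in_range(trace))  # 75.0
```

Reported gains such as 55.5% to 65.9% time in range correspond to this kind of count over a long simulated CGM trace.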
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.