Using Reinforcement Learning to Simplify Mealtime Insulin Dosing for
People with Type 1 Diabetes: In-Silico Experiments
- URL: http://arxiv.org/abs/2309.09125v1
- Date: Sun, 17 Sep 2023 01:34:02 GMT
- Title: Using Reinforcement Learning to Simplify Mealtime Insulin Dosing for
People with Type 1 Diabetes: In-Silico Experiments
- Authors: Anas El Fathi, Marc D. Breton
- Abstract summary: People with type 1 diabetes (T1D) struggle to calculate the optimal insulin dose at mealtime.
We propose an RL agent that recommends the optimal meal-accompanying insulin dose corresponding to a qualitative meal (QM) strategy.
- Score: 0.40792653193642503
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: People with type 1 diabetes (T1D) struggle to calculate the optimal insulin
dose at mealtime, especially when under multiple daily injections (MDI)
therapy. In practice, they do not always perform rigorous and precise
calculations; instead, they may rely on intuition and previous
experience. Reinforcement learning (RL) has shown outstanding results in
outperforming humans on tasks requiring intuition and learning from experience.
In this work, we propose an RL agent that recommends the optimal
meal-accompanying insulin dose corresponding to a qualitative meal (QM)
strategy that does not require precise carbohydrate counting (CC) (e.g., a
usual meal at noon). The agent is trained using the soft actor-critic approach
and comprises long short-term memory (LSTM) neurons. For training, eighty
virtual subjects (VS) of the FDA-accepted UVA/Padova T1D adult population were
simulated using MDI therapy and QM strategy. For validation, the remaining
twenty VS were examined in 26-week scenarios, including intra- and inter-day
variabilities in glucose. In-silico results showed that the proposed
RL approach outperforms a baseline run-to-run approach and can replace the
standard CC approach. Specifically, after 26 weeks, the time-in-range
(70-180 mg/dL) and time-in-hypoglycemia (<70 mg/dL) were 73.1±11.6% and
2.0±1.8% using the RL-optimized QM strategy compared to 70.6±14.8% and
1.5±1.5% using CC. Such an approach can simplify diabetes treatment,
resulting in improved quality of life and glycemic outcomes.
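The baseline run-to-run approach the abstract compares against can be sketched as a simple titration rule: each qualitative meal category carries a dose that is nudged up or down after each meal according to the post-meal glucose outcome. The categories, dose sizes, thresholds, and step rules below are illustrative assumptions, not the authors' protocol.

```python
# Hypothetical sketch (not the paper's code): a qualitative-meal (QM) dose
# table adjusted run-to-run from post-meal glucose outcomes. All numbers
# (doses, step size, caps) are made up for illustration.

def init_doses(usual_dose_u: float = 6.0) -> dict:
    """Start each qualitative meal category at a fraction of a 'usual' dose."""
    return {"small": 0.5 * usual_dose_u,
            "usual": usual_dose_u,
            "large": 1.5 * usual_dose_u}

def run_to_run_update(doses, category, postprandial_bg_mgdl,
                      low=70.0, high=180.0, step_u=0.5, max_u=20.0):
    """Nudge one category's dose toward the target range: raise it when
    post-meal glucose ran high, lower it after a hypoglycemic reading."""
    d = doses[category]
    if postprandial_bg_mgdl > high:
        d = min(d + step_u, max_u)
    elif postprandial_bg_mgdl < low:
        d = max(d - step_u, 0.0)
    doses[category] = d
    return doses

doses = init_doses()
doses = run_to_run_update(doses, "usual", 220.0)  # ran high -> dose increases
print(doses["usual"])  # 6.5
```

The RL agent in the paper replaces this fixed step rule with a learned policy, but the interface is the same: a qualitative meal label in, a dose recommendation out.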
Related papers
- From Glucose Patterns to Health Outcomes: A Generalizable Foundation Model for Continuous Glucose Monitor Data Analysis [50.80532910808962]
We present GluFormer, a generative foundation model on biomedical temporal data based on a transformer architecture.
GluFormer generalizes to 15 different external datasets, including 4936 individuals across 5 different geographical regions.
It can also predict onset of future health outcomes even 4 years in advance.
arXiv Detail & Related papers (2024-08-20T13:19:06Z)
- Large Language Model Distilling Medication Recommendation Model [61.89754499292561]
We harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs).
Our research aims to transform existing medication recommendation methodologies using LLMs.
To mitigate this, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model.
arXiv Detail & Related papers (2024-02-05T08:25:22Z)
- Basal-Bolus Advisor for Type 1 Diabetes (T1D) Patients Using Multi-Agent Reinforcement Learning (RL) Methodology [0.0]
This paper presents a novel multi-agent reinforcement learning (RL) approach for personalized glucose control in individuals with type 1 diabetes (T1D).
The method employs a closed-loop system consisting of a blood glucose (BG) metabolic model and a multi-agent soft actor-critic RL model acting as the basal-bolus advisor.
Results demonstrate that the RL-based basal-bolus advisor significantly improves glucose control, reducing glycemic variability and increasing time spent within the target range.
arXiv Detail & Related papers (2023-07-17T23:50:51Z)
- Building Brains: Subvolume Recombination for Data Augmentation in Large Vessel Occlusion Detection [56.67577446132946]
A large training data set is required for a standard deep learning-based model to learn this strategy from data.
We propose an augmentation method that generates artificial training samples by recombining vessel tree segmentations of the hemispheres from different patients.
In line with the augmentation scheme, we use a 3D-DenseNet fed with task-specific input, fostering a side-by-side comparison between the hemispheres.
arXiv Detail & Related papers (2022-05-05T10:31:57Z)
- Offline Reinforcement Learning for Safer Blood Glucose Control in People with Type 1 Diabetes [1.1859913430860336]
Online reinforcement learning (RL) has been utilised as a method for further enhancing glucose control in diabetes devices.
This paper examines the utility of BCQ, CQL and TD3-BC in managing the blood glucose of the 30 virtual patients available within the FDA-approved UVA/Padova glucose dynamics simulator.
Offline RL can significantly increase time in the healthy blood glucose range from 61.6±0.3% to 65.3±0.5% when compared to the strongest state-of-the-art baseline.
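The time-in-range and time-in-hypoglycemia percentages quoted in these abstracts are simple fractions of glucose readings falling inside or below the target band. A minimal sketch, with a made-up CGM trace and a function name of our own:

```python
# Illustrative only: computing time-in-range (70-180 mg/dL) and
# time-in-hypoglycemia (<70 mg/dL) from a list of CGM readings.

def glycemic_metrics(cgm_mgdl, low=70.0, high=180.0):
    """Return (% of readings in [low, high], % of readings below low)."""
    n = len(cgm_mgdl)
    tir = 100.0 * sum(low <= g <= high for g in cgm_mgdl) / n
    hypo = 100.0 * sum(g < low for g in cgm_mgdl) / n
    return tir, hypo

readings = [65, 90, 110, 150, 190, 175, 130, 60, 100, 140]  # fabricated trace
tir, hypo = glycemic_metrics(readings)
print(tir, hypo)  # 70.0 20.0
```

Real evaluations, as in the papers above, average these percentages over weeks of simulated CGM data per virtual subject rather than over a handful of readings.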
arXiv Detail & Related papers (2022-04-07T11:52:12Z)
- Enhancing Food Intake Tracking in Long-Term Care with Automated Food Imaging and Nutrient Intake Tracking (AFINI-T) Technology [71.37011431958805]
Half of long-term care (LTC) residents are malnourished, increasing hospitalization, mortality, and morbidity, and lowering quality of life.
This paper presents the automated food imaging and nutrient intake tracking (AFINI-T) technology designed for LTC.
arXiv Detail & Related papers (2021-12-08T22:25:52Z)
- COVID-19 Detection from Chest X-ray Images using Imprinted Weights Approach [67.05664774727208]
Chest radiography is an alternative screening method for COVID-19.
Computer-aided diagnosis (CAD) has proven to be a viable solution at low cost and with fast speed.
To address this challenge, we propose the use of a low-shot learning approach named imprinted weights.
arXiv Detail & Related papers (2021-05-04T19:01:40Z)
- Deep Reinforcement Learning for Closed-Loop Blood Glucose Control [12.989855325491163]
We develop reinforcement learning techniques for automated blood glucose control.
On over 2.1 million hours of data from 30 simulated patients, our RL approach outperforms baseline control algorithms.
arXiv Detail & Related papers (2020-09-18T20:15:02Z)
- Challenging common bolus advisor for self-monitoring type-I diabetes patients using Reinforcement Learning [0.0]
Patients with diabetes who are self-monitoring have to decide right before each meal how much insulin they should take.
We challenged this standard bolus rule by applying Reinforcement Learning techniques to data simulated with T1DM, an FDA-approved simulator.
arXiv Detail & Related papers (2020-07-23T09:38:54Z)
- CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z)
- Basal Glucose Control in Type 1 Diabetes using Deep Reinforcement Learning: An In Silico Validation [16.93692520921499]
We propose a novel deep reinforcement learning model for single-hormone (insulin) and dual-hormone (insulin and glucagon) delivery.
In the adult cohort, percentage time in target range improved from 77.6% to 80.9% with single-hormone control.
In the adolescent cohort, percentage time in target range improved from 55.5% to 65.9% with single-hormone control.
arXiv Detail & Related papers (2020-05-18T20:13:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.