Blood Glucose Control Via Pre-trained Counterfactual Invertible Neural Networks
- URL: http://arxiv.org/abs/2405.17458v2
- Date: Thu, 18 Jul 2024 06:54:04 GMT
- Title: Blood Glucose Control Via Pre-trained Counterfactual Invertible Neural Networks
- Authors: Jingchi Jiang, Rujia Shen, Boran Wang, Yi Guan
- Abstract summary: We propose an introspective reinforcement learning (RL) approach based on Counterfactual Invertible Neural Networks (CINN).
We use the pre-trained CINN as a frozen introspective block of the RL agent, which integrates forward prediction and counterfactual inference to guide the policy updates.
We experimentally validate the accuracy and generalization ability of the pre-trained CINN in BG prediction and counterfactual inference for action.
- Score: 3.7217371773133325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Type 1 diabetes mellitus (T1D) is characterized by insulin deficiency and blood glucose (BG) control issues. The state-of-the-art solution for continuous BG control is reinforcement learning (RL), where an agent dynamically adjusts exogenous insulin doses in time to maintain BG levels within the target range. However, due to the lack of action guidance, the agent often has to learn from randomized trials and may latch onto misleading correlations between exogenous insulin doses and BG levels, which can lead to instability and unsafe control. To address these challenges, we propose an introspective RL approach based on Counterfactual Invertible Neural Networks (CINN). We use the pre-trained CINN as a frozen introspective block of the RL agent, which integrates forward prediction and counterfactual inference to guide policy updates, promoting more stable and safer BG control. Constructed according to an interpretable causal order, CINN employs bidirectional encoders with affine coupling layers to ensure invertibility and uses orthogonal weight normalization to enhance trainability, thereby ensuring the bidirectional differentiability of network parameters. We experimentally validate the accuracy and generalization ability of the pre-trained CINN in BG prediction and counterfactual inference for action. Furthermore, our experimental results highlight the effectiveness of the pre-trained CINN in guiding RL policy updates toward more accurate and safer BG control.
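To make the mechanism named above concrete, here is a minimal sketch of one affine coupling block with orthogonal weight normalization on its conditioner. It is a generic PyTorch illustration of the technique the abstract names, not the authors' CINN: the input split, hidden width, and the use of nn.utils.parametrizations.orthogonal are assumptions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling block: invertible by construction.

    The input is split into (x1, x2); x1 passes through unchanged and
    parameterizes an affine transform of x2, so the inverse is closed form.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        # Orthogonal parametrization of the first conditioner layer, in the
        # spirit of the orthogonal weight normalization the abstract mentions.
        self.net = nn.Sequential(
            nn.utils.parametrizations.orthogonal(nn.Linear(self.d, hidden)),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        log_s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=-1)    # prediction direction

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)  # counterfactual direction

# Round-trip check on dummy data: inverse(forward(x)) recovers x.
block = AffineCoupling(dim=6)
x = torch.randn(4, 6)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-4)
```

Because x1 passes through unchanged and fully determines the scale and shift applied to x2, the inverse is available in closed form; this is what allows the same kind of block to run forward for BG prediction and backward for counterfactual inference.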
Related papers
- Privacy Preserved Blood Glucose Level Cross-Prediction: An Asynchronous Decentralized Federated Learning Approach [13.363740869325646]
Newly diagnosed Type 1 Diabetes (T1D) patients often struggle to obtain effective Blood Glucose (BG) prediction models.
We propose "GluADFL", blood Glucose prediction by Asynchronous Decentralized Federated Learning.
arXiv Detail & Related papers (2024-06-21T17:57:39Z) - Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective [65.10019978876863]
Diffusion-Based Purification (DBP) has emerged as an effective defense mechanism against adversarial attacks.
In this paper, we argue that the inherent stochasticity in the DBP process is the primary driver of its robustness.
arXiv Detail & Related papers (2024-04-22T16:10:38Z) - An Improved Strategy for Blood Glucose Control Using Multi-Step Deep Reinforcement Learning [3.5757761767474876]
Blood Glucose (BG) control involves keeping an individual's BG within a healthy range through extracorporeal insulin injections.
Recent research has been devoted to exploring individualized and automated BG control approaches.
Deep Reinforcement Learning (DRL) shows potential as an emerging approach.
arXiv Detail & Related papers (2024-03-12T11:53:00Z) - GARNN: An Interpretable Graph Attentive Recurrent Neural Network for
Predicting Blood Glucose Levels via Multivariate Time Series [12.618792803757714]
We propose interpretable graph attentive recurrent neural networks (GARNNs) to model multi-modal data.
GARNNs achieve the best prediction accuracy and provide high-quality temporal interpretability.
These findings underline the potential of GARNN as a robust tool for improving diabetes care.
arXiv Detail & Related papers (2024-02-26T01:18:53Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Offline Reinforcement Learning for Safer Blood Glucose Control in People
with Type 1 Diabetes [1.1859913430860336]
Online reinforcement learning (RL) has been utilised as a method for further enhancing glucose control in diabetes devices.
This paper examines the utility of BCQ, CQL and TD3-BC in managing the blood glucose of the 30 virtual patients available within the FDA-approved UVA/Padova glucose dynamics simulator.
Offline RL can significantly increase time in the healthy blood glucose range from 61.6 ± 0.3% to 65.3 ± 0.5% when compared to the strongest state-of-the-art baseline.
arXiv Detail & Related papers (2022-04-07T11:52:12Z) - Differentially private training of neural networks with Langevin
dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Towards Evaluating and Training Verifiably Robust Neural Networks [81.39994285743555]
We study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when choosing appropriate bounding lines.
We propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks to obtain lower verified errors.
arXiv Detail & Related papers (2021-04-01T13:03:48Z) - Controlling Level of Unconsciousness by Titrating Propofol with Deep
Reinforcement Learning [5.276232626689567]
Reinforcement Learning can be used to fit a mapping from patient state to a medication regimen.
Deep RL replaces the table with a deep neural network and has been used to learn medication regimens from registry databases.
arXiv Detail & Related papers (2020-08-27T18:47:08Z)
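As a minimal illustration of the contrast described in this entry (hypothetical PyTorch code, not the paper's implementation; the state dimension, discretization, and number of candidate doses are placeholders): the tabular value lookup is replaced by a small network that maps a continuous patient state to a value per candidate dose.

```python
import torch
import torch.nn as nn

N_STATES, N_DOSES = 100, 5          # hypothetical discretization / candidate doses

# Tabular RL: one learned value per (discretized state, dose) pair.
q_table = torch.zeros(N_STATES, N_DOSES)

# Deep RL: a network generalizes over continuous patient-state vectors.
class QNetwork(nn.Module):
    def __init__(self, state_dim=8, n_doses=N_DOSES, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_doses),  # one value per candidate dose
        )

    def forward(self, state):
        return self.net(state)

# Greedy dose selection from either representation.
state_idx, state_vec = 42, torch.randn(8)
dose_from_table = int(q_table[state_idx].argmax())
dose_from_net = int(QNetwork()(state_vec).argmax())
```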
- Short Term Blood Glucose Prediction based on Continuous Glucose Monitoring Data [53.01543207478818]
This study explores the use of Continuous Glucose Monitoring (CGM) data as input for digital decision support tools.
We investigate how Recurrent Neural Networks (RNNs) can be used for Short Term Blood Glucose (STBG) prediction.
arXiv Detail & Related papers (2020-02-06T16:39:44Z)
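A minimal sketch of the kind of model this entry describes, assuming PyTorch; the window length, five-minute sampling interval, and prediction horizon are illustrative assumptions rather than the study's configuration.

```python
import torch
import torch.nn as nn

class CGMForecaster(nn.Module):
    """Minimal LSTM mapping a window of CGM readings to a short-term BG forecast."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, cgm_window):             # cgm_window: (batch, steps, 1)
        _, (h, _) = self.lstm(cgm_window)
        return self.head(h[-1])                # predicted BG some minutes ahead

# Toy usage: two dummy sequences of 12 five-minute CGM samples.
model = CGMForecaster()
prediction = model(torch.randn(2, 12, 1))      # shape (2, 1)
```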