Prediction-Coherent LSTM-based Recurrent Neural Network for Safer
Glucose Predictions in Diabetic People
- URL: http://arxiv.org/abs/2009.03722v1
- Date: Tue, 8 Sep 2020 13:14:08 GMT
- Title: Prediction-Coherent LSTM-based Recurrent Neural Network for Safer
Glucose Predictions in Diabetic People
- Authors: Maxime De Bois, Mounîm A. El Yacoubi, Mehdi Ammi
- Abstract summary: We propose an LSTM-based recurrent neural network architecture and loss function that enhance the stability of predictions.
The study is conducted on type 1 and type 2 diabetic people, with a focus on predictions made 30 minutes ahead of time.
- Score: 4.692400531340393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of time-series forecasting, we propose an LSTM-based recurrent
neural network architecture and loss function that enhance the stability of the
predictions. In particular, the loss function penalizes the model not only on
the prediction error (mean-squared error) but also on the predicted variation
error.
We apply this idea to the prediction of future glucose values in diabetes,
which is a delicate task, as unstable predictions can leave patients in doubt
and lead them to take the wrong action, threatening their lives. The study is
conducted on type 1 and type 2 diabetic people, with a focus on predictions
made 30 minutes ahead of time.
First, we confirm the superiority, in the context of glucose prediction, of
the LSTM model by comparing it to other state-of-the-art models (Extreme
Learning Machine, Gaussian Process regressor, Support Vector Regressor).
Then, we show the importance of making stable predictions by smoothing the
predictions made by the models, resulting in an overall improvement of the
clinical acceptability of the models at the cost of a slight loss in prediction
accuracy.
Finally, we show that the proposed approach outperforms all baseline
results. More precisely, it trades a 4.3% loss in prediction accuracy
for a 27.1% improvement in clinical acceptability. Compared to
the moving-average post-processing method, this trade-off is more
efficient with our approach.
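The two-term loss described in the abstract can be illustrated with a minimal PyTorch-style sketch. The function name, the weighting coefficient `c`, and the use of consecutive differences to measure the predicted variations are assumptions made for illustration; they are not the paper's exact formulation.

```python
import torch

def prediction_coherent_mse(y_pred, y_true, c=0.5):
    """Sketch of a loss penalizing both the prediction error (MSE) and the
    error on the predicted variations. The coefficient c and the use of
    consecutive differences are illustrative assumptions."""
    # Standard mean-squared error on the predicted glucose values.
    mse = torch.mean((y_pred - y_true) ** 2)
    # Error on the predicted variations: compare consecutive differences of
    # the predictions with those of the ground truth.
    variation_mse = torch.mean(
        ((y_pred[1:] - y_pred[:-1]) - (y_true[1:] - y_true[:-1])) ** 2
    )
    # Weighted combination of the two penalties.
    return mse + c * variation_mse
```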
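Similarly, the moving-average post-processing baseline mentioned in the abstract amounts to causally averaging the most recent predictions; the window size below is an assumption, not the paper's configuration.

```python
import numpy as np

def smooth_predictions(preds, window=3):
    """Causal moving-average smoothing of a sequence of predictions.
    The window size is an illustrative assumption."""
    preds = np.asarray(preds, dtype=float)
    smoothed = np.empty_like(preds)
    for t in range(len(preds)):
        start = max(0, t - window + 1)
        # Average only the current and past predictions so the smoothing
        # can be applied in real time.
        smoothed[t] = preds[start:t + 1].mean()
    return smoothed
```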
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z) - Learning Sample Difficulty from Pre-trained Models for Reliable
Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - Machine Learning based prediction of Glucose Levels in Type 1 Diabetes
Patients with the use of Continuous Glucose Monitoring Data [0.0]
Continuous Glucose Monitoring (CGM) devices offer detailed, non-intrusive and real-time insights into a patient's blood glucose concentrations.
Leveraging advanced Machine Learning (ML) models to predict future glucose levels gives rise to substantial quality-of-life improvements.
arXiv Detail & Related papers (2023-02-24T19:10:40Z) - A Machine Learning Model for Predicting, Diagnosing, and Mitigating
Health Disparities in Hospital Readmission [0.0]
We propose a machine learning pipeline capable of making predictions as well as detecting and mitigating biases in the data and model predictions.
We evaluate the performance of the proposed method on a clinical dataset using accuracy and fairness measures.
arXiv Detail & Related papers (2022-06-13T16:07:25Z) - Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to view both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss to separate the features w.r.t. correct predictions from the ones w.r.t. incorrect predictions by two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z) - When in Doubt: Neural Non-Parametric Uncertainty Quantification for
Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z) - Integration of Clinical Criteria into the Training of Deep Models:
Application to Glucose Prediction for Diabetic People [4.692400531340393]
We propose the coherent mean squared glycemic error (gcMSE) loss function.
It penalizes the model during training not only on the prediction errors, but also on the predicted variation errors.
It makes it possible to adjust the weighting of the different areas in the error space to better focus on dangerous regions.
arXiv Detail & Related papers (2020-09-21T15:05:28Z) - Enhancing the Interpretability of Deep Models in Heathcare Through
Attention: Application to Glucose Forecasting for Diabetic People [4.692400531340393]
We evaluate the RETAIN model on the type-2 IDIAB and the type-1 OhioT1DM datasets.
We show that the RETAIN model offers a very good compromise between accuracy and interpretability.
arXiv Detail & Related papers (2020-09-08T13:27:52Z) - Interpreting Deep Glucose Predictive Models for Diabetic People Using
RETAIN [4.692400531340393]
We study the RETAIN architecture for the forecasting of future glucose values for diabetic people.
Thanks to its two-level attention mechanism, the RETAIN model is interpretable while remaining as efficient as standard neural networks.
arXiv Detail & Related papers (2020-09-08T13:20:15Z) - Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z) - Short Term Blood Glucose Prediction based on Continuous Glucose
Monitoring Data [53.01543207478818]
This study explores the use of Continuous Glucose Monitoring (CGM) data as input for digital decision support tools.
We investigate how Recurrent Neural Networks (RNNs) can be used for Short Term Blood Glucose (STBG) prediction.
arXiv Detail & Related papers (2020-02-06T16:39:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.