Offline reinforcement learning with uncertainty for treatment strategies
in sepsis
- URL: http://arxiv.org/abs/2107.04491v1
- Date: Fri, 9 Jul 2021 15:29:05 GMT
- Title: Offline reinforcement learning with uncertainty for treatment strategies
in sepsis
- Authors: Ran Liu (1 and 2), Joseph L. Greenstein (1 and 2), James C. Fackler
(3), Jules Bergmann (3), Melania M. Bembea (3 and 4), Raimond L. Winslow (1
and 2) ((1) Institute for Computational Medicine, the Johns Hopkins
University, (2) Department of Biomedical Engineering, the Johns Hopkins
University School of Medicine and Whiting School of Engineering, (3)
Department of Anesthesiology and Critical Care Medicine, the Johns Hopkins
University, (4) Department of Pediatrics, the Johns Hopkins University School
of Medicine)
- Abstract summary: We present a novel application of reinforcement learning in which we identify optimal recommendations for sepsis treatment from data.
Rather than a single recommendation, our method can present several treatment options.
We examine learned policies and discover that reinforcement learning is biased against aggressive intervention due to the confounding relationship between mortality and level of treatment received.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Guideline-based treatment for sepsis and septic shock is difficult because
sepsis is a disparate range of life-threatening organ dysfunctions whose
pathophysiology is not fully understood. Early intervention in sepsis is
crucial for patient outcome, yet those interventions have adverse effects and
are frequently overadministered. Greater personalization is necessary, as no
single action is suitable for all patients. We present a novel application of
reinforcement learning in which we identify optimal recommendations for sepsis
treatment from data, estimate their confidence level, and identify treatment
options infrequently observed in training data. Rather than a single
recommendation, our method can present several treatment options. We examine
learned policies and discover that reinforcement learning is biased against
aggressive intervention due to the confounding relationship between mortality
and level of treatment received. We mitigate this bias using subspace learning,
and develop methodology that can yield more accurate learned policies across
healthcare applications.
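The abstract describes an offline, uncertainty-aware recommendation scheme: score discretized treatments from retrospective data, attach a confidence estimate to each recommendation, flag options rarely observed in training, and surface several candidates rather than a single action. The following is a minimal sketch of that idea, assuming toy data, an eight-feature physiological state, a five-level action grid, and a bootstrap ensemble of one-step Q regressors for confidence; it is not the authors' actual model and it omits the subspace-learning step used to mitigate treatment-level confounding.

```python
# Minimal sketch, assuming discretized treatment actions, made-up physiological
# features, and a bootstrap ensemble of one-step Q regressors for confidence.
# None of these choices are taken from the paper itself.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy offline dataset of (state, action, reward) tuples.
N_ACTIONS = 5                                   # e.g. discretized dose levels
states = rng.normal(size=(2000, 8))             # 8 assumed physiological features
actions = rng.integers(0, N_ACTIONS, size=2000)
rewards = rng.normal(size=2000)                 # e.g. shaped survival signal

def fit_q_ensemble(n_members=5):
    """Fit an ensemble of one-step Q regressors on bootstrap resamples.
    A full method would iterate fitted-Q backups; one step keeps the sketch short."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(states), size=len(states))
        X = np.column_stack([states[idx], actions[idx]])
        members.append(RandomForestRegressor(n_estimators=50).fit(X, rewards[idx]))
    return members

def recommend(state, members, min_support=20):
    """Score every action, attach ensemble disagreement as an uncertainty
    estimate, and flag actions rarely observed in the training data."""
    support = np.bincount(actions, minlength=N_ACTIONS)
    options = []
    for a in range(N_ACTIONS):
        x = np.append(state, a).reshape(1, -1)
        qs = np.array([m.predict(x)[0] for m in members])
        options.append({"action": a,
                        "q_mean": float(qs.mean()),
                        "q_std": float(qs.std()),        # high std = low confidence
                        "rare": bool(support[a] < min_support)})
    return sorted(options, key=lambda o: o["q_mean"], reverse=True)

members = fit_q_ensemble()
for option in recommend(states[0], members)[:3]:         # several options, not one
    print(option)
```

Presenting the top few options together with their ensemble disagreement, instead of a single argmax action, mirrors the abstract's point that several treatment options can be offered rather than one recommendation.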
Related papers
- Identifying Differential Patient Care Through Inverse Intent Inference [3.4150521058470664]
Sepsis is a life-threatening condition defined by end-organ dysfunction due to a dysregulated host response to infection.
It has been reported in numerous studies that disparities in care exist across the trajectory of patient stay in the emergency department and intensive care unit.
arXiv Detail & Related papers (2024-11-11T21:21:32Z)
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- Safe and Interpretable Estimation of Optimal Treatment Regimes [54.257304443780434]
We operationalize a safe and interpretable framework to identify optimal treatment regimes.
Our findings support personalized treatment strategies based on a patient's medical history and pharmacological features.
arXiv Detail & Related papers (2023-10-23T19:59:10Z)
- Learning Optimal Treatment Strategies for Sepsis Using Offline Reinforcement Learning in Continuous Space [4.031538204818658]
We propose a new medical decision model based on historical data to help clinicians recommend the best reference option for real-time treatment.
Our model combines offline reinforcement learning with deep reinforcement learning to address the fact that, in healthcare, traditional reinforcement learning cannot interact directly with the environment.
arXiv Detail & Related papers (2022-06-22T16:17:21Z)
- A Conservative Q-Learning approach for handling distribution shift in sepsis treatment strategies [0.0]
There is no consensus on which interventions work best, and different patients respond very differently to the same treatment.
Deep reinforcement learning methods can be used to derive optimal treatment policies that mirror physician actions; a minimal conservative Q-learning sketch appears after this list.
The learned policy could help clinicians in intensive care units make better decisions when treating septic patients and improve survival rates.
arXiv Detail & Related papers (2022-03-25T19:50:18Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases [57.90226879210227]
FedCy is a federated semi-supervised learning (FSSL) method that combines federated learning (FL) and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- Optimal discharge of patients from intensive care via a data-driven policy learning framework [58.720142291102135]
It is important that the patient discharge task addresses the nuanced trade-off between decreasing a patient's length of stay and the risk of readmission or even death following the discharge decision.
This work introduces an end-to-end general framework for capturing this trade-off to recommend optimal discharge timing decisions.
A data-driven approach is used to derive a parsimonious, discrete state space representation that captures a patient's physiological condition; a toy state-discretization sketch follows this list.
arXiv Detail & Related papers (2021-12-17T04:39:33Z)
- SurvITE: Learning Heterogeneous Treatment Effects from Time-to-Event Data [83.50281440043241]
We study the problem of inferring heterogeneous treatment effects from time-to-event data.
We propose a novel deep learning method for treatment-specific hazard estimation based on balancing representations.
arXiv Detail & Related papers (2021-10-26T20:13:17Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Unifying Cardiovascular Modelling with Deep Reinforcement Learning for Uncertainty Aware Control of Sepsis Treatment [0.2399911126932526]
Sepsis is the leading cause of mortality in the ICU, responsible for 6% of all hospitalizations and 35% of all in-hospital deaths in the USA.
There is no universally agreed-upon strategy for vasopressor and fluid administration.
We propose a novel approach, exploiting and unifying complementary strengths of Mathematical Modelling, Deep Learning, Reinforcement Learning and Uncertainty Quantification.
arXiv Detail & Related papers (2021-01-21T07:32:02Z)
- Optimizing Medical Treatment for Sepsis in Intensive Care: from Reinforcement Learning to Pre-Trial Evaluation [2.908482270923597]
Our aim is to establish a framework in which retrospectively optimizing interventions with reinforcement learning (RL) provides a regulatory-compliant pathway to prospective clinical testing of the learned policies.
We focus on infections in intensive care units, which are among the major causes of death and are difficult to treat because of complex and opaque patient dynamics.
arXiv Detail & Related papers (2020-03-13T20:31:47Z)
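As noted in the conservative Q-learning entry above, CQL addresses distribution shift in offline data by penalizing Q-values of actions the clinician (behaviour) policy rarely took. The sketch below shows the standard CQL(H) penalty added to a one-step TD loss on a toy discrete-action batch; the network sizes, penalty weight, and random data are illustrative assumptions rather than that paper's configuration.

```python
# Hedged sketch of a conservative Q-learning (CQL) update on a toy batch.
# Architecture, penalty weight ALPHA, and the data are assumptions for illustration.
import torch
import torch.nn as nn

N_ACTIONS, STATE_DIM, GAMMA, ALPHA = 5, 8, 0.99, 1.0

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def cql_loss(batch):
    s, a, r, s2, done = batch                       # offline transitions only
    q_all = q_net(s)                                # (B, N_ACTIONS)
    q_data = q_all.gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * (1 - done) * target_net(s2).max(dim=1).values
    td = nn.functional.mse_loss(q_data, target)
    # CQL(H) regularizer: push down Q-values over all actions (logsumexp) and
    # push up Q-values of actions actually taken in the dataset, which keeps
    # the learned policy close to clinician behaviour under distribution shift.
    penalty = (torch.logsumexp(q_all, dim=1) - q_data).mean()
    return td + ALPHA * penalty

# One toy gradient step on random data, just to show the pieces fit together.
B = 32
batch = (torch.randn(B, STATE_DIM), torch.randint(0, N_ACTIONS, (B,)),
         torch.randn(B), torch.randn(B, STATE_DIM), torch.zeros(B))
loss = cql_loss(batch)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
```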
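The discharge-policy entry above derives a parsimonious, discrete state space from data. A common way to realize that idea, sketched here with assumed features and an assumed cluster count, is to standardize physiological measurements and cluster them, then treat cluster membership as the discrete state for tabular policy learning; this illustrates the general technique, not that paper's exact construction.

```python
# Hedged sketch: a data-driven discrete state space via clustering of
# physiological features. Feature set and cluster count are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
vitals = rng.normal(size=(5000, 6))   # assumed features, e.g. HR, MAP, SpO2, lactate

scaler = StandardScaler()
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0)
state_ids = kmeans.fit_predict(scaler.fit_transform(vitals))

# Each patient-hour now maps to one of 20 discrete states; transition counts
# between states can then feed a tabular policy-learning or planning step.
def to_state(new_vitals: np.ndarray) -> int:
    return int(kmeans.predict(scaler.transform(new_vitals.reshape(1, -1)))[0])

print(to_state(vitals[0]), np.bincount(state_ids).min())
```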