Pruning the Way to Reliable Policies: A Multi-Objective Deep Q-Learning
Approach to Critical Care
- URL: http://arxiv.org/abs/2306.08044v2
- Date: Thu, 13 Jul 2023 20:23:43 GMT
- Title: Pruning the Way to Reliable Policies: A Multi-Objective Deep Q-Learning
Approach to Critical Care
- Authors: Ali Shirali, Alexander Schubert, Ahmed Alaa
- Abstract summary: We introduce a deep Q-learning approach able to obtain more reliable critical care policies.
We achieve this by first pruning the action set based on all available rewards, and second training a final model based on the sparse main reward but with a restricted action set.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most medical treatment decisions are sequential in nature. Hence, there is
substantial hope that reinforcement learning may make it possible to formulate
precise data-driven treatment plans. However, a key challenge for most
applications in this field is the sparse nature of primarily mortality-based
reward functions, leading to decreased stability of offline estimates. In this
work, we introduce a deep Q-learning approach able to obtain more reliable
critical care policies. This method integrates relevant but noisy intermediate
biomarker signals into the reward specification, without compromising the
optimization of the main outcome of interest (e.g. patient survival). We
achieve this by first pruning the action set based on all available rewards,
and second training a final model based on the sparse main reward but with a
restricted action set. By disentangling accurate and approximated rewards
through action pruning, potential distortions of the main objective are
minimized, all while enabling the extraction of valuable information from
intermediate signals that can guide the learning process. We evaluate our
method in both off-policy and offline settings using simulated environments and
real health records of patients in intensive care units. Our empirical results
indicate that pruning significantly reduces the size of the action space while
staying mostly consistent with the actions taken by physicians, outperforming
the current state-of-the-art offline reinforcement learning method, conservative
Q-learning. Our work is a step towards developing reliable policies by
effectively harnessing the wealth of available information in data-intensive
critical care environments.
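To make the two-stage recipe in the abstract concrete, below is a minimal sketch, not the authors' released implementation: Q-estimates trained on the auxiliary (intermediate biomarker) rewards build a per-state action mask, and the final deep Q-network is then trained on the sparse survival reward with its bootstrap maximisation restricted to the surviving actions. The network sizes, the 25-action discrete treatment space, the intersection-of-top-k pruning rule, and the names QNet, prune_actions, and dqn_loss are illustrative assumptions.
```python
# Minimal sketch (assumption: not the authors' reference implementation) of the
# two-stage approach described above: prune the action set using Q-estimates
# learned from all available rewards, then train a standard deep Q-network on
# the sparse main reward with the bootstrap restricted to surviving actions.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 48, 25, 0.99   # illustrative sizes, not from the paper


class QNet(nn.Module):
    """Plain MLP Q-network over a discrete treatment space."""
    def __init__(self, state_dim=STATE_DIM, n_actions=N_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s):
        return self.net(s)


def prune_actions(aux_q_nets, states, keep_frac=0.5):
    """Stage 1: keep, per state, only actions ranked in the top `keep_frac`
    under every auxiliary-reward Q-head (intersection of per-reward shortlists).
    The exact pruning rule is an assumption for illustration."""
    k = max(1, int(keep_frac * N_ACTIONS))
    rows = torch.arange(states.shape[0]).unsqueeze(1)
    mask = torch.ones(states.shape[0], N_ACTIONS, dtype=torch.bool)
    for q in aux_q_nets:
        with torch.no_grad():
            topk = q(states).topk(k, dim=1).indices   # per-reward shortlist
        keep = torch.zeros_like(mask)
        keep[rows, topk] = True
        mask &= keep                                  # intersect shortlists
    mask |= ~mask.any(dim=1, keepdim=True)            # fallback: never leave a state with no action
    return mask


def dqn_loss(q_net, target_net, batch, next_action_mask):
    """Stage 2: standard DQN loss on the sparse main reward; the max in the
    bootstrap target is taken only over actions that survived pruning.
    next_action_mask comes from prune_actions(aux_q_nets, s_next)."""
    s, a, r, s_next, done = batch                     # a: int64 actions, done: float flags
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).masked_fill(~next_action_mask, float("-inf"))
        target = r + GAMMA * (1.0 - done) * q_next.max(dim=1).values
    return nn.functional.smooth_l1_loss(q_sa, target)
```
In an offline pipeline, the auxiliary Q-heads would be fit first on the logged ICU data, the mask computed per state, and then held fixed while the main-reward network and its target network are trained as usual.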
Related papers
- OMG-RL: Offline Model-based Guided Reward Learning for Heparin Treatment [0.4998632546280975]
This study focuses on developing a reward function that reflects the clinician's intentions.
We learn a parameterized reward function that includes the expert's intentions from limited data.
This approach can be broadly utilized not only for the heparin dosing problem but also for RL-based medication dosing tasks in general.
arXiv Detail & Related papers (2024-09-20T07:51:37Z)
- Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks [58.469818546042696]
We study the sample efficiency of OPE with human preference and establish a statistical guarantee for it.
By appropriately selecting the size of a ReLU network, we show that one can leverage any low-dimensional manifold structure in the Markov decision process.
arXiv Detail & Related papers (2023-10-16T16:27:06Z)
- Deep Offline Reinforcement Learning for Real-world Treatment Optimization Applications [3.770564448216192]
We introduce a practical and theoretically grounded transition sampling approach to address action imbalance during offline RL training.
We perform extensive experiments on two real-world tasks for diabetes and sepsis treatment optimization.
Across a range of principled and clinically relevant metrics, we show that our proposed approach enables substantial improvements in expected health outcomes.
arXiv Detail & Related papers (2023-02-15T09:30:57Z)
- Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes [93.61202366677526]
We study offline reinforcement learning (RL) in the face of unmeasured confounders.
We propose several policy learning methods with finite-sample suboptimality guarantees for finding the optimal in-class policy.
arXiv Detail & Related papers (2022-09-18T22:03:55Z)
- Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble [16.92791301062903]
We propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution.
Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with clipped Q-learning (a minimal sketch of this clipped-ensemble target is given after this list).
arXiv Detail & Related papers (2021-10-04T16:40:13Z)
- Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning [63.53407136812255]
Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.
Existing Q-learning and actor-critic based off-policy RL algorithms fail when bootstrapping from out-of-distribution (OOD) actions or states.
We propose Uncertainty Weighted Actor-Critic (UWAC), an algorithm that detects OOD state-action pairs and down-weights their contribution in the training objectives accordingly.
arXiv Detail & Related papers (2021-05-17T20:16:46Z)
- Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds [21.520045697447372]
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies.
This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation.
We develop a practical algorithm through a primal-dual optimization-based approach.
arXiv Detail & Related papers (2021-03-09T22:31:20Z)
- Scalable Bayesian Inverse Reinforcement Learning [93.27920030279586]
We introduce Approximate Variational Reward Imitation Learning (AVRIL).
Our method addresses the ill-posed nature of the inverse reinforcement learning problem.
Applying our method to real medical data alongside classic control simulations, we demonstrate Bayesian reward inference in environments beyond the scope of current methods.
arXiv Detail & Related papers (2021-02-12T12:32:02Z)
- Semi-Supervised Off Policy Reinforcement Learning [3.48396189165489]
Health-outcome information is often not well coded but rather embedded in clinical notes.
We propose a semi-supervised learning (SSL) approach that efficiently leverages a small labeled dataset with the true outcome observed and a large unlabeled dataset with outcome surrogates.
Our method is at least as efficient as the supervised approach, and moreover safe, as it is robust to mis-specification of the imputation models.
arXiv Detail & Related papers (2020-12-09T00:59:12Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
- Optimizing Medical Treatment for Sepsis in Intensive Care: from Reinforcement Learning to Pre-Trial Evaluation [2.908482270923597]
Our aim is to establish a framework in which retrospectively optimizing interventions with reinforcement learning (RL) gives us a regulatory-compliant pathway to prospective clinical testing of the learned policies.
We focus on infections in intensive care units, which are among the major causes of death and are difficult to treat because of complex and opaque patient dynamics.
arXiv Detail & Related papers (2020-03-13T20:31:47Z)
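As referenced in the diversified Q-ensemble entry above, here is a minimal sketch of the clipped ensemble bootstrap target, adapted to a discrete action space; the ensemble size, network width, and helper names are illustrative assumptions rather than details taken from that paper.
```python
# Minimal sketch (an assumption-laden, discrete-action adaptation, not the
# paper's code) of a clipped Q-ensemble target for uncertainty-based offline RL:
# bootstrap from the minimum over N independently initialised Q-networks, so
# actions with high ensemble disagreement receive pessimistic value estimates.
import torch
import torch.nn as nn

GAMMA, N_ENSEMBLE = 0.99, 10   # illustrative ensemble size


def make_q_net(state_dim, n_actions):
    return nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                         nn.Linear(256, n_actions))


def clipped_ensemble_target(q_nets, rewards, next_states, next_actions, dones):
    """Target = r + gamma * (1 - done) * min_i Q_i(s', a');
    next_actions are assumed to come from the agent's current policy or from
    the argmax of one ensemble member."""
    with torch.no_grad():
        q_next = torch.stack([
            q(next_states).gather(1, next_actions.unsqueeze(1)).squeeze(1)
            for q in q_nets
        ])                                   # shape: (N_ENSEMBLE, batch)
        q_min = q_next.min(dim=0).values     # the "clip": pessimistic ensemble minimum
    return rewards + GAMMA * (1.0 - dones) * q_min
```
Taking the minimum over independently initialised Q-networks makes the target pessimistic exactly where the ensemble disagrees, which is what suppresses over-estimated out-of-distribution actions in the offline setting.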
This list is automatically generated from the titles and abstracts of the papers in this site.