Challenges for Reinforcement Learning in Healthcare
- URL: http://arxiv.org/abs/2103.05612v1
- Date: Tue, 9 Mar 2021 18:34:54 GMT
- Title: Challenges for Reinforcement Learning in Healthcare
- Authors: Elsa Riachi, Muhammad Mamdani, Michael Fralick, Frank Rudzicz
- Abstract summary: A reinforcement learning agent could be trained to provide treatment recommendations for physicians.
However, a number of difficulties arise when using RL beyond benchmark environments.
- Score: 13.569317350274408
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many healthcare decisions involve navigating through a multitude of treatment
options in a sequential and iterative manner to find an optimal treatment
pathway with the goal of an optimal patient outcome. Such optimization problems
may be amenable to reinforcement learning. A reinforcement learning agent could
be trained to provide treatment recommendations for physicians, acting as a
decision support tool. However, a number of difficulties arise when using RL
beyond benchmark environments, such as specifying the reward function, choosing
an appropriate state representation and evaluating the learned policy.
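The challenges the abstract names (reward specification, state representation, policy evaluation) can be made concrete with a minimal sketch. Everything below is illustrative: the state variables, reward weights, and terminal bonuses are hypothetical choices, not values from the paper.

```python
# Hypothetical sketch of a clinical reward function, one of the design
# challenges the paper raises. All names and weights are illustrative.

def treatment_reward(prev_state, next_state, terminal, survived):
    """Shaped reward for a hypothetical treatment MDP.

    A sparse outcome signal (survival) is combined with intermediate
    shaping that penalizes physiological deterioration between steps.
    """
    if terminal:
        return 100.0 if survived else -100.0
    # Negative of the change in an illness-severity score: improvement
    # (a falling score) yields positive reward.
    return -1.0 * (next_state["severity_score"] - prev_state["severity_score"])

# Severity improved from 4.0 to 3.0, so the shaped reward is +1.0.
r = treatment_reward({"severity_score": 4.0}, {"severity_score": 3.0},
                     terminal=False, survived=None)
```

Even this toy version exposes the difficulty the paper describes: the relative scale of the terminal bonus versus the shaping term implicitly encodes a clinical value judgment.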
Related papers
- GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning [50.94508930739623]
Medical visual question answering aims to support clinical decision-making by enabling models to answer natural language questions based on medical images.
Current methods still suffer from limited answer reliability and poor interpretability, impairing the ability of clinicians and patients to understand and trust model-generated answers.
This work first proposes a Thinking with Visual Grounding dataset wherein the answer generation is decomposed into intermediate reasoning steps.
We introduce a novel verifiable reward mechanism for reinforcement learning to guide post-training, improving the alignment between the model's reasoning process and its final answer.
arXiv Detail & Related papers (2025-06-22T08:09:58Z) - Structured Outputs Enable General-Purpose LLMs to be Medical Experts [50.02627258858336]
Large language models (LLMs) often struggle with open-ended medical questions.
We propose a novel approach utilizing structured medical reasoning.
Our approach achieves the highest Factuality Score of 85.8, surpassing fine-tuned models.
arXiv Detail & Related papers (2025-03-05T05:24:55Z) - Pruning the Path to Optimal Care: Identifying Systematically Suboptimal Medical Decision-Making with Inverse Reinforcement Learning [14.688842697886484]
We present a novel application of Inverse Reinforcement Learning that identifies suboptimal clinician actions based on the actions of their peers.
This approach centers two stages of IRL with an intermediate step to prune trajectories displaying behavior that deviates significantly from the consensus.
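The intermediate pruning step described above can be sketched as a simple consensus filter; the disagreement threshold and the trajectory data here are hypothetical, and the paper's actual two-stage IRL procedure is not reproduced.

```python
# Illustrative sketch of pruning trajectories that deviate from peer
# consensus. Threshold and data are made up for demonstration.

from collections import Counter

def consensus_actions(trajectories):
    """Majority action per state across all clinicians' trajectories."""
    votes = {}
    for traj in trajectories:
        for state, action in traj:
            votes.setdefault(state, Counter())[action] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

def prune(trajectories, max_deviation=0.3):
    """Keep trajectories that agree with the consensus often enough."""
    consensus = consensus_actions(trajectories)
    kept = []
    for traj in trajectories:
        disagreements = sum(1 for s, a in traj if consensus[s] != a)
        if disagreements / len(traj) <= max_deviation:
            kept.append(traj)
    return kept

trajs = [
    [("s1", "drugA"), ("s2", "drugB")],
    [("s1", "drugA"), ("s2", "drugB")],
    [("s1", "drugC"), ("s2", "drugC")],  # deviates from the consensus
]
# prune(trajs) drops the third trajectory before the second IRL stage.
```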
arXiv Detail & Related papers (2024-11-07T23:16:59Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
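The contrast between the two pipelines can be shown with a toy two-option problem; the cost model, features, and decision rule below are invented for illustration and do not come from either paper.

```python
# Toy contrast: predict-then-optimize first predicts the unknown costs
# and then runs a solver, while the "by proxy" idea trains a model to
# map features straight to the optimal decision. All numbers are made up.

def predict_costs(features):
    # Stand-in for a learned regression model: cost of each of two options.
    return [2.0 * features[0], 3.0 - features[1]]

def solve(costs):
    # The optimization step: choose the cheaper option.
    return min(range(len(costs)), key=lambda i: costs[i])

def predict_then_optimize(features):
    return solve(predict_costs(features))

def joint_model(features):
    # Stand-in for a model trained to emit the optimal solution directly,
    # with no explicit solver (and hence no backpropagation through one).
    return 0 if 2.0 * features[0] <= 3.0 - features[1] else 1

features = [1.0, 2.0]
# Predicted costs are [2.0, 1.0], so both pipelines pick option 1.
```

The efficiency argument in the abstract corresponds to dropping `solve` (and the handcrafted rules needed to differentiate through it) at training and inference time.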
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Safe and Interpretable Estimation of Optimal Treatment Regimes [54.257304443780434]
We operationalize a safe and interpretable framework to identify optimal treatment regimes.
Our findings support personalized treatment strategies based on a patient's medical history and pharmacological features.
arXiv Detail & Related papers (2023-10-23T19:59:10Z) - Optimal and Fair Encouragement Policy Evaluation and Learning [11.712023983596914]
We study causal identification and robust estimation of optimal treatment rules, including under potential violations of positivity.
We develop a two-stage algorithm for solving over parametrized policy classes under general constraints to obtain variance-sensitive regret bounds.
We illustrate the methods in three case studies based on data from reminders of SNAP benefits, randomized encouragement to enroll in insurance, and pretrial supervised release with electronic monitoring.
arXiv Detail & Related papers (2023-09-12T20:45:30Z) - Learning Optimal Treatment Strategies for Sepsis Using Offline Reinforcement Learning in Continuous Space [4.031538204818658]
We propose a new medical decision model based on historical data to help clinicians recommend the best reference option for real-time treatment.
Our model combines offline reinforcement learning with deep reinforcement learning to address the problem that, in healthcare, a traditional reinforcement learning agent cannot interact with the environment during training.
arXiv Detail & Related papers (2022-06-22T16:17:21Z) - A Conservative Q-Learning approach for handling distribution shift in sepsis treatment strategies [0.0]
There is no consensus on what interventions work best and different patients respond very differently to the same treatment.
Deep Reinforcement Learning methods can be used to come up with optimal policies for treatment strategies mirroring physician actions.
The policy learned could help clinicians in Intensive Care Units to make better decisions while treating septic patients and improve survival rate.
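The conservative mechanism behind this entry can be sketched in a few lines: Conservative Q-Learning (CQL) adds a penalty that pushes down Q-values of actions absent from the offline data. The Q-values and action choice below are illustrative; a real agent would learn them from patient records.

```python
# Minimal sketch of the CQL conservative penalty used to handle
# distribution shift in offline data. Values are illustrative only.

import math

def cql_penalty(q_values, data_action):
    """logsumexp over all actions minus Q of the action seen in the data.

    Minimizing this term (added to the usual Bellman loss) shrinks
    Q-values of interventions the clinicians in the dataset never chose,
    keeping the learned policy conservative.
    """
    logsumexp = math.log(sum(math.exp(q) for q in q_values))
    return logsumexp - q_values[data_action]

# Q-values for three candidate interventions in some patient state;
# the clinician in the dataset chose action 0, but an unseen action
# (index 2) currently looks best to the critic.
penalty = cql_penalty([1.0, 1.0, 5.0], data_action=0)
# The penalty is large, so training pressure shrinks the unseen action's Q.
```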
arXiv Detail & Related papers (2022-03-25T19:50:18Z) - Optimal discharge of patients from intensive care via a data-driven policy learning framework [58.720142291102135]
It is important that the patient discharge task addresses the nuanced trade-off between decreasing a patient's length of stay and the risk of readmission or even death following the discharge decision.
This work introduces an end-to-end general framework for capturing this trade-off to recommend optimal discharge timing decisions.
A data-driven approach is used to derive a parsimonious, discrete state space representation that captures a patient's physiological condition.
arXiv Detail & Related papers (2021-12-17T04:39:33Z) - Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z) - Near-optimal Individualized Treatment Recommendations [9.585155938486048]
Individualized treatment recommendation (ITR) is an important analytic framework for precision medicine.
We propose two methods to estimate the optimal A-ITR within the outcome weighted learning (OWL) framework.
We show the consistency of these methods and obtain an upper bound for the risk between the theoretically optimal recommendation and the estimated one.
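The core of the outcome weighted learning (OWL) framework mentioned above is recasting treatment recommendation as weighted classification, where each patient is weighted by outcome over treatment propensity. The value estimator and data below are a hypothetical sketch, not the paper's estimators.

```python
# Illustrative sketch of the OWL idea: evaluate a treatment rule d(x)
# by inverse-propensity weighting. All records below are made up.

def owl_value(rule, data):
    """Inverse-propensity-weighted value of a treatment rule.

    Each record is (features, treatment_received, propensity, outcome);
    higher outcomes are better. Only patients whose observed treatment
    matches the rule's recommendation contribute.
    """
    total = 0.0
    for x, a, propensity, y in data:
        if rule(x) == a:
            total += y / propensity
    return total / len(data)

data = [
    ([0.2], 1, 0.5, 3.0),  # responded well to treatment 1
    ([0.8], 0, 0.5, 1.0),  # modest outcome without treatment
]
always_treat = lambda x: 1
value = owl_value(always_treat, data)  # only the first record matches
```

Maximizing this value over a class of rules is equivalent to a weighted classification problem, which is what makes the OWL framework computationally tractable.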
arXiv Detail & Related papers (2020-04-06T15:59:33Z) - Opportunities of a Machine Learning-based Decision Support System for Stroke Rehabilitation Assessment [64.52563354823711]
Rehabilitation assessment is critical to determine an adequate intervention for a patient.
Current practices of assessment mainly rely on therapist's experience, and assessment is infrequently executed due to the limited availability of a therapist.
We developed an intelligent decision support system that can identify salient features of assessment using reinforcement learning.
arXiv Detail & Related papers (2020-02-27T17:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.