POETREE: Interpretable Policy Learning with Adaptive Decision Trees
- URL: http://arxiv.org/abs/2203.08057v1
- Date: Tue, 15 Mar 2022 16:50:52 GMT
- Title: POETREE: Interpretable Policy Learning with Adaptive Decision Trees
- Authors: Alizée Pace, Alex J. Chan, Mihaela van der Schaar
- Abstract summary: POETREE is a novel framework for interpretable policy learning.
It builds probabilistic tree policies determining physician actions based on patients' observations and medical history.
It outperforms the state-of-the-art on real and synthetic medical datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building models of human decision-making from observed behaviour is critical
to better understand, diagnose and support real-world policies such as clinical
care. As established policy learning approaches remain focused on imitation
performance, they fall short of explaining the demonstrated decision-making
process. Policy Extraction through decision Trees (POETREE) is a novel
framework for interpretable policy learning, compatible with fully-offline and
partially-observable clinical decision environments -- and builds probabilistic
tree policies determining physician actions based on patients' observations and
medical history. Fully-differentiable tree architectures are grown
incrementally during optimization to adapt their complexity to the modelling
task, and learn a representation of patient history through recurrence,
resulting in decision tree policies that adapt over time with patient
information. This policy learning method outperforms the state-of-the-art on
real and synthetic medical datasets, both in understanding, quantifying and
evaluating observed behaviour and in accurately replicating it -- with
potential to improve future decision support systems.
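The mechanism the abstract describes (a differentiable tree whose internal nodes route inputs probabilistically, whose leaves hold action distributions, and which acts on a recurrently encoded patient history) can be sketched in a few lines. This is a minimal numpy illustration of the general idea only, not POETREE's actual implementation: all class and parameter names are assumptions, and the incremental tree-growth procedure is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RecurrentHistoryEncoder:
    """Toy recurrence: fold a sequence of observations into one embedding."""
    def __init__(self, obs_dim, hidden_dim, rng=None):
        rng = np.random.default_rng(rng)
        self.U = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.V = rng.normal(scale=0.1, size=(hidden_dim, obs_dim))

    def encode(self, observations):
        h = np.zeros(self.U.shape[0])
        for x in observations:          # h_t = tanh(U h_{t-1} + V x_t)
            h = np.tanh(self.U @ h + self.V @ x)
        return h

class SoftTreePolicy:
    """Fixed-depth probabilistic (soft) decision tree.

    Each internal node routes right with probability sigmoid(w.x + b);
    each leaf holds an action distribution. The policy output is the
    mixture of leaf distributions weighted by the probability of reaching
    each leaf, so the whole tree is differentiable.
    """
    def __init__(self, depth, obs_dim, n_actions, rng=None):
        rng = np.random.default_rng(rng)
        self.depth = depth
        n_inner, n_leaves = 2 ** depth - 1, 2 ** depth
        self.w = rng.normal(scale=0.1, size=(n_inner, obs_dim))
        self.b = np.zeros(n_inner)
        self.leaf_logits = rng.normal(scale=0.1, size=(n_leaves, n_actions))

    def action_probs(self, x):
        n_leaves = 2 ** self.depth
        path_prob = np.ones(n_leaves)
        for leaf in range(n_leaves):
            node = 0                    # heap layout: children of i are 2i+1, 2i+2
            for d in range(self.depth):
                p_right = sigmoid(self.w[node] @ x + self.b[node])
                go_right = (leaf >> (self.depth - 1 - d)) & 1
                path_prob[leaf] *= p_right if go_right else (1.0 - p_right)
                node = 2 * node + 1 + go_right
        # Softmax each leaf's logits into an action distribution.
        leaf_dist = np.exp(self.leaf_logits)
        leaf_dist /= leaf_dist.sum(axis=1, keepdims=True)
        return path_prob @ leaf_dist    # mixture over leaves

# Usage: embed a two-step patient trajectory, then query the tree policy.
enc = RecurrentHistoryEncoder(obs_dim=4, hidden_dim=8, rng=0)
tree = SoftTreePolicy(depth=2, obs_dim=8, n_actions=3, rng=0)
h = enc.encode([np.ones(4), np.zeros(4)])
p = tree.action_probs(h)                # valid probability vector over 3 actions
```

Because the routing probabilities over a complete binary tree sum to one, the output is always a valid action distribution; growing the tree deeper during training (as the abstract describes) would add nodes to this same structure.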
Related papers
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Contextualized Policy Recovery: Modeling and Interpreting Medical Decisions with Adaptive Imitation Learning [39.093299601701474]
Interpretable policy learning seeks to estimate intelligible decision policies from observed actions.
Existing approaches are burdened by a tradeoff between interpretability and accuracy because they represent the underlying decision process as a single universal policy.
We develop Contextualized Policy Recovery (CPR), which re-frames the problem of modeling complex decision processes as a multi-task learning problem.
arXiv Detail & Related papers (2023-10-11T22:17:37Z)
- TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed for speeding up patient recruitment via automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z)
- Assisting clinical practice with fuzzy probabilistic decision trees [2.0999441362198907]
We propose FPT, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice.
We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a user-friendly interface specifically designed for this purpose.
arXiv Detail & Related papers (2023-04-16T14:05:16Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By treating the decision-maker as an online learner whose policy updates as new information arrives, we cast the policy inference problem as the inverse of this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- What Do You See in this Patient? Behavioral Testing of Clinical NLP Models [69.09570726777817]
We introduce an extendable testing framework that evaluates how clinical outcome models behave under changes to their input.
We show that model behavior varies drastically even when fine-tuned on the same data and that allegedly best-performing models have not always learned the most medically plausible patterns.
arXiv Detail & Related papers (2021-11-30T15:52:04Z)
- The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z)
- An Empirical Study of Representation Learning for Reinforcement Learning in Healthcare [19.50370829781689]
We use data from septic patients in the MIMIC-III dataset to form representations of a patient state.
We find that sequentially formed state representations facilitate effective policy learning in batch settings.
arXiv Detail & Related papers (2020-11-23T06:37:08Z)
- Optimizing Medical Treatment for Sepsis in Intensive Care: from Reinforcement Learning to Pre-Trial Evaluation [2.908482270923597]
Our aim is to establish a framework in which reinforcement learning (RL) is used to optimize interventions retrospectively, providing a regulatory-compliant pathway to prospective clinical testing of the learned policies.
We focus on infections in intensive care units, which are among the major causes of death and are difficult to treat because of complex and opaque patient dynamics.
arXiv Detail & Related papers (2020-03-13T20:31:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.