Updating Clinical Risk Stratification Models Using Rank-Based
Compatibility: Approaches for Evaluating and Optimizing Clinician-Model Team
Performance
- URL: http://arxiv.org/abs/2308.05619v1
- Date: Thu, 10 Aug 2023 15:08:13 GMT
- Title: Updating Clinical Risk Stratification Models Using Rank-Based
Compatibility: Approaches for Evaluating and Optimizing Clinician-Model Team
Performance
- Authors: Erkin Ötleş, Brian T. Denton, Jenna Wiens
- Abstract summary: We propose a rank-based compatibility measure, $C^R$, and a new loss function that aims to optimize discriminative performance while encouraging good compatibility.
This work provides new tools to analyze and update risk stratification models used in clinical care.
- Score: 11.31203699519559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As data shift or new data become available, updating clinical machine
learning models may be necessary to maintain or improve performance over time.
However, updating a model can introduce compatibility issues when the behavior
of the updated model does not align with user expectations, resulting in poor
user-model team performance. Existing compatibility measures depend on model
decision thresholds, limiting their applicability in settings where models are
used to generate rankings based on estimated risk. To address this limitation,
we propose a novel rank-based compatibility measure, $C^R$, and a new loss
function that aims to optimize discriminative performance while encouraging
good compatibility. Applied to a case study in mortality risk stratification
leveraging data from MIMIC, our approach yields more compatible models while
maintaining discriminative performance compared to existing model selection
techniques, with an increase in $C^R$ of $0.019$ ($95\%$ confidence interval:
$0.005$, $0.035$). This work provides new tools to analyze and update risk
stratification models used in clinical care.
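The abstract does not spell out the estimator, but a natural reading of a rank-based compatibility measure is pairwise: among the (positive, negative) patient pairs that the original model ranks correctly (the pairs counted by AUROC), $C^R$ is the fraction that the updated model also ranks correctly. The NumPy sketch below implements that reading; the function name, tie handling, and toy data are illustrative assumptions, not the paper's code.

import numpy as np

def rank_based_compatibility(y, scores_old, scores_new):
    """Fraction of (positive, negative) pairs ranked correctly by the
    original model that the updated model also ranks correctly.
    One plausible reading of the paper's C^R; ties count as incorrect."""
    y = np.asarray(y)
    s_old = np.asarray(scores_old, dtype=float)
    s_new = np.asarray(scores_new, dtype=float)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    # Pairwise comparisons over all (positive, negative) pairs, as in AUROC.
    old_ok = s_old[pos][:, None] > s_old[neg][None, :]
    new_ok = s_new[pos][:, None] > s_new[neg][None, :]
    if not old_ok.any():
        return float("nan")  # the original model ranks no pair correctly
    return float((old_ok & new_ok).sum() / old_ok.sum())

# Toy illustration: an update that mostly preserves the original ranking.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
s_old = 0.8 * y + rng.normal(scale=1.0, size=500)
s_new = s_old + rng.normal(scale=0.2, size=500)  # small perturbation
print(rank_based_compatibility(y, s_old, s_new))  # close to 1.0

Under this reading, $C^R = 1$ means the update preserves every pair the original model got right, and the compatibility-encouraging loss in the abstract could plausibly combine a standard discriminative loss with a differentiable (e.g., sigmoid) relaxation of these pairwise indicators; the paper gives the exact form.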
Related papers
- Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences [6.067007470552307]
We propose a methodology for finding sequences of machine learning models that are stable across retraining iterations.
We develop a mixed-integer optimization formulation that is guaranteed to recover optimal models.
Our method shows stronger stability than greedily trained models with a small, controllable sacrifice in predictive power.
arXiv Detail & Related papers (2024-03-28T22:45:38Z)
- Deep autoregressive density nets vs neural ensembles for model-based offline reinforcement learning [2.9158689853305693]
We consider a model-based reinforcement learning algorithm that infers the system dynamics from the available data and performs policy optimization on imaginary model rollouts.
This approach is vulnerable to exploiting model errors which can lead to catastrophic failures on the real system.
We show that better performance can be obtained with a single well-calibrated autoregressive model on the D4RL benchmark.
arXiv Detail & Related papers (2024-02-05T10:18:15Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate, for example, that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Stable Training of Probabilistic Models Using the Leave-One-Out Maximum Log-Likelihood Objective [0.7373617024876725]
Kernel density estimation (KDE) based models are popular choices for density estimation, but they fail to adapt to data regions with varying densities.
An adaptive KDE model, in which each kernel has an individual bandwidth, circumvents this (see the leave-one-out sketch after this list).
A modified expectation-maximization algorithm is employed to reliably accelerate optimization.
arXiv Detail & Related papers (2023-10-05T14:08:42Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive settings such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee in model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- The effectiveness of factorization and similarity blending [0.0]
Collaborative Filtering (CF) is a technique that leverages past user-preference data to identify behavioural patterns and exploit them to predict custom recommendations.
We show that blending factorization-based and similarity-based approaches can lead to a significant error decrease (-9.4%) over stand-alone models.
We propose a novel extension of a similarity model, SCSR, which consistently reduces the complexity of the original algorithm.
arXiv Detail & Related papers (2022-09-16T13:11:27Z)
- Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic [67.00475077281212]
Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose a novel approach that achieves high sample efficiency without the strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
arXiv Detail & Related papers (2021-12-16T15:33:11Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- An Empirical Analysis of Backward Compatibility in Machine Learning Systems [47.04803977692586]
We consider how updates, intended to improve ML models, can introduce new errors that significantly affect downstream systems and users.
For example, updates to models used in cloud-based classification services, such as image recognition, can cause unexpected erroneous behavior; a sketch of one decision-level compatibility metric follows this list.
arXiv Detail & Related papers (2020-08-11T08:10:58Z)
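The backward-compatibility entry above works at the level of individual decisions rather than rankings. A common formulation in that line of work, sketched here as an assumption rather than the paper's exact metric, is backward trust compatibility: the share of examples the original model classified correctly that the updated model still classifies correctly.

import numpy as np

def backward_trust_compatibility(y, pred_old, pred_new):
    """Share of examples the old model got right that the new model also
    gets right; 1.0 means the update introduced no new errors on
    previously correct examples. A sketch, not the paper's exact metric."""
    y, p_old, p_new = map(np.asarray, (y, pred_old, pred_new))
    old_correct = p_old == y
    if not old_correct.any():
        return float("nan")
    return float((old_correct & (p_new == y)).sum() / old_correct.sum())

Unlike $C^R$, this depends on hard predictions and hence on a decision threshold, which is exactly the limitation the rank-based measure above is designed to avoid.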
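For the Stable Training of Probabilistic Models entry above, the named objective is concrete enough to sketch: the leave-one-out log-likelihood of an adaptive Gaussian KDE in which each kernel carries its own bandwidth. The 1-D version below illustrates the objective only, under assumed Gaussian kernels; the paper optimizes it with a modified expectation-maximization algorithm.

import numpy as np

def loo_log_likelihood(x, bandwidths):
    """Leave-one-out log-likelihood of a 1-D Gaussian KDE with a separate
    bandwidth h_j for each kernel j (an illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(bandwidths, dtype=float)
    n = x.size
    # k[i, j]: density contributed at x[i] by the kernel centred on x[j].
    z = (x[:, None] - x[None, :]) / h[None, :]
    k = np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * h[None, :])
    np.fill_diagonal(k, 0.0)        # leave each point out of its own estimate
    dens = k.sum(axis=1) / (n - 1)  # LOO density at every x[i]
    return float(np.log(dens).sum())

# Per-kernel bandwidths let dense regions use narrower kernels.
x = np.random.default_rng(1).normal(size=200)
print(loo_log_likelihood(x, np.full(200, 0.3)))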