From algorithms to action: improving patient care requires causality
- URL: http://arxiv.org/abs/2209.07397v2
- Date: Mon, 1 Apr 2024 19:28:12 GMT
- Title: From algorithms to action: improving patient care requires causality
- Authors: Wouter A. C. van Amsterdam, Pim A. de Jong, Joost J. C. Verhoeff, Tim Leiner, Rajesh Ranganath
- Abstract summary: Most outcome prediction models are developed and validated without regard to the causal aspects of treatment decision making.
Guidelines on prediction model validation and the checklist for risk model endorsement by the American Joint Committee on Cancer do not protect against prediction models that are accurate during development and validation but harmful when used for decision making.
- Score: 18.154976419582873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In cancer research there is much interest in building and validating outcome prediction models to support treatment decisions. However, because most outcome prediction models are developed and validated without regard to the causal aspects of treatment decision making, many published outcome prediction models may cause harm when used for decision making, despite being found accurate in validation studies. Guidelines on prediction model validation and the checklist for risk model endorsement by the American Joint Committee on Cancer do not protect against prediction models that are accurate during development and validation but harmful when used for decision making. We explain why this is the case and how to build and validate models that are useful for decision making.
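The abstract's central point can be made concrete with a minimal simulation. Everything below is an illustrative assumption, not taken from the paper: a hypothetical severity variable, a historical policy that treats severe patients, and a treatment that sharply lowers their risk. A prognostic model fit to the resulting observational data validates well, yet it scores treated severe patients as low risk precisely because treatment already helped them, so using it to deprioritize treatment would harm the patients who benefit most.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed data-generating process (for illustration only): severity x
# raises mortality risk; the historical policy treats severe patients,
# and treatment cuts their risk sharply.
x = rng.uniform(0, 1, n)                      # disease severity
treated = (x > 0.5).astype(float)             # historical treatment policy
p_death = 0.1 + 0.6 * x - 0.5 * x * treated   # treatment helps the severe
death = rng.binomial(1, np.clip(p_death, 0, 1))

# A "prognostic" model fit without regard to treatment: predicted risk as
# a function of severity alone, estimated from the observational data
# (here simply the observed death rate within each severity bin).
edges = np.linspace(0, 1, 11)
bins = np.digitize(x, edges)
risk_hat = np.array([death[bins == b].mean() for b in range(1, 11)])[bins - 1]

# The model validates well: predicted risk matches observed risk ...
calibration_gap = abs(risk_hat.mean() - death.mean())

# ... yet it calls treated severe patients LOW risk (their observed
# mortality is low BECAUSE they were treated), so "treat only patients
# the model calls high-risk" would withhold treatment from exactly the
# patients who benefit most.
print(f"calibration gap: {calibration_gap:.3f}")
print(f"mean predicted risk, severe (treated) patients: {risk_hat[x > 0.5].mean():.2f}")
print(f"their mean risk if treatment were withheld:     {(0.1 + 0.6 * x[x > 0.5]).mean():.2f}")
```

The model is accurate under the historical policy, but deploying it changes the policy, which changes the outcomes it was validated on.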
Related papers
- Domain constraints improve risk prediction when outcome data is missing [1.6840408099522377]
We show that a machine learning model can accurately estimate risk for both tested and untested patients.
We apply our model to a case study of cancer risk prediction, showing that the model's inferred risk predicts cancer diagnoses.
arXiv Detail & Related papers (2023-12-06T19:49:06Z) - When accurate prediction models yield harmful self-fulfilling prophecies [16.304160143287366]
We show that using prediction models for decision making can lead to harmful decisions.
Our main result is a formal characterization of a set of such prediction models.
These results point to the need to revise standard practices for validation, deployment and evaluation of prediction models.
arXiv Detail & Related papers (2023-12-02T19:39:50Z) - Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updates of models.
arXiv Detail & Related papers (2023-03-02T17:27:45Z) - Counterfactual Prediction Under Outcome Measurement Error [29.071173441651734]
We study intersectional threats to model reliability introduced by outcome measurement error, treatment effects, and selection bias from historical decision-making policies.
We develop an unbiased risk minimization method which corrects for the combined effects of these challenges.
arXiv Detail & Related papers (2023-02-22T03:34:19Z) - Deep learning methods for drug response prediction in cancer: predominant and emerging trends [50.281853616905416]
Exploiting computational predictive models to study and treat cancer holds great promise in improving drug development and personalized design of treatment plans.
A wave of recent papers demonstrates promising results in predicting cancer response to drug treatments while utilizing deep learning methods.
This review helps readers understand the current state of the field and identify major challenges and promising solution paths.
arXiv Detail & Related papers (2022-11-18T03:26:31Z) - What Do You See in this Patient? Behavioral Testing of Clinical NLP Models [69.09570726777817]
We introduce an extendable testing framework that evaluates the behavior of clinical outcome models regarding changes of the input.
We show that model behavior varies drastically even when fine-tuned on the same data and that allegedly best-performing models have not always learned the most medically plausible patterns.
arXiv Detail & Related papers (2021-11-30T15:52:04Z) - Learning to Predict with Supporting Evidence: Applications to Clinical Risk Prediction [9.199022926064009]
The impact of machine learning models on healthcare will depend on the degree of trust that healthcare professionals place in the predictions made by these models.
We present a method to provide people with clinical expertise with domain-relevant evidence about why a prediction should be trusted.
arXiv Detail & Related papers (2021-03-04T00:26:32Z) - A scoping review of causal methods enabling predictions under hypothetical interventions [4.801185839732629]
When prediction models are used to support decision making, there is often a need for predicting outcomes under hypothetical interventions.
We systematically reviewed literature published by December 2019, considering papers in the health domain that used causal considerations to enable prediction models to be used for predictions under hypothetical interventions.
There exist two broad methodological approaches for incorporating predictions under hypothetical interventions into clinical prediction models.
arXiv Detail & Related papers (2020-11-19T13:36:26Z) - When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z) - UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced
Data [81.00385374948125]
We present UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic steatohepatitis (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection, up to 0.609 in PR-AUC for NASH detection, and outperforms the best state-of-the-art baseline by up to 19%.
arXiv Detail & Related papers (2020-10-22T02:28:11Z) - Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.