Exploiting Uncertainties from Ensemble Learners to Improve
Decision-Making in Healthcare AI
- URL: http://arxiv.org/abs/2007.06063v1
- Date: Sun, 12 Jul 2020 18:33:09 GMT
- Title: Exploiting Uncertainties from Ensemble Learners to Improve
Decision-Making in Healthcare AI
- Authors: Yingshui Tan, Baihong Jin, Xiangyu Yue, Yuxin Chen, Alberto
Sangiovanni Vincentelli
- Abstract summary: Ensemble learning is widely applied in Machine Learning (ML) to improve model performance and to mitigate decision risks.
We show that the ensemble mean is preferable to the ensemble variance as an uncertainty metric for decision making.
- Score: 13.890527275215284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensemble learning is widely applied in Machine Learning (ML) to improve model
performance and to mitigate decision risks. In this approach, predictions from
a diverse set of learners are combined to obtain a joint decision. Recently,
various methods have been explored in the literature for estimating decision
uncertainties using ensemble learning; however, determining which metrics are
better suited to certain decision-making applications remains a challenging task.
In this paper, we study the following key research question in the selection of
uncertainty metrics: when does an uncertainty metric outperform another? We
answer this question via a rigorous analysis of two commonly used uncertainty
metrics in ensemble learning, namely ensemble mean and ensemble variance. We
show that, under mild assumptions on the ensemble learners, the ensemble mean
is preferable to the ensemble variance as an uncertainty metric for
decision making. We empirically validate our assumptions and theoretical
results via an extensive case study: the diagnosis of referable diabetic
retinopathy.
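As an illustrative sketch of the two metrics compared in the abstract, the snippet below computes the ensemble mean and ensemble variance over hypothetical predicted probabilities of referable diabetic retinopathy. The patient numbers, the 0.5 decision boundary, and the proximity-based mean uncertainty are assumptions for illustration, not the paper's actual experimental setup.

```python
from statistics import mean, pvariance

# Hypothetical predicted probabilities of referable diabetic
# retinopathy from a 3-member ensemble, for 3 patients.
member_preds = [
    [0.92, 0.55, 0.08],  # ensemble member 1, three patients
    [0.88, 0.45, 0.12],  # ensemble member 2
    [0.95, 0.60, 0.05],  # ensemble member 3
]

# Group the predictions per patient.
per_patient = list(zip(*member_preds))

ensemble_mean = [mean(p) for p in per_patient]
ensemble_var = [pvariance(p) for p in per_patient]

# Mean-based uncertainty: proximity of the mean probability to the
# 0.5 decision boundary (1.0 = on the boundary, 0.0 = fully certain).
mean_uncertainty = [1.0 - 2.0 * abs(m - 0.5) for m in ensemble_mean]

# Index of the most uncertain patient under each metric.
most_uncertain_by_mean = max(range(3), key=lambda i: mean_uncertainty[i])
most_uncertain_by_var = max(range(3), key=lambda i: ensemble_var[i])
```

In this toy example both metrics flag the borderline patient (mean probability near 0.5) for review; the paper's analysis concerns regimes where the two metrics disagree.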
Related papers
- Decision-Focused Uncertainty Quantification [32.93992587758183]
We develop a framework based on conformal prediction to produce prediction sets that account for a downstream decision loss function.
We present a real-world use case in healthcare diagnosis, where our method effectively incorporates the hierarchical structure of dermatological diseases.
arXiv Detail & Related papers (2024-10-02T17:22:09Z) - Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z) - Empirical Validation of Conformal Prediction for Trustworthy Skin Lesions Classification [3.7305040207339286]
We apply Conformal Prediction, Monte Carlo Dropout, and Evidential Deep Learning approaches to assess uncertainty quantification in deep neural networks.
Results: The experiments demonstrate a significant enhancement in uncertainty quantification when the Conformal Prediction method is used.
Our conclusion highlights a robust and consistent performance of conformal prediction across diverse testing conditions.
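The split-conformal idea summarized in the entry above can be sketched as follows; the calibration scores, miscoverage level, and lesion class probabilities are made-up illustrative values, not the paper's data.

```python
import math

# Toy split-conformal sketch: calibration nonconformity scores are
# 1 - p(true class) from some hypothetical classifier.
calib_scores = [0.05, 0.10, 0.20, 0.30, 0.45, 0.50, 0.70, 0.75, 0.80, 0.90]
alpha = 0.2  # target miscoverage rate

n = len(calib_scores)
# Finite-sample-corrected conformal quantile.
k = math.ceil((n + 1) * (1 - alpha))
qhat = sorted(calib_scores)[min(k, n) - 1]

# Prediction set for a new lesion: keep every class whose
# nonconformity score 1 - p(class) is at or below the threshold.
test_probs = {"melanoma": 0.55, "nevus": 0.35, "keratosis": 0.10}
pred_set = {c for c, p in test_probs.items() if 1 - p <= qhat}
```

The resulting set may contain several classes; its size acts as the uncertainty signal, with coverage roughly 1 - alpha guaranteed under exchangeability.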
arXiv Detail & Related papers (2023-12-12T17:37:16Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the
Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
arXiv Detail & Related papers (2022-06-27T06:20:37Z) - A Comparative Study of Faithfulness Metrics for Model Interpretability
Methods [3.7200349581269996]
We introduce two assessment dimensions, namely diagnosticity and time complexity.
According to the experimental results, we find that the sufficiency and comprehensiveness metrics have higher diagnosticity and lower time complexity than the other faithfulness metrics.
arXiv Detail & Related papers (2022-04-12T04:02:17Z) - The Statistical Complexity of Interactive Decision Making [126.04974881555094]
We provide a complexity measure, the Decision-Estimation Coefficient, that is proven to be both necessary and sufficient for sample-efficient interactive learning.
A unified algorithm design principle, Estimation-to-Decisions (E2D), transforms any algorithm for supervised estimation into an online algorithm for decision making.
arXiv Detail & Related papers (2021-12-27T02:53:44Z) - Ensemble-based Uncertainty Quantification: Bayesian versus Credal
Inference [0.0]
We consider ensemble-based approaches to uncertainty quantification.
We specifically focus on Bayesian methods and approaches based on so-called credal sets.
The effectiveness of corresponding measures is evaluated and compared in an empirical study on classification with a reject option.
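A minimal sketch of classification with a reject option, as evaluated in the entry above: abstain when an ensemble uncertainty measure exceeds a threshold. The variance threshold and the probabilities are hypothetical choices, not the paper's actual measures.

```python
# Reject-option rule: abstain when the ensemble's variance exceeds a
# threshold, otherwise predict by thresholding the ensemble mean at 0.5.
def decide(member_probs, var_threshold=0.02):
    m = sum(member_probs) / len(member_probs)
    var = sum((p - m) ** 2 for p in member_probs) / len(member_probs)
    if var > var_threshold:
        return "reject"  # too much disagreement among members
    return 1 if m >= 0.5 else 0

# Agreeing members yield a confident prediction; disagreeing
# members trigger a rejection.
confident = decide([0.90, 0.85, 0.95])
abstain = decide([0.90, 0.10, 0.50])
```

Bayesian and credal approaches differ in which uncertainty measure replaces the variance here; the rejection mechanism itself is the same.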
arXiv Detail & Related papers (2021-07-21T22:47:24Z) - Proximal Learning for Individualized Treatment Regimes Under Unmeasured
Confounding [3.020737957610002]
We develop approaches to estimating optimal individualized treatment regimes (ITRs) in the presence of unmeasured confounding.
Based on these results, we propose several classification-based approaches to finding a variety of restricted in-class optimal ITRs.
arXiv Detail & Related papers (2021-05-03T21:49:49Z) - DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of the out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
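The DEUP-style decomposition described above can be sketched with made-up numbers: a secondary error predictor estimates the main model's total out-of-sample error, an aleatoric (irreducible noise) estimate is subtracted, and the remainder is treated as epistemic uncertainty. All values below are hypothetical placeholders for the two learned estimators.

```python
# Hypothetical outputs of a learned error predictor and a learned
# aleatoric-noise estimator, for three inputs.
predicted_total_error = [0.30, 0.12, 0.45]
aleatoric_estimate = [0.10, 0.10, 0.10]

# Epistemic uncertainty = predicted total error minus aleatoric part,
# clipped at zero since uncertainty cannot be negative.
epistemic = [max(t - a, 0.0)
             for t, a in zip(predicted_total_error, aleatoric_estimate)]
```

Inputs with high epistemic but low aleatoric uncertainty are the ones where gathering more data or model capacity could actually help.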
arXiv Detail & Related papers (2021-02-16T23:50:35Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.