Evaluating Probabilistic Classifiers: The Triptych
- URL: http://arxiv.org/abs/2301.10803v1
- Date: Wed, 25 Jan 2023 19:35:23 GMT
- Title: Evaluating Probabilistic Classifiers: The Triptych
- Authors: Timo Dimitriadis, Tilmann Gneiting, Alexander I. Jordan, Peter Vogel
- Abstract summary: We propose and study a triptych of diagnostic graphics that focus on distinct and complementary aspects of forecast performance.
The reliability diagram addresses calibration, the receiver operating characteristic (ROC) curve diagnoses discrimination ability, and the Murphy diagram visualizes overall predictive performance and value.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probability forecasts for binary outcomes, often referred to as probabilistic
classifiers or confidence scores, are ubiquitous in science and society, and
methods for evaluating and comparing them are in great demand. We propose and
study a triptych of diagnostic graphics that focus on distinct and
complementary aspects of forecast performance: The reliability diagram
addresses calibration, the receiver operating characteristic (ROC) curve
diagnoses discrimination ability, and the Murphy diagram visualizes overall
predictive performance and value. A Murphy curve shows a forecast's mean
elementary scores, including the widely used misclassification rate, and the
area under a Murphy curve equals the mean Brier score. For a calibrated
forecast, the reliability curve lies on the diagonal, and for competing
calibrated forecasts, the ROC and Murphy curves share the same number of
crossing points. We invoke the recently developed CORP (Consistent, Optimally
binned, Reproducible, and Pool-Adjacent-Violators (PAV) algorithm based)
approach to craft reliability diagrams and decompose a mean score into
miscalibration (MCB), discrimination (DSC), and uncertainty (UNC) components.
Plots of the DSC measure of discrimination ability versus the calibration
metric MCB visualize classifier performance across multiple competitors. The
proposed tools are illustrated in empirical examples from astrophysics,
economics, and social science.
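The score decomposition and the Murphy-curve identity stated above can be reproduced in a few lines. The following is a minimal sketch, assuming the Brier score as the underlying scoring rule: `corp_decomposition` and `murphy_curve` are illustrative helper names rather than the authors' code, scikit-learn's isotonic regression stands in for the PAV step, and the factor 2 in the elementary score is the normalization under which the area under the Murphy curve equals the mean Brier score and the value at threshold 1/2 equals the misclassification rate.

```python
# Minimal sketch of the abstract's quantitative claims, assuming the Brier
# score as the underlying scoring rule. Helper names are illustrative, not
# the authors' code. sklearn's IsotonicRegression performs the PAV fit that
# CORP uses for recalibration.
import numpy as np
from sklearn.isotonic import IsotonicRegression


def corp_decomposition(p, y):
    """CORP decomposition of the mean Brier score: S = MCB - DSC + UNC."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    # PAV-recalibrated probabilities: isotonic regression of outcomes on forecasts.
    p_cal = IsotonicRegression(y_min=0.0, y_max=1.0).fit(p, y).predict(p)
    s = np.mean((p - y) ** 2)             # mean score of the forecast
    s_cal = np.mean((p_cal - y) ** 2)     # ... after PAV recalibration
    s_ref = np.mean((y.mean() - y) ** 2)  # ... of the constant climatology
    mcb = s - s_cal      # miscalibration: what recalibration removes
    dsc = s_ref - s_cal  # discrimination: improvement over climatology
    unc = s_ref          # uncertainty: inherent difficulty of the events
    assert np.isclose(s, mcb - dsc + unc)
    return mcb, dsc, unc


def murphy_curve(p, y, thetas):
    """Mean elementary score at each threshold theta.

    With the factor 2 used here, the score at theta = 1/2 is the 0/1
    misclassification indicator, and integrating over theta in (0, 1)
    recovers the Brier score (p - y)**2 case by case.
    """
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.array(
        [np.mean(2.0 * (1.0 * (y < t) - 1.0 * (p < t)) * (t - y)) for t in thetas]
    )


rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = (rng.uniform(size=10_000) < p**2).astype(float)  # deliberately miscalibrated
mcb, dsc, unc = corp_decomposition(p, y)

thetas = np.linspace(0.0005, 0.9995, 1_000)
curve = murphy_curve(p, y, thetas)
area = np.sum(0.5 * (curve[1:] + curve[:-1]) * np.diff(thetas))  # trapezoid rule
print(f"MCB={mcb:.4f}  DSC={dsc:.4f}  UNC={unc:.4f}")
print(f"area under Murphy curve {area:.4f} vs mean Brier {np.mean((p - y)**2):.4f}")
```

Plotting DSC against MCB for several competing classifiers then gives the MCB-DSC display mentioned in the abstract: UNC depends only on the outcomes, so classifiers differ by how little miscalibration and how much discrimination they achieve.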
Related papers
- The Certainty Ratio $C_\rho$: a novel metric for assessing the reliability of classifier predictions [0.0]
This paper introduces the Certainty Ratio ($C_\rho$), a novel metric designed to quantify the contribution of confident (certain) versus uncertain predictions to any classification performance measure.
Experimental results across 26 datasets and multiple classifiers, including Decision Trees, Naive Bayes, 3-Nearest Neighbors, and Random Forests, demonstrate that $C_\rho$ reveals critical insights that conventional metrics often overlook.
arXiv Detail & Related papers (2024-11-04T10:50:03Z)
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Probabilistic Scores of Classifiers, Calibration is not Enough [0.32985979395737786]
In binary classification tasks, accurate representation of probabilistic predictions is essential for various real-world applications.
In this study, we highlight approaches that prioritize the alignment between predicted scores and true probability distributions.
Our findings reveal limitations in traditional calibration metrics, which could undermine the reliability of predictive models for critical decision-making.
arXiv Detail & Related papers (2024-08-06T19:53:00Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module that uses the uncertainty-calibrated error metric to select reliable data; a subjective-logic sketch follows below.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
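As a point of reference for the subjective-logic modelling mentioned above, here is a hedged sketch of the standard evidential construction (per-class evidence inducing a Dirichlet with an explicit uncertainty mass); it illustrates the general technique, not DEviS itself, and `subjective_opinion` is a hypothetical helper name.

```python
# Standard subjective-logic quantities: evidence e_k gives Dirichlet
# parameters alpha_k = e_k + 1, class beliefs b_k = e_k / S, and an
# uncertainty mass u = K / S, where S = sum(alpha). Illustration only,
# not the DEviS model.
import numpy as np


def subjective_opinion(evidence):
    """Map non-negative per-class evidence to (beliefs, uncertainty, probs)."""
    evidence = np.asarray(evidence, float)
    k = evidence.size
    alpha = evidence + 1.0  # Dirichlet parameters
    s = alpha.sum()         # Dirichlet strength
    belief = evidence / s   # per-class belief mass
    u = k / s               # uncertainty mass; belief.sum() + u == 1
    prob = alpha / s        # expected class probabilities
    return belief, u, prob


# Strong evidence for class 0 -> low uncertainty; no evidence -> maximal u.
for e in ([40.0, 1.0, 1.0], [0.0, 0.0, 0.0]):
    b, u, p = subjective_opinion(e)
    print(np.round(b, 3), round(u, 3), np.round(p, 3))
```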
- Never mind the metrics -- what about the uncertainty? Visualising confusion matrix metric distributions [6.566615606042994]
This paper strives for a more balanced perspective on classifier performance metrics by highlighting their distributions under different models of uncertainty.
We develop equations, animations, and interactive visualisations of the contours of performance metrics within (and beyond) ROC space.
Our hope is that these insights and visualisations will raise greater awareness of the substantial uncertainty in performance metric estimates; a toy resampling sketch follows below.
arXiv Detail & Related papers (2022-06-05T11:54:59Z)
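To make the metric-uncertainty point concrete, the toy sketch below holds a confusion matrix fixed, treats its counts as one multinomial draw, and resamples to show how widely a headline metric such as F1 spreads. The resampling model and the helper names are assumptions for illustration, not the models of uncertainty developed in the paper.

```python
# Toy illustration of sampling uncertainty in a confusion-matrix metric:
# resample the counts as a multinomial and inspect the induced metric
# distribution. An assumed illustration, not the paper's method.
import numpy as np


def metric_distribution(tp, fp, fn, tn, metric, n_boot=10_000, seed=0):
    """Resample (tp, fp, fn, tn) as a multinomial and evaluate the metric."""
    counts = np.array([tp, fp, fn, tn], dtype=float)
    rng = np.random.default_rng(seed)
    draws = rng.multinomial(int(counts.sum()), counts / counts.sum(), size=n_boot)
    return np.array([metric(*d) for d in draws])


def f1_score(tp, fp, fn, tn):
    # tn is unused by F1; kept so all metrics share one signature.
    return 2.0 * tp / (2.0 * tp + fp + fn)


samples = metric_distribution(30, 10, 10, 50, f1_score)
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"F1 = {f1_score(30, 10, 10, 50):.2f}, 95% resampling interval [{lo:.2f}, {hi:.2f}]")
```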
- Random Noise vs State-of-the-Art Probabilistic Forecasting Methods: A Case Study on CRPS-Sum Discrimination Ability [4.9449660544238085]
We show that the statistical properties of target data affect the discrimination ability of CRPS-Sum.
We highlight that the CRPS-Sum calculation overlooks the model's performance on individual dimensions.
We show that a dummy model resembling random noise can easily achieve a better CRPS-Sum, as illustrated in the sketch below.
arXiv Detail & Related papers (2022-01-21T12:36:58Z)
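The CRPS-Sum pitfall is easy to reproduce. The sketch below uses the standard sample-based CRPS estimator E|X - y| - 0.5 E|X - X'| and the usual CRPS-Sum recipe of summing across dimensions before scoring; the two-dimensional example and the helper names are constructed for illustration, not taken from the paper.

```python
# Hedged sketch: CRPS-Sum scores only the across-dimension sum, so errors
# that cancel in the sum go unpenalized. Helper names are illustrative.
import numpy as np


def crps(samples, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|."""
    samples = np.asarray(samples, float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2


def crps_sum(samples, y):
    """Sum forecast draws and the observation across dimensions, then score."""
    return crps(samples.sum(axis=1), y.sum())


rng = np.random.default_rng(1)
y = np.array([1.0, -1.0])                    # truth: the coordinates cancel
good = rng.normal(y, 0.1, size=(500, 2))     # accurate in every dimension
dummy = rng.normal(0.0, 0.1, size=(500, 2))  # noise: wrong everywhere, right sum
print(crps_sum(good, y), crps_sum(dummy, y))            # both near zero
print(crps(good[:, 0], y[0]), crps(dummy[:, 0], y[0]))  # per-dimension gap shows
```

Because the dummy forecast's coordinate errors cancel in the sum, its CRPS-Sum matches the accurate forecast's, while the per-dimension CRPS exposes the difference.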
- Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and implemented via the pool-adjacent-violators (PAV) algorithm.
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.