Never mind the metrics -- what about the uncertainty? Visualising confusion matrix metric distributions
- URL: http://arxiv.org/abs/2206.02157v1
- Date: Sun, 5 Jun 2022 11:54:59 GMT
- Title: Never mind the metrics -- what about the uncertainty? Visualising confusion matrix metric distributions
- Authors: David Lovell, Dimity Miller, Jaiden Capra and Andrew Bradley
- Abstract summary: This paper strives for a more balanced perspective on classifier performance metrics by highlighting their distributions under different models of uncertainty.
We develop equations, animations and interactive visualisations of the contours of performance metrics within (and beyond) this ROC space.
Our hope is that these insights and visualisations will raise greater awareness of the substantial uncertainty in performance metric estimates.
- Score: 6.566615606042994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There are strong incentives to build models that demonstrate outstanding
predictive performance on various datasets and benchmarks. We believe these
incentives risk a narrow focus on models and on the performance metrics used to
evaluate and compare them -- resulting in a growing body of literature to
evaluate and compare metrics. This paper strives for a more balanced
perspective on classifier performance metrics by highlighting their
distributions under different models of uncertainty and showing how this
uncertainty can easily eclipse differences in the empirical performance of
classifiers. We begin by emphasising the fundamentally discrete nature of
empirical confusion matrices and show how binary matrices can be meaningfully
represented in a three-dimensional compositional lattice, whose cross-sections
form the basis of the space of receiver operating characteristic (ROC) curves.
We develop equations, animations and interactive visualisations of the contours
of performance metrics within (and beyond) this ROC space, showing how some are
affected by class imbalance. We provide interactive visualisations that show
the discrete posterior predictive probability mass functions of true and false
positive rates in ROC space, and how these relate to uncertainty in performance
metrics such as Balanced Accuracy (BA) and the Matthews Correlation Coefficient
(MCC). Our hope is that these insights and visualisations will raise greater
awareness of the substantial uncertainty in performance metric estimates that
can arise when classifiers are evaluated on empirical datasets and benchmarks,
and that classification model performance claims should be tempered by this
understanding.
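To make the abstract's central point concrete, here is a minimal sketch of the kind of posterior uncertainty it describes: a flat Dirichlet posterior is placed over the four cells of a binary confusion matrix and samples are propagated through Balanced Accuracy and the Matthews Correlation Coefficient. The prior, the counts and the sampling approach are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def metric_posteriors(tp, fn, fp, tn, n_samples=100_000, seed=0):
    """Sample Balanced Accuracy (BA) and the Matthews Correlation
    Coefficient (MCC) under a Dirichlet posterior over the four
    confusion-matrix cells. The flat Dirichlet(1,1,1,1) prior is an
    illustrative assumption, not the paper's exact model."""
    rng = np.random.default_rng(seed)
    # Posterior samples of the joint cell probabilities (p_tp, p_fn, p_fp, p_tn).
    cells = rng.dirichlet([tp + 1, fn + 1, fp + 1, tn + 1], size=n_samples)
    p_tp, p_fn, p_fp, p_tn = cells.T
    tpr = p_tp / (p_tp + p_fn)   # true positive rate (sensitivity)
    tnr = p_tn / (p_tn + p_fp)   # true negative rate (specificity)
    ba = (tpr + tnr) / 2
    # MCC is scale-invariant, so cell probabilities work in place of counts.
    mcc = (p_tp * p_tn - p_fp * p_fn) / np.sqrt(
        (p_tp + p_fp) * (p_tp + p_fn) * (p_tn + p_fp) * (p_tn + p_fn))
    return ba, mcc

# Two classifiers whose empirical Balanced Accuracies differ by five points.
for name, counts in [("A", dict(tp=40, fn=10, fp=15, tn=35)),
                     ("B", dict(tp=44, fn=6, fp=14, tn=36))]:
    ba, mcc = metric_posteriors(**counts)
    lo, hi = np.percentile(ba, [2.5, 97.5])
    print(f"classifier {name}: BA {ba.mean():.3f} "
          f"(95% interval {lo:.3f}-{hi:.3f}), MCC {mcc.mean():.3f}")
```

On these made-up counts the 95% intervals overlap substantially, so a five-point gap in empirical BA says little about which classifier is genuinely better -- which is exactly the caution the abstract urges.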
Related papers
- Analyzing Generative Models by Manifold Entropic Metrics [8.477943884416023]
We introduce a novel set of tractable information-theoretic evaluation metrics.
We compare various normalizing flow architectures and $\beta$-VAEs on the EMNIST dataset.
The most interesting finding of our experiments is a ranking of model architectures and training procedures in terms of their inductive bias to converge to aligned and disentangled representations during training.
arXiv Detail & Related papers (2024-10-25T09:35:00Z)
- Measuring Orthogonality in Representations of Generative Models [81.13466637365553]
In unsupervised representation learning, models aim to distill essential features from high-dimensional data into lower-dimensional learned representations.
Disentanglement of independent generative processes has long been credited with producing high-quality representations.
We propose two novel metrics: Importance-Weighted Orthogonality (IWO) and Importance-Weighted Rank (IWR).
arXiv Detail & Related papers (2024-07-04T08:21:54Z)
- Exploiting Observation Bias to Improve Matrix Completion [16.57405742112833]
We consider a variant of matrix completion where entries are revealed in a biased manner.
The goal is to exploit the shared information between the bias and the outcome of interest to improve predictions.
We find that the estimates from this two-stage algorithm achieve a 30x smaller mean squared error than traditional matrix completion methods.
arXiv Detail & Related papers (2023-06-07T20:48:35Z)
- Evaluating Probabilistic Classifiers: The Triptych [62.997667081978825]
We propose and study a triptych of diagnostic graphics that focus on distinct and complementary aspects of forecast performance.
The reliability diagram addresses calibration, the receiver operating characteristic (ROC) curve diagnoses discrimination ability, and the Murphy diagram visualizes overall predictive performance and value.
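As a rough illustration of the calibration panel of that triptych, the sketch below builds a simple binned reliability diagram, comparing each bin's mean predicted probability with its observed event frequency. Equal-width binning and the synthetic forecaster are assumptions for illustration; the paper's own construction may differ.

```python
import numpy as np

def reliability_curve(p_pred, y_true, n_bins=10):
    """Binned reliability diagram data: mean predicted probability vs.
    observed frequency in each bin. Equal-width binning is an assumed
    simplification; the paper's construction may differ."""
    p_pred, y_true = np.asarray(p_pred), np.asarray(y_true)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(p_pred, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rows.append((p_pred[mask].mean(), y_true[mask].mean(), mask.sum()))
    return rows  # (mean prediction, observed frequency, count) per bin

rng = np.random.default_rng(1)
p = rng.uniform(size=5000)
y = rng.uniform(size=5000) < p ** 1.3   # a deliberately miscalibrated forecaster
for pred, freq, n in reliability_curve(p, y):
    print(f"pred {pred:.2f}  observed {freq:.2f}  (n={n})")
```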
arXiv Detail & Related papers (2023-01-25T19:35:23Z)
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that, with tuned hyperparameters, performance as measured by the marginal likelihood improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
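For reference, the quantity tracked in that result is the standard Gaussian-process log marginal likelihood; below is a minimal sketch of its computation with an assumed RBF kernel and placeholder hyperparameters, not the paper's experimental setup.

```python
import numpy as np

def gp_log_marginal_likelihood(X, y, lengthscale=1.0, noise=0.1):
    """log p(y | X) for a zero-mean GP with an RBF kernel.
    Hyperparameter values are placeholders, not tuned as in the paper."""
    n = len(y)
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq_dists / lengthscale ** 2) + noise ** 2 * np.eye(n)
    L = np.linalg.cholesky(K)                     # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log p = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2*pi)
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(2)
for d in (1, 5, 20):                              # growing input dimension
    X = rng.normal(size=(50, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
    print(f"d={d:2d}  log ML = {gp_log_marginal_likelihood(X, y):.2f}")
```

With fixed hyperparameters this value will generally not improve with dimension; the paper's monotone improvement concerns the marginal likelihood after tuning, e.g. by maximising exactly this quantity over the lengthscale and noise.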
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Uncertainty in Contrastive Learning: On the Predictability of Downstream Performance [7.411571833582691]
We study whether the uncertainty of such a representation can be quantified for a single datapoint in a meaningful way.
We show that this goal can be achieved by directly estimating the distribution of the training data in the embedding space.
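A hedged sketch of that idea: fit a density estimate to the training embeddings and read off the estimated density at a test embedding, so low-density points are flagged as uncertain. The Gaussian kernel density estimator and the synthetic embeddings are assumptions for illustration; the paper's estimator may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
train_emb = rng.normal(size=(500, 8))        # stand-in for learned embeddings
kde = gaussian_kde(train_emb.T)              # density over embedding space

typical = rng.normal(size=(1, 8))            # resembles the training data
outlier = rng.normal(loc=6.0, size=(1, 8))   # far from the training data
print("density(typical):", kde(typical.T)[0])
print("density(outlier):", kde(outlier.T)[0])
```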
arXiv Detail & Related papers (2022-07-19T15:44:59Z)
- Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can also be used to update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z)
- Doing Great at Estimating CATE? On the Neglected Assumptions in Benchmark Comparisons of Treatment Effect Estimators [91.3755431537592]
We show that even in arguably the simplest setting, estimation under ignorability assumptions can be misleading.
We consider two popular machine learning benchmark datasets for evaluation of heterogeneous treatment effect estimators.
We highlight that the inherent characteristics of the benchmark datasets favor some algorithms over others.
arXiv Detail & Related papers (2021-07-28T13:21:27Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS.
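That correlation can be checked directly from paired accuracies. The sketch below uses made-up numbers for six hypothetical models; the probit-style axis scaling is the transformation the paper applies to make the trend more linear.

```python
import numpy as np
from scipy.stats import pearsonr, norm

# Hypothetical (in-distribution, out-of-distribution) accuracies for six models.
acc_id = np.array([0.90, 0.93, 0.95, 0.96, 0.97, 0.98])
acc_ood = np.array([0.62, 0.68, 0.74, 0.78, 0.81, 0.85])

r_linear, _ = pearsonr(acc_id, acc_ood)
# Probit-scaled axes, as used in the paper, often make the trend more linear.
r_probit, _ = pearsonr(norm.ppf(acc_id), norm.ppf(acc_ood))
print(f"linear r = {r_linear:.3f}, probit-scaled r = {r_probit:.3f}")
```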
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
- Beyond Marginal Uncertainty: How Accurately can Bayesian Regression Models Estimate Posterior Predictive Correlations? [13.127549105535623]
It is often more useful to estimate predictive correlations between the function values at different input locations than marginal uncertainties alone.
We first consider a downstream task which depends on posterior predictive correlations: transductive active learning (TAL).
Since TAL is too expensive and indirect to guide development of algorithms, we introduce two metrics which more directly evaluate the predictive correlations.
arXiv Detail & Related papers (2020-11-06T03:48:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.