From Classification Accuracy to Proper Scoring Rules: Elicitability of Probabilistic Top List Predictions
- URL: http://arxiv.org/abs/2301.11797v1
- Date: Fri, 27 Jan 2023 15:55:01 GMT
- Title: From Classification Accuracy to Proper Scoring Rules: Elicitability of Probabilistic Top List Predictions
- Authors: Johannes Resin
- Abstract summary: I propose a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions.
The proposed evaluation metrics are based on symmetric proper scoring rules and admit comparison of various types of predictions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the face of uncertainty, the need for probabilistic assessments has long
been recognized in the literature on forecasting. In classification, however,
comparative evaluation of classifiers often focuses on predictions specifying a
single class through the use of simple accuracy measures, which disregard any
probabilistic uncertainty quantification. I propose probabilistic top lists as
a novel type of prediction in classification, which bridges the gap between
single-class predictions and predictive distributions. The probabilistic top
list functional is elicitable through the use of strictly consistent evaluation
metrics. The proposed evaluation metrics are based on symmetric proper scoring
rules and admit comparison of various types of predictions ranging from
single-class point predictions to fully specified predictive distributions. The
Brier score yields a metric that is particularly well suited for this kind of
comparison.
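To illustrate how a symmetric proper scoring rule such as the Brier score can compare predictions of different granularity, the sketch below expands a probabilistic top list into a full distribution by spreading the unassigned probability mass uniformly over the unlisted classes (this padding rule is an illustrative assumption, not a detail taken from the abstract) and scores both a single-class point prediction and a two-class top list against the realized class.

```python
# Hedged sketch: evaluating a probabilistic top list with the Brier score.
# Assumption: probability mass not assigned to listed classes is spread
# uniformly over the remaining classes, turning the top list into a full
# distribution that a symmetric proper scoring rule can evaluate.
import numpy as np

def top_list_to_distribution(top_list, classes):
    """Expand a probabilistic top list {class: prob} into a full distribution."""
    p = np.zeros(len(classes))
    idx = {c: i for i, c in enumerate(classes)}
    for c, prob in top_list.items():
        p[idx[c]] = prob
    rest = [i for i, c in enumerate(classes) if c not in top_list]
    if rest:
        p[rest] = (1.0 - sum(top_list.values())) / len(rest)
    return p

def brier_score(p, true_class, classes):
    """Brier score: sum_k (p_k - 1{k = true})^2; lower is better."""
    y = np.array([1.0 if c == true_class else 0.0 for c in classes])
    return float(np.sum((p - y) ** 2))

classes = ["cat", "dog", "fox", "wolf"]
# A single-class point prediction is the degenerate top list {"dog": 1.0}.
point_pred = top_list_to_distribution({"dog": 1.0}, classes)
# A length-two probabilistic top list.
top2_pred = top_list_to_distribution({"dog": 0.6, "wolf": 0.3}, classes)

print(brier_score(point_pred, "wolf", classes))  # 2.0
print(brier_score(top2_pred, "wolf", classes))   # ~0.855
```

Lower is better; hedging over two classes is penalized less than a confident point prediction that turns out to be wrong.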
Related papers
- Conformalized Interval Arithmetic with Symmetric Calibration [9.559062601251464]
We extend conformal prediction intervals for a single target to prediction intervals for the sum of multiple targets.
We show that our method outperforms existing conformalized approaches as well as non-conformal approaches.
arXiv Detail & Related papers (2024-08-20T15:27:18Z)
- Trustworthy Classification through Rank-Based Conformal Prediction Sets [9.559062601251464]
We propose a novel conformal prediction method that employs a rank-based score function suitable for classification models.
Our approach constructs prediction sets that achieve the desired coverage rate while managing their size.
Our contributions include a novel conformal prediction method, theoretical analysis, and empirical evaluation.
arXiv Detail & Related papers (2024-07-05T10:43:41Z)
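The abstract above does not spell out the rank-based score, so the following sketch uses one plausible instantiation (an assumption, not the paper's definition): the nonconformity score of a label is its rank under the classifier's predicted probabilities, calibrated with standard split conformal prediction.

```python
# Hedged sketch of rank-based split conformal classification.
# Assumption: the nonconformity score of a label is its rank under the model's
# predicted probabilities (1 = most probable); the paper's score may differ.
import numpy as np

def ranks(probs):
    """Rank of every class for each row: 1 + number of classes with higher probability."""
    return 1 + (probs[:, None, :] > probs[:, :, None]).sum(axis=2)

def calibrate(cal_probs, cal_labels, alpha=0.1):
    """Rank threshold covering the true label for roughly 1 - alpha of calibration data."""
    true_ranks = ranks(cal_probs)[np.arange(len(cal_labels)), cal_labels]
    n = len(cal_labels)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)   # finite-sample conformal quantile level
    return np.quantile(true_ranks, q, method="higher")

def prediction_sets(test_probs, threshold):
    """All classes whose rank does not exceed the calibrated threshold."""
    return [np.where(r <= threshold)[0].tolist() for r in ranks(test_probs)]

# Toy usage with random "classifier" outputs.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_labels = rng.integers(0, 5, size=200)
tau = calibrate(cal_probs, cal_labels)
print(tau, prediction_sets(rng.dirichlet(np.ones(5), size=3), tau))
```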
- Class-Conditional Conformal Prediction with Many Classes [60.8189977620604]
We propose a method called clustered conformal prediction that clusters together classes having "similar" conformal scores.
We find that clustered conformal typically outperforms existing methods in terms of class-conditional coverage and set size metrics.
arXiv Detail & Related papers (2023-06-15T17:59:02Z)
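A rough reading of "clustering classes with similar conformal scores" can be sketched as follows; the quantile embedding, the clustering step, and the per-cluster thresholds below are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch of clustered (class-conditional) conformal prediction.
# Assumption: classes are clustered by quantiles of their calibration
# nonconformity scores; each cluster then gets its own threshold.
import numpy as np
from sklearn.cluster import KMeans

def cluster_classes(cal_probs, cal_labels, n_classes, n_clusters=2):
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]   # 1 - p(true class)
    # Embed each class by a few quantiles of its calibration-score distribution.
    qs = np.array([np.quantile(scores[cal_labels == c], [0.5, 0.8, 0.9])
                   for c in range(n_classes)])
    assign = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(qs)
    return assign, scores

def cluster_thresholds(scores, cal_labels, class_to_cluster, alpha=0.1):
    thr = {}
    for k in np.unique(class_to_cluster):
        s = scores[np.isin(cal_labels, np.where(class_to_cluster == k)[0])]
        n = len(s)
        q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        thr[k] = np.quantile(s, q, method="higher")
    return thr

def prediction_set(probs, class_to_cluster, thr):
    # Include class c if its score 1 - p_c is below its cluster's threshold.
    return [c for c, p in enumerate(probs) if 1.0 - p <= thr[class_to_cluster[c]]]

rng = np.random.default_rng(1)
K = 6
cal_probs = rng.dirichlet(np.ones(K), size=300)
cal_labels = rng.integers(0, K, size=300)
assign, scores = cluster_classes(cal_probs, cal_labels, K)
thr = cluster_thresholds(scores, cal_labels, assign)
print(prediction_set(rng.dirichlet(np.ones(K)), assign, thr))
```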
- Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach could be combined with not only vanilla conformal prediction, but also other adaptive conformal prediction methods.
arXiv Detail & Related papers (2022-10-01T02:57:37Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
In dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty using Bayesian approximation, capturing uncertainty that deterministic approaches miss.
The effect of dropout weights and of long-term prediction on future state uncertainty is studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
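Uncertainty via Bayesian approximation of the kind described above is commonly obtained with Monte Carlo dropout: dropout stays active at test time and the spread over repeated stochastic forward passes approximates predictive uncertainty. The tiny model below is purely illustrative.

```python
# Hedged sketch of Monte Carlo dropout for predictive uncertainty (toy model).
import torch
import torch.nn as nn

class TinyPredictor(nn.Module):
    """Illustrative regressor; a real trajectory model would be far richer."""
    def __init__(self, d_in=8, d_hidden=32, d_out=2, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Mean and std over stochastic forward passes with dropout kept active."""
    model.train()  # keeps Dropout layers stochastic; acceptable for this toy sketch
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

model = TinyPredictor()
x = torch.randn(4, 8)                      # four dummy input states
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)               # torch.Size([4, 2]) twice
```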
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
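A generic version of such probability re-adjustment can be sketched as Sinkhorn-style alternating row/column normalization over the test example's distribution stacked with high-confidence reference predictions; this is a loose reading of the abstract, not the paper's exact recipe.

```python
# Hedged sketch of alternating (row/column) normalization to re-adjust a test
# example's predicted class distribution. A generic Sinkhorn-style construction,
# assumed here for illustration rather than taken from the paper.
import numpy as np

def alternating_normalization(reference, test_row, class_prior, n_iter=3):
    """reference: (L, K) confident predictions; test_row: (K,); class_prior: (K,)."""
    M = np.vstack([reference, test_row[None, :]]).astype(float)
    for _ in range(n_iter):
        M = M / M.sum(axis=0, keepdims=True) * class_prior      # columns match the prior
        M = M / M.sum(axis=1, keepdims=True)                    # rows are distributions again
    return M[-1]                                                 # adjusted test distribution

reference = np.array([[0.90, 0.05, 0.05],
                      [0.10, 0.85, 0.05],
                      [0.05, 0.10, 0.85]])
test_row = np.array([0.40, 0.35, 0.25])        # an uncertain prediction
prior = np.array([1 / 3, 1 / 3, 1 / 3])
print(alternating_normalization(reference, test_row, prior))
```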
- Performance-Agnostic Fusion of Probabilistic Classifier Outputs [2.4206828137867107]
We propose a method for combining probabilistic outputs of classifiers to make a single consensus class prediction.
Our proposed method works well in situations where accuracy is the performance metric.
It does not output calibrated probabilities, so it is not suitable in situations where such probabilities are required for further processing.
arXiv Detail & Related papers (2020-09-01T16:53:29Z)
- Cautious Active Clustering [79.23797234241471]
We consider the problem of classification of points sampled from an unknown probability measure on a Euclidean space.
Our approach is to consider the unknown probability measure as a convex combination of the conditional probabilities for each class.
arXiv Detail & Related papers (2020-08-03T23:47:31Z)
- Combining Task Predictors via Enhancing Joint Predictability [53.46348489300652]
We present a new predictor combination algorithm that improves the target by i) measuring the relevance of references based on their capabilities in predicting the target, and ii) strengthening such estimated relevance.
Our algorithm jointly assesses the relevance of all references by adopting a Bayesian framework.
Based on experiments on seven real-world datasets from visual attribute ranking and multi-class classification scenarios, we demonstrate that our algorithm offers a significant performance gain and broadens the application range of existing predictor combination approaches.
arXiv Detail & Related papers (2020-07-15T21:58:39Z)
- Classifier uncertainty: evidence, potential impact, and probabilistic treatment [0.0]
We present an approach to quantify the uncertainty of classification performance metrics based on a probability model of the confusion matrix.
We show that uncertainties can be surprisingly large and limit performance evaluation.
arXiv Detail & Related papers (2020-06-19T12:49:19Z)
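One standard way to obtain such uncertainties is a Dirichlet posterior over the cell probabilities of the observed confusion matrix; the uniform prior and the accuracy metric below are illustrative assumptions, not necessarily the paper's probability model.

```python
# Hedged sketch: uncertainty of a performance metric via a Dirichlet posterior
# over confusion-matrix cell probabilities (uniform prior assumed for illustration).
import numpy as np

def accuracy_posterior(confusion, n_samples=10000, prior=1.0, rng=None):
    """Posterior samples of accuracy given an observed confusion matrix of counts."""
    rng = rng or np.random.default_rng(0)
    counts = confusion.ravel().astype(float)
    cell_probs = rng.dirichlet(counts + prior, size=n_samples)        # (n_samples, K*K)
    diag = np.eye(confusion.shape[0], dtype=bool).ravel()
    return cell_probs[:, diag].sum(axis=1)                            # accuracy per sample

confusion = np.array([[18, 2],
                      [4, 6]])                                        # 30 test examples
acc = accuracy_posterior(confusion)
print(acc.mean(), np.quantile(acc, [0.025, 0.975]))                   # point estimate and 95% interval
```

Even with 30 test examples the resulting interval is wide, which is the point the paper makes about performance-metric uncertainty.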
- Training conformal predictors [0.0]
Efficiency criteria for conformal prediction, such as observed fuzziness, are commonly used to evaluate the performance of given conformal predictors.
Here, we investigate whether it is possible to exploit such criteria to learn classifiers.
arXiv Detail & Related papers (2020-05-14T14:47:30Z)
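The efficiency criterion mentioned above can be written down directly: observed fuzziness is commonly defined as the average sum of conformal p-values attached to the false labels. The split-conformal p-value construction in the sketch is a standard one, assumed here rather than taken from the paper.

```python
# Hedged sketch of the "observed fuzziness" efficiency criterion.
# Assumption: p-values come from a standard split-conformal construction with
# nonconformity score 1 - p(label); the paper may use or train against it differently.
import numpy as np

def conformal_p_values(cal_scores, test_scores_all_labels):
    """p-value of each candidate label: fraction of calibration scores >= its score."""
    cal = np.sort(cal_scores)
    n = len(cal)
    # For score s, p = (#{cal >= s} + 1) / (n + 1).
    ge = n - np.searchsorted(cal, test_scores_all_labels, side="left")
    return (ge + 1) / (n + 1)

def observed_fuzziness(p_values, true_labels):
    """Average sum of p-values of the false labels (smaller = more efficient)."""
    mask = np.ones_like(p_values, dtype=bool)
    mask[np.arange(len(true_labels)), true_labels] = False
    return p_values[mask].reshape(len(true_labels), -1).sum(axis=1).mean()

rng = np.random.default_rng(2)
cal_probs = rng.dirichlet(np.ones(4), size=100)
cal_labels = rng.integers(0, 4, size=100)
cal_scores = 1.0 - cal_probs[np.arange(100), cal_labels]
test_probs = rng.dirichlet(np.ones(4), size=10)
p_vals = conformal_p_values(cal_scores, 1.0 - test_probs)     # (10, 4) p-values
print(observed_fuzziness(p_vals, rng.integers(0, 4, size=10)))
```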