Conformal Risk Control for Ordinal Classification
- URL: http://arxiv.org/abs/2405.00417v1
- Date: Wed, 1 May 2024 09:55:31 GMT
- Title: Conformal Risk Control for Ordinal Classification
- Authors: Yunpeng Xu, Wenge Guo, Zhi Wei
- Abstract summary: We seek to control the conformal risk in expectation for ordinal classification tasks, which have broad applications to many real problems.
We propose two types of loss functions specially designed for ordinal classification tasks, and develop corresponding algorithms to determine the prediction set for each case.
We demonstrate the effectiveness of our proposed methods, and analyze the difference between the two types of risks on three different datasets.
- Score: 2.0189665663352936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a natural extension of the standard conformal prediction method, several conformal risk control methods have recently been developed and applied to various learning problems. In this work, we seek to control the conformal risk in expectation for ordinal classification tasks, which have broad applications to many real problems. To this end, we first formulate the ordinal classification task in the conformal risk control framework and provide theoretical risk bounds for the risk control method. We then propose two types of loss functions specially designed for ordinal classification tasks and develop corresponding algorithms that determine the prediction set for each case so as to control its risk at a desired level. We demonstrate the effectiveness of the proposed methods and analyze the difference between the two types of risks on three datasets: a simulated dataset, the UTKFace dataset, and the diabetic retinopathy detection dataset.
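The generic recipe behind conformal risk control can be sketched as follows. This is a minimal illustration, not the paper's exact loss functions: it assumes a synthetic calibration set, interval-shaped prediction sets of integer radius lambda around a point prediction, and a miscoverage loss bounded by 1. The rule is to pick the smallest lambda whose adjusted empirical calibration risk falls at or below the target level alpha.

```python
# Minimal sketch of conformal risk control for ordinal labels
# (hypothetical data and set construction, for illustration only).
import numpy as np

rng = np.random.default_rng(0)
K = 5        # number of ordinal classes 0..K-1
n = 500      # calibration set size
alpha = 0.1  # target risk level

# Hypothetical calibration data: a noisy point prediction per example.
y_true = rng.integers(0, K, size=n)
y_hat = np.clip(y_true + rng.integers(-2, 3, size=n), 0, K - 1)

def miscoverage(lam):
    """Empirical risk of the interval sets [y_hat - lam, y_hat + lam]."""
    covered = np.abs(y_true - y_hat) <= lam
    return 1.0 - covered.mean()

B = 1.0  # upper bound on the loss
# Smallest radius lambda with (n * R_hat(lam) + B) / (n + 1) <= alpha;
# this adjusted average is what yields the expected-risk guarantee.
lam_hat = next(lam for lam in range(K)
               if (n * miscoverage(lam) + B) / (n + 1) <= alpha)

print("chosen radius:", lam_hat)
print("calibration risk:", miscoverage(lam_hat))
```

Because the miscoverage loss is non-increasing in lambda (larger sets can only cover more), scanning lambda from small to large and stopping at the first feasible value suffices; the prediction set for a new example is then the label interval of radius `lam_hat` around its point prediction.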
Related papers
- Two-stage Conformal Risk Control with Application to Ranked Retrieval [1.8481458455172357]
Two-stage ranked retrieval is a significant challenge for machine learning systems.
We propose an integrated approach to control the risk of each stage by jointly identifying thresholds for both stages.
Our algorithm further optimizes a weighted combination of prediction set sizes across all feasible thresholds, resulting in more effective prediction sets.
arXiv Detail & Related papers (2024-04-27T03:37:12Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Quantifying Uncertainty in Deep Learning Classification with Noise in Discrete Inputs for Risk-Based Decision Making [1.529943343419486]
We propose a mathematical framework to quantify prediction uncertainty for Deep Neural Network (DNN) models.
The prediction uncertainty arises from errors in predictors that follow some known finite discrete distribution.
Our proposed framework can support risk-based decision making in applications when discrete errors in predictors are present.
arXiv Detail & Related papers (2023-10-09T19:26:24Z)
- Anomaly detection with semi-supervised classification based on risk estimators [4.519754139322585]
We propose two novel classification-based anomaly detection methods.
Firstly, we introduce a semi-supervised shallow anomaly detection method based on an unbiased risk estimator.
Secondly, we present a semi-supervised deep anomaly detection method utilizing a nonnegative (biased) risk estimator.
arXiv Detail & Related papers (2023-09-01T10:30:48Z)
- Deep Learning for Systemic Risk Measures [3.274367403737527]
The aim of this paper is to study a new methodological framework for systemic risk measures.
Under this new framework, systemic risk measures can be interpreted as the minimal amount of cash that secures the aggregated system.
Deep learning is increasingly receiving attention in financial modeling and risk management.
arXiv Detail & Related papers (2022-07-02T05:01:19Z)
- Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z)
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control [67.52000805944924]
Learn then Test (LTT) is a framework for calibrating machine learning models.
Our main insight is to reframe the risk-control problem as multiple hypothesis testing.
We use our framework to provide new calibration methods for several core machine learning tasks with detailed worked examples in computer vision.
arXiv Detail & Related papers (2021-10-03T17:42:03Z)
- Feedback Effects in Repeat-Use Criminal Risk Assessments [0.0]
We show that risk can propagate over sequential decisions in ways that are not captured by one-shot tests.
Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity.
arXiv Detail & Related papers (2020-11-28T06:40:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.