Regression with Cost-based Rejection
- URL: http://arxiv.org/abs/2311.04550v1
- Date: Wed, 8 Nov 2023 09:33:21 GMT
- Title: Regression with Cost-based Rejection
- Authors: Xin Cheng and Yuzhou Cao and Haobo Wang and Hongxin Wei and Bo An and
Lei Feng
- Abstract summary: We investigate a novel regression problem in which the model can refuse to make predictions on some examples, given certain rejection costs.
We derive the Bayes optimal solution, which shows that the optimal model should reject examples whose conditional variance is larger than the rejection cost.
- Score: 30.43900105405108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning with rejection is an important framework that can refrain from
making predictions on difficult examples, avoiding critical mispredictions by
balancing between prediction and rejection. Previous studies on cost-based
rejection focused only on the classification setting, which cannot handle the
continuous and infinite target space of the regression setting. In this paper,
we investigate a novel regression problem called regression with cost-based
rejection, where the model can refuse to make predictions on some examples
given certain rejection costs. To solve this problem, we first formulate the
expected risk and then derive the Bayes optimal solution, which shows that,
when the mean squared error is used as the evaluation metric, the optimal model
should reject examples whose conditional variance is larger than the rejection
cost. Furthermore, we propose to train the model with a surrogate loss function
that treats rejection as binary classification, and we provide conditions for
model consistency, which imply that the Bayes optimal solution can be recovered
by our proposed surrogate loss. Extensive experiments demonstrate the
effectiveness of our proposed method.
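The Bayes optimal rule described in the abstract can be illustrated with a minimal sketch: under mean squared error, the optimal predictor outputs the conditional mean but rejects any example whose conditional variance exceeds the rejection cost. The sketch below assumes the conditional mean and variance are already available (in practice they would be estimated, e.g. via the paper's surrogate loss); the function name and NaN convention for rejected examples are illustrative, not from the paper.

```python
import numpy as np

def bayes_reject_predict(mean, var, cost):
    """Bayes optimal prediction with cost-based rejection under MSE:
    predict the conditional mean E[Y|x], but reject whenever the
    conditional variance Var[Y|x] exceeds the rejection cost c."""
    mean = np.asarray(mean, dtype=float)
    var = np.asarray(var, dtype=float)
    reject = var > cost                      # reject iff Var[Y|x] > c
    preds = np.where(reject, np.nan, mean)   # NaN marks rejected examples
    return preds, reject

# Toy example: three inputs with increasing predictive variance.
means = [0.5, 1.2, -0.3]
vars_ = [0.1, 0.4, 0.9]
preds, rejected = bayes_reject_predict(means, vars_, cost=0.5)
```

Only the third example, whose variance 0.9 exceeds the cost 0.5, is rejected; the other two receive their conditional-mean predictions.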
Related papers
- Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization [60.176008034221404]
Direct Preference Optimization (DPO) and its variants are increasingly used for aligning language models with human preferences.
Prior work has observed that the likelihood of preferred responses often decreases during training.
We demonstrate that likelihood displacement can be catastrophic, shifting probability mass from preferred responses to responses with an opposite meaning.
arXiv Detail & Related papers (2024-10-11T14:22:44Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm that allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek an idealized data distribution that maximizes a pretrained model's performance.
Our framework is tested empirically on clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Conformalized Selective Regression [2.3964255330849356]
We propose a novel approach to selective regression by leveraging conformal prediction.
We show how our proposed approach, conformalized selective regression, demonstrates an advantage over multiple state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-26T04:43:50Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC), that can be applied to either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly in the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
The method is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- When No-Rejection Learning is Consistent for Regression with Rejection [11.244583592648443]
This paper investigates a no-rejection learning strategy that uses all the data to learn the prediction.
arXiv Detail & Related papers (2023-07-06T11:43:22Z)
- Selective Regression Under Fairness Criteria [30.672082160544996]
In some cases, the performance of a minority group can decrease while we reduce the coverage.
We show that such unwanted behavior can be avoided if we can construct features satisfying the sufficiency criterion.
arXiv Detail & Related papers (2021-10-28T19:05:12Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered, and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Regression with reject option and application to kNN [0.0]
We refer to this framework as regression with reject option, as an extension of classification with reject option.
We provide a semi-supervised estimation procedure of the optimal rule involving two datasets.
The resulting predictor with reject option is shown to be almost as good as the optimal predictor with reject option, both in terms of risk and rejection rate.
arXiv Detail & Related papers (2020-06-30T08:20:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.