Unifying Lower Bounds on Prediction Dimension of Consistent Convex
Surrogates
- URL: http://arxiv.org/abs/2102.08218v1
- Date: Tue, 16 Feb 2021 15:29:05 GMT
- Title: Unifying Lower Bounds on Prediction Dimension of Consistent Convex
Surrogates
- Authors: Jessie Finocchiaro and Rafael Frongillo and Bo Waggoner
- Abstract summary: Given a prediction task, understanding when one can and cannot design a consistent convex surrogate loss is an important area of machine learning research.
We unify these settings using tools from property elicitation, and give a general lower bound on prediction dimension.
Our lower bound tightens existing results in the case of discrete predictions, showing that previous calibration-based bounds can largely be recovered via property elicitation.
For continuous estimation, our lower bound resolves an open problem on estimating measures of risk and uncertainty.
- Score: 12.751555473216683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given a prediction task, understanding when one can and cannot design a
consistent convex surrogate loss, particularly a low-dimensional one, is an
important and active area of machine learning research. The prediction task may
be given as a target loss, as in classification and structured prediction, or
simply as a (conditional) statistic of the data, as in risk measure estimation.
These two scenarios typically involve different techniques for designing and
analyzing surrogate losses. We unify these settings using tools from property
elicitation, and give a general lower bound on prediction dimension. Our lower
bound tightens existing results in the case of discrete predictions, showing
that previous calibration-based bounds can largely be recovered via property
elicitation. For continuous estimation, our lower bound resolves an open
problem on estimating measures of risk and uncertainty.
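To make the abstract's main objects concrete, here is a minimal illustrative sketch (not taken from the paper): the hinge loss as a one-dimensional consistent convex surrogate for binary 0-1 loss. The surrogate's minimizer is a statistic (property) of the conditional label distribution, and the link sign(u) maps it back to the Bayes-optimal target prediction, so a prediction dimension of one suffices in this case. The loss, grid search, and link below are standard textbook choices, not the paper's construction.

```python
# Illustrative sketch (assumed example, not from the paper): hinge loss as a
# one-dimensional consistent convex surrogate for binary 0-1 loss.
import numpy as np

def expected_hinge(u, p):
    """Expected hinge loss E[max(0, 1 - u*Y)] when P(Y=+1) = p and P(Y=-1) = 1 - p."""
    return p * max(0.0, 1.0 - u) + (1.0 - p) * max(0.0, 1.0 + u)

def expected_zero_one(r, p):
    """Expected 0-1 loss of the target prediction r in {-1, +1}."""
    return p * (r != 1) + (1.0 - p) * (r != -1)

grid = np.linspace(-2.0, 2.0, 4001)  # one-dimensional surrogate reports u
for p in [0.1, 0.3, 0.7, 0.9]:
    # The surrogate minimizer is a property of the conditional distribution p = P(Y=+1 | x).
    u_star = grid[np.argmin([expected_hinge(u, p) for u in grid])]
    linked = 1 if u_star >= 0 else -1                      # link psi(u) = sign(u)
    bayes = min([-1, 1], key=lambda r: expected_zero_one(r, p))
    print(f"p={p:.1f}  surrogate minimizer u*={u_star:+.2f}  "
          f"link(u*)={linked:+d}  Bayes-optimal={bayes:+d}")
```

Running the sketch shows the linked surrogate minimizer agreeing with the Bayes-optimal target prediction for every conditional distribution, which is the consistency property the paper's lower bounds constrain.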
Related papers
- Conformalized Multimodal Uncertainty Regression and Reasoning [0.9205582989348333]
This paper introduces a lightweight uncertainty estimator capable of predicting multimodal (disjoint) uncertainty bounds.
We specifically discuss its application for visual odometry (VO), where environmental features such as flying domain symmetries can result in multimodal uncertainties.
arXiv Detail & Related papers (2023-09-20T02:40:59Z)
- On the Expected Size of Conformal Prediction Sets [24.161372736642157]
We theoretically quantify the expected size of the prediction sets under the split conformal prediction framework (a generic sketch of split-conformal calibration appears after this list).
As this precise formulation usually cannot be calculated directly, we derive point estimates and high-probability interval bounds.
We corroborate the efficacy of our results with experiments on real-world datasets for both regression and classification problems.
arXiv Detail & Related papers (2023-06-12T17:22:57Z)
- Conformal Prediction with Missing Values [19.18178194789968]
We first show that the marginal coverage guarantee of conformal prediction holds on imputed data for any missingness distribution.
We then show that a universally consistent quantile regression algorithm trained on the imputed data is Bayes optimal for the pinball risk.
arXiv Detail & Related papers (2023-06-05T09:28:03Z)
- Conformal Loss-Controlling Prediction [23.218535051437588]
Conformal prediction is a learning framework that controls the coverage of prediction sets.
This work proposes a learning framework named conformal loss-controlling prediction, which extends conformal prediction to the situation where the value of a loss function needs to be controlled.
arXiv Detail & Related papers (2023-01-06T08:58:49Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
arXiv Detail & Related papers (2021-01-07T18:59:33Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Probabilistic Deep Learning for Instance Segmentation [9.62543698736491]
We propose a generic method to obtain model-inherent uncertainty estimates within proposal-free instance segmentation models.
We evaluate our method on the BBBC010 C. elegans dataset, where it yields competitive performance.
arXiv Detail & Related papers (2020-08-24T19:51:48Z)
- Learning Output Embeddings in Structured Prediction [73.99064151691597]
A powerful and flexible approach to structured prediction consists in embedding the structured objects to be predicted into a feature space of possibly infinite dimension.
A prediction in the original space is computed by solving a pre-image problem.
In this work, we propose to jointly learn a finite approximation of the output embedding and the regression function into the new feature space.
arXiv Detail & Related papers (2020-07-29T09:32:53Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
- A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings [113.15416137912399]
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
arXiv Detail & Related papers (2020-02-13T10:30:04Z)
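As referenced in the split conformal entry above, here is a generic sketch of the holdout-calibration idea shared by several of the conformal-prediction papers listed: split conformal regression intervals. The toy data, origin-constrained least-squares model, and alpha = 0.1 are placeholder assumptions; this is the textbook construction, not the specific procedure of any single paper.

```python
# Generic split-conformal regression sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise; any fitted regressor could replace the closed-form fit below.
x = rng.uniform(0.0, 1.0, size=400)
y = 2.0 * x + rng.normal(scale=0.3, size=400)

# Split the data: a proper training set for the model, a holdout set for calibration.
x_tr, y_tr = x[:200], y[:200]
x_cal, y_cal = x[200:], y[200:]

slope = np.sum(x_tr * y_tr) / np.sum(x_tr * x_tr)  # least squares fit through the origin

def predict(z):
    return slope * z

# Calibrate: the finite-sample-corrected (1 - alpha) quantile of holdout residuals
# gives the half-width of intervals with marginal coverage at least 1 - alpha.
alpha = 0.1
scores = np.sort(np.abs(y_cal - predict(x_cal)))
k = int(np.ceil((len(scores) + 1) * (1.0 - alpha)))
q = scores[k - 1]

x_new = 0.5
print(f"~90% prediction interval at x={x_new}: "
      f"[{predict(x_new) - q:.2f}, {predict(x_new) + q:.2f}]")
```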