Estimation and Applications of Quantiles in Deep Binary Classification
- URL: http://arxiv.org/abs/2102.06575v1
- Date: Tue, 9 Feb 2021 07:07:42 GMT
- Title: Estimation and Applications of Quantiles in Deep Binary Classification
- Authors: Anuj Tambwekar, Anirudh Maiya, Soma Dhavala, Snehanshu Saha
- Abstract summary: Quantile regression, based on check loss, is a widely used inferential paradigm in Statistics.
We consider the analogue of check loss in the binary classification setting.
We develop individualized confidence scores that can be used to decide whether a prediction is reliable.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantile regression, based on check loss, is a widely used inferential
paradigm in Econometrics and Statistics. The conditional quantiles provide a
robust alternative to classical conditional means, and also allow uncertainty
quantification of the predictions, while making very few distributional
assumptions. We consider the analogue of check loss in the binary
classification setting. We assume that the conditional quantiles are smooth
functions that can be learnt by Deep Neural Networks (DNNs). Subsequently, we
compute the Lipschitz constant of the proposed loss, and also show that its
curvature is bounded, under some regularity conditions. Consequently, recent
results on the error rates and DNN architecture complexity become directly
applicable.
We quantify the uncertainty of the class probabilities in terms of prediction
intervals, and develop individualized confidence scores that can be used to
decide whether a prediction is reliable or not at scoring time. By aggregating
the confidence scores at the dataset level, we provide two additional metrics,
model confidence and retention rate, to complement the widely used classifier
summaries. We also study the robustness of the proposed non-parametric binary
quantile classification framework, and we demonstrate how to obtain several
univariate summary statistics of the conditional distributions, in particular
conditional means, using smoothed conditional quantiles, allowing the use of
explanation techniques such as Shapley values to explain the mean predictions.
Finally, we demonstrate an efficient training regime for this loss based on
Stochastic Gradient Descent with Lipschitz Adaptive Learning Rates (LALR).
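A minimal sketch of the check-loss analogue described in the abstract: the standard pinball loss rho_tau(u) = u * (tau - 1[u < 0]) applied to a sigmoid network head that estimates the conditional tau-quantile of a binary label. The exact smoothed loss used in the paper may differ; this is an illustration, not the paper's formulation.

```python
import torch

def check_loss(y: torch.Tensor, q_hat: torch.Tensor, tau: float) -> torch.Tensor:
    # Pinball/check loss rho_tau(u) = u * (tau - 1[u < 0]),
    # written as max(tau * u, (tau - 1) * u) for vectorized evaluation.
    u = y - q_hat
    return torch.mean(torch.maximum(tau * u, (tau - 1.0) * u))

# Example: labels in {0, 1}, sigmoid outputs as stand-in quantile estimates,
# with under-prediction penalized more heavily at tau = 0.9.
y = torch.tensor([0.0, 1.0, 1.0])
q_hat = torch.sigmoid(torch.tensor([-1.2, 0.4, 2.0]))
loss = check_loss(y, q_hat, tau=0.9)
```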
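The following sketch shows how lower and upper quantile estimates of the class probability can yield per-sample prediction intervals, individualized confidence scores, and the two dataset-level aggregates (model confidence, retention rate). The specific definitions here (confidence as one minus interval width, retention via a user-chosen threshold) are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def interval_metrics(q_lo: np.ndarray, q_hi: np.ndarray, threshold: float = 0.8):
    # q_lo, q_hi: estimated lower/upper quantiles of P(Y = 1 | x), one per sample.
    width = np.clip(q_hi - q_lo, 0.0, 1.0)       # prediction-interval width
    confidence = 1.0 - width                     # individualized confidence score
    model_confidence = float(confidence.mean())  # dataset-level aggregate
    retention_rate = float((confidence >= threshold).mean())  # fraction deemed reliable
    return confidence, model_confidence, retention_rate
```

At scoring time, a prediction whose confidence falls below the threshold can be flagged as unreliable rather than acted upon.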
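The identity E[Y | x] = \int_0^1 Q(tau | x) dtau is what lets smoothed conditional quantiles recover conditional means. The sketch below approximates the integral with a midpoint rule over a grid of quantile levels; `quantile_fn` is a hypothetical interface to a fitted conditional-quantile model, not an API from the paper.

```python
import numpy as np

def conditional_mean_from_quantiles(quantile_fn, x, n_taus: int = 99):
    # Midpoint-rule approximation of E[Y | x] = integral of Q(tau | x) over (0, 1).
    taus = (np.arange(n_taus) + 0.5) / n_taus
    return np.mean([quantile_fn(x, tau) for tau in taus], axis=0)
```

The resulting mean predictions behave like an ordinary scalar-valued model, so they can be passed directly to a Shapley-style explainer.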
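Finally, a hedged sketch of SGD with a Lipschitz-adaptive learning rate of the form eta = 1 / K_hat. Here K_hat is taken to be the current global gradient norm, a crude illustrative proxy for the Lipschitz constant rather than the closed-form bound the paper derives for its loss.

```python
import torch

def lalr_sgd_step(model: torch.nn.Module, loss_fn, x, y, eps: float = 1e-8):
    # One SGD step with step size eta = 1 / K_hat, where K_hat is the current
    # gradient norm (an assumption for illustration, not the paper's bound).
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    k_hat = torch.sqrt(sum((g ** 2).sum() for g in grads))
    eta = 1.0 / (float(k_hat) + eps)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= eta * p.grad
    return float(loss), eta
```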
Related papers
- Semiparametric conformal prediction [79.6147286161434]
Risk-sensitive applications require well-calibrated prediction sets over multiple, potentially correlated target variables.
We treat the scores as random vectors and aim to construct the prediction set accounting for their joint correlation structure.
We report desired coverage and competitive efficiency on a range of real-world regression problems.
arXiv Detail & Related papers (2024-11-04T14:29:02Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
- Conformal Prediction with Missing Values [19.18178194789968]
We first show that the marginal coverage guarantee of conformal prediction holds on imputed data for any missingness distribution.
We then show that a universally consistent quantile regression algorithm trained on the imputed data is Bayes optimal for the pinball risk.
arXiv Detail & Related papers (2023-06-05T09:28:03Z)
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Adaptive Conformal Prediction by Reweighting Nonconformity Score [0.0]
We use a Quantile Regression Forest (QRF) to learn the distribution of nonconformity scores and utilize the QRF's weights to assign more importance to samples with residuals similar to the test point.
Our approach enjoys an assumption-free finite sample marginal and training-conditional coverage, and under suitable assumptions, it also ensures conditional coverage.
arXiv Detail & Related papers (2023-03-22T16:42:19Z)
- Approximate Conditional Coverage via Neural Model Approximations [0.030458514384586396]
We analyze a data-driven procedure for obtaining empirically reliable approximate conditional coverage.
We demonstrate the potential for substantial (and otherwise unknowable) under-coverage with split-conformal alternatives with marginal coverage guarantees.
arXiv Detail & Related papers (2022-05-28T02:59:05Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)