Conditionally Calibrated Predictive Distributions by
Probability-Probability Map: Application to Galaxy Redshift Estimation and
Probabilistic Forecasting
- URL: http://arxiv.org/abs/2205.14568v4
- Date: Mon, 17 Jul 2023 16:58:54 GMT
- Title: Conditionally Calibrated Predictive Distributions by
Probability-Probability Map: Application to Galaxy Redshift Estimation and
Probabilistic Forecasting
- Authors: Biprateep Dey and David Zhao and Jeffrey A. Newman and Brett H.
Andrews and Rafael Izbicki and Ann B. Lee
- Abstract summary: Uncertainty quantification is crucial for assessing the predictive ability of AI algorithms.
We propose Cal-PIT, a method that addresses both PD diagnostics and recalibration.
We benchmark our corrected prediction bands against oracle bands and state-of-the-art predictive inference algorithms.
- Score: 4.186140302617659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification is crucial for assessing the predictive ability of
AI algorithms. Much research has been devoted to describing the predictive
distribution (PD) $F(y|\mathbf{x})$ of a target variable $y \in \mathbb{R}$
given complex input features $\mathbf{x} \in \mathcal{X}$. However,
off-the-shelf PDs (from, e.g., normalizing flows and Bayesian neural networks)
often lack conditional calibration, with the probability of occurrence of an
event given input $\mathbf{x}$ differing significantly from the predicted
probability. Current calibration methods do not fully assess and enforce
conditionally calibrated PDs. Here we propose \texttt{Cal-PIT}, a method that
addresses both PD diagnostics and recalibration by learning a single
probability-probability map from calibration data. The key idea is to regress
probability integral transform scores against $\mathbf{x}$. The estimated
regression provides interpretable diagnostics of conditional coverage across
the feature space. The same regression function morphs the misspecified PD to a
re-calibrated PD for all $\mathbf{x}$. We benchmark our corrected prediction
bands (a by-product of corrected PDs) against oracle bands and state-of-the-art
predictive inference algorithms for synthetic data. We also provide results for
two applications: (i) probabilistic nowcasting given sequences of satellite
images, and (ii) conditional density estimation of galaxy distances given
imaging data (so-called photometric redshift estimation). Our code is available
as a Python package at https://github.com/lee-group-cmu/Cal-PIT.
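The abstract's core recipe (regress PIT scores against $\mathbf{x}$, then reuse the fitted regression as a probability-probability map) can be illustrated in a few lines. The sketch below is our own toy rendition, not the Cal-PIT package API; the misspecified Gaussian PD, the gradient-boosting classifier, and all variable names are assumptions made for illustration.

```python
# Toy rendition of the Cal-PIT idea (not the authors' package API).
# Learn r_hat(gamma; x) ~ P(PIT <= gamma | x) by regressing the indicator
# 1{PIT <= gamma} on (gamma, x); then recalibrate via F_new(y|x) = r_hat(F(y|x); x).
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Assumed toy model: true Y|x ~ N(0, 1 + x^2); the initial PD wrongly says N(0, 1).
n = 4000
x = rng.uniform(-2, 2, size=n)
y = rng.normal(0.0, np.sqrt(1 + x**2))
pit = norm.cdf(y)                                # PIT scores under the misspecified PD

# Regress 1{PIT <= gamma} against (gamma, x), using random coverage levels gamma.
gamma = rng.uniform(0, 1, size=n)
W = (pit <= gamma).astype(int)
clf = GradientBoostingClassifier().fit(np.column_stack([gamma, x]), W)
r_hat = lambda g, xs: clf.predict_proba(np.column_stack([g, xs]))[:, 1]

# Diagnostics: under conditional calibration, r_hat(gamma; x) == gamma for all x.
# Recalibration: push the misspecified CDF values through the learned P-P map.
y_grid = np.linspace(-4, 4, 5)
x0 = np.full_like(y_grid, 1.5)
print("recalibrated:", np.round(r_hat(norm.cdf(y_grid), x0), 2))
print("ground truth:", np.round(norm.cdf(y_grid, scale=np.sqrt(1 + 1.5**2)), 2))
```

Under conditional calibration the learned map is the identity; deviations of r_hat from the diagonal localize where the initial PD over- or under-covers, which is the interpretable diagnostic described in the abstract.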
Related papers
- Semiparametric conformal prediction [79.6147286161434]
We construct a conformal prediction set accounting for the joint correlation structure of the vector-valued non-conformity scores.
We flexibly estimate the joint cumulative distribution function (CDF) of the scores.
Our method yields desired coverage and competitive efficiency on a range of real-world regression problems.
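To make the idea concrete, here is a minimal sketch of one way vector-valued scores can be scalarized through their estimated joint CDF and then passed through standard split conformal; the toy scores and the plain empirical CDF estimator are our assumptions, not the paper's semiparametric estimator.

```python
# Our illustrative construction (not the paper's exact algorithm):
# scalarize vector non-conformity scores via their empirical joint CDF,
# then run ordinary split conformal on that scalar.
import numpy as np

rng = np.random.default_rng(1)
n, d, alpha = 2000, 2, 0.1

# Toy vector scores, e.g. per-output absolute residuals of a fitted model.
S_cal = np.abs(rng.normal(size=(n, d)) * [1.0, 2.0])

def joint_cdf(S_ref, S):
    # Empirical joint CDF of S_ref, evaluated at each row of S.
    return np.mean(np.all(S_ref[None, :, :] <= S[:, None, :], axis=2), axis=1)

u_cal = joint_cdf(S_cal, S_cal)                # scalarized scores in [0, 1]
k = int(np.ceil((1 - alpha) * (n + 1)))        # split-conformal rank
tau = np.sort(u_cal)[k - 1]

# A test point is in the prediction set iff its joint-CDF value is <= tau.
S_test = np.abs(rng.normal(size=(5000, d)) * [1.0, 2.0])
print(f"coverage: {np.mean(joint_cdf(S_cal, S_test) <= tau):.3f} (target {1 - alpha})")
```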
arXiv Detail & Related papers (2024-11-04T14:29:02Z)
- Variance Reduction for the Independent Metropolis Sampler [11.074080383657453]
We prove that if $\pi$ is close enough under KL divergence to another density $q$, an independent Metropolis sampler that draws proposals from $q$ achieves smaller variance than i.i.d. sampling from $\pi$.
We propose an adaptive independent Metropolis algorithm that adapts the proposal density so that its KL divergence from the target decreases.
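For reference, a textbook independent Metropolis-Hastings sampler with a fixed proposal looks like the following; the paper's adaptive, KL-driven proposal updates are not reproduced here, and the Gaussian target and proposal are our toy choices.

```python
# Minimal independent Metropolis-Hastings sampler (textbook version, fixed proposal).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

log_pi = lambda x: norm.logpdf(x, loc=0.0, scale=1.0)   # target density pi
log_q = lambda x: norm.logpdf(x, loc=0.5, scale=1.5)    # fixed proposal q

x, chain = 0.0, []
for _ in range(20000):
    x_new = rng.normal(0.5, 1.5)                        # propose from q, independent of x
    # Independent MH acceptance ratio: pi(x') q(x) / (pi(x) q(x')).
    log_a = log_pi(x_new) + log_q(x) - log_pi(x) - log_q(x_new)
    if np.log(rng.uniform()) < log_a:
        x = x_new
    chain.append(x)

print(f"mean ~ {np.mean(chain):.3f}, var ~ {np.var(chain):.3f}")  # expect ~0, ~1
```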
arXiv Detail & Related papers (2024-06-25T16:38:53Z)
- Orthogonal Causal Calibration [55.28164682911196]
We develop general algorithms for reducing the task of causal calibration to that of calibrating a standard (non-causal) predictive model.
Our results are exceedingly general, showing that essentially any existing calibration algorithm can be used in causal settings.
arXiv Detail & Related papers (2024-06-04T03:35:25Z)
- Nearest Neighbor Sampling for Covariate Shift Adaptation [7.940293148084844]
We propose a new covariate shift adaptation method which avoids estimating the weights.
The basic idea is to work directly on the unlabeled target data, labeling each point according to its $k$-nearest neighbors in the source dataset.
Our experiments show that it achieves a drastic reduction in running time while maintaining remarkable accuracy.
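A hedged sketch of that idea, under our own toy setup (Gaussian covariate shift, a sine response, and scikit-learn estimators as stand-ins):

```python
# k-NN pseudo-labeling for covariate shift: label target points by nearby
# source points, then train on the target sample; no importance weights
# are ever estimated. Setup and model choices are ours, for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
f = lambda x: np.sin(2 * x)

# Labeled source (x ~ N(0,1)) and shifted, unlabeled target (x ~ N(1,1)).
x_src = rng.normal(0, 1, (3000, 1))
y_src = f(x_src[:, 0]) + 0.1 * rng.normal(size=3000)
x_tgt = rng.normal(1, 1, (3000, 1))

# Step 1: pseudo-label each target point from its k nearest source neighbors.
y_pseudo = KNeighborsRegressor(n_neighbors=5).fit(x_src, y_src).predict(x_tgt)

# Step 2: fit the downstream model directly on the pseudo-labeled target data.
model = DecisionTreeRegressor(max_depth=5).fit(x_tgt, y_pseudo)
x_eval = rng.normal(1, 1, (1000, 1))
print(f"target MSE: {np.mean((model.predict(x_eval) - f(x_eval[:, 0]))**2):.3f}")
```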
arXiv Detail & Related papers (2023-12-15T17:28:09Z)
- SPD-DDPM: Denoising Diffusion Probabilistic Models in the Symmetric Positive Definite Space [47.65912121120524]
We propose a novel generative model, termed SPD-DDPM, to handle large-scale data.
Our model is able to estimate $p(X)$ unconditionally and flexibly without giving $y$.
Experiment results on toy data and real taxi data demonstrate that our models effectively fit the data distribution both unconditionally and conditionally.
arXiv Detail & Related papers (2023-12-13T15:08:54Z)
- Proximity-Informed Calibration for Deep Neural Networks [49.330703634912915]
ProCal is a plug-and-play algorithm with a theoretical guarantee to adjust sample confidence based on proximity.
We show that ProCal is effective in addressing proximity bias and improving calibration on balanced, long-tail, and distribution-shift settings.
arXiv Detail & Related papers (2023-06-07T16:40:51Z)
- Will My Robot Achieve My Goals? Predicting the Probability that an MDP Policy Reaches a User-Specified Behavior Target [56.99669411766284]
As an autonomous system performs a task, it should maintain a calibrated estimate of the probability that it will achieve the user's goal.
This paper considers settings where the user's goal is specified as a target interval for a real-valued performance summary.
We compute the probability estimates by inverting conformal prediction.
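One plausible reading of "inverting conformal prediction", sketched below with our own toy numbers: use calibration ranks to bound the probability that a fresh performance summary lands in the user's target interval. This illustrates the general trick only, not the paper's exact estimator.

```python
# Rank-based probability estimate for a user-specified target interval;
# the calibration data and interval below are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
cal = rng.normal(10.0, 2.0, size=200)   # performance summaries from past rollouts
lo, hi = 9.0, 12.0                      # user-specified behavior target interval

n = len(cal)
# Conformal-style rank bounds on P(lo <= G_new <= hi) for an exchangeable new run.
p_le_hi = (np.sum(cal <= hi) + 1) / (n + 1)
p_lt_lo = np.sum(cal < lo) / (n + 1)
print(f"estimated P(target achieved) ~ {p_le_hi - p_lt_lo:.3f}")
```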
arXiv Detail & Related papers (2022-11-29T18:41:20Z)
- A Consistent and Differentiable Lp Canonical Calibration Error Estimator [21.67616079217758]
Deep neural networks are poorly calibrated and tend to output overconfident predictions.
We propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates.
Our method has a natural choice of kernel, and can be used to generate consistent estimates of other quantities.
arXiv Detail & Related papers (2022-10-13T15:11:11Z)
- CPnP: Consistent Pose Estimator for Perspective-n-Point Problem with Bias Elimination [10.099598169569383]
We propose a consistent solver, named CPnP, with bias elimination.
We analyze and subtract the bias, based on which a closed-form least-squares solution is obtained.
Tests on both synthetic data and real images show that our proposed estimator is superior to some well-known ones for images with dense visual features.
arXiv Detail & Related papers (2022-09-13T09:00:58Z)
- Statistical Hypothesis Testing Based on Machine Learning: Large Deviations Analysis [15.605887551756933]
We study the performance -- and specifically the rate at which the error probability converges to zero -- of Machine Learning (ML) classification techniques.
We provide the mathematical conditions for an ML classifier to exhibit error probabilities that vanish exponentially, $\sim \exp\left(-n\,I + o(n)\right)$, where $n$ is the sample size and $I$ is the error exponent.
In other words, the classification error probability's convergence to zero, and its rate, can be computed on a portion of the dataset available for training.
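A worked toy example of this exponential decay, using our own Gaussian mean-testing setup: the normalized log error probability approaches the Chernoff exponent $I = \mu^2/2$.

```python
# For testing N(-mu, 1) vs N(+mu, 1) with equal priors, the optimal test
# thresholds the sample mean at 0, so P_err = Phi(-mu * sqrt(n)) exactly,
# and -log(P_err)/n tends to the error exponent I = mu^2 / 2.
import numpy as np
from scipy.stats import norm

mu = 1.0
for n in (4, 16, 64):
    p_err = norm.cdf(-mu * np.sqrt(n))   # exact misclassification probability
    print(f"n={n:3d}  P_err={p_err:.2e}  -log(P_err)/n={-np.log(p_err)/n:.3f}")
# The last column approaches I = 0.5 as n grows.
```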
arXiv Detail & Related papers (2022-07-22T08:30:10Z)
- (Nearly) Optimal Private Linear Regression via Adaptive Clipping [22.639650869444395]
We study the problem of differentially private linear regression where each data point is sampled from a fixed sub-Gaussian style distribution.
We propose and analyze a one-pass mini-batch gradient descent method (DP-AMBSSGD) where points in each iteration are sampled without replacement.
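For intuition, here is a minimal sketch of the two core DP-SGD ingredients such methods build on, per-example gradient clipping and Gaussian noise; the fixed clip norm, noise scale, and all names are our assumptions, and the paper's adaptive clipping is not reproduced.

```python
# Mini-batch gradient descent for linear regression with per-example clipping
# and Gaussian noise (fixed clip norm; illustrative constants only).
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
clip, sigma, lr, batch = 1.0, 0.8, 0.1, 50
for step in range(200):
    idx = rng.choice(1000, batch, replace=False)     # sampled without replacement
    g = X[idx] * (X[idx] @ w - y[idx])[:, None]      # per-example gradients
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip)            # clip each example to norm <= clip
    noise = sigma * clip * rng.normal(size=5)        # Gaussian mechanism
    w -= lr * (g.sum(axis=0) + noise) / batch

print("error in w:", np.round(w - w_true, 2))
```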
arXiv Detail & Related papers (2022-07-11T08:04:46Z)
- T-Cal: An optimal test for the calibration of predictive models [49.11538724574202]
We consider detecting mis-calibration of predictive models using a finite validation dataset as a hypothesis testing problem.
Detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions.
We propose T-Cal, a minimax test for calibration based on a de-biased plug-in estimator of the $\ell_2$-Expected Calibration Error (ECE).
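For reference, a plain binned plug-in estimator of the $\ell_p$ ECE looks like the following; unlike T-Cal's estimator it is not de-biased, and the binning scheme and toy data are our choices.

```python
# Plain (biased) binned plug-in l_p ECE for a binary correctness signal.
import numpy as np

def plug_in_ece(conf, correct, bins=15, p=2):
    # Average over bins of |accuracy - confidence|^p, weighted by bin mass.
    edges = np.linspace(0, 1, bins + 1)
    which = np.clip(np.digitize(conf, edges) - 1, 0, bins - 1)
    ece = 0.0
    for b in range(bins):
        m = which == b
        if m.any():
            ece += m.mean() * abs(correct[m].mean() - conf[m].mean()) ** p
    return ece ** (1 / p)

rng = np.random.default_rng(6)
conf = rng.uniform(0.5, 1.0, 10000)
correct = (rng.uniform(size=10000) < conf**1.5).astype(float)  # overconfident model
print(f"ECE ~ {plug_in_ece(conf, correct):.3f}")
```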
arXiv Detail & Related papers (2022-03-03T16:58:54Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression [68.66245730450915]
We develop an improved method for debiasing predictions and estimating frequentist uncertainty for practical datasets.
Our main contribution is SLOE, an estimator of the signal strength with convergence guarantees that reduces the computation time of estimation and inference by orders of magnitude.
arXiv Detail & Related papers (2021-03-23T17:48:56Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
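A hedged sketch of that mechanism: on inputs flagged as unjustifiably overconfident, mix the predictive distribution toward the label prior, which raises its entropy. The flagging rule and mixing weight below are ours, for illustration only.

```python
# Raise predictive entropy toward the label prior on flagged inputs.
import numpy as np

def temper_toward_prior(probs, prior, overconfident_mask, lam=0.5):
    # Convex mixing with the prior increases entropy on the flagged rows.
    out = probs.copy()
    out[overconfident_mask] = (1 - lam) * probs[overconfident_mask] + lam * prior
    return out

probs = np.array([[0.98, 0.01, 0.01], [0.40, 0.35, 0.25]])
prior = np.array([1/3, 1/3, 1/3])
flagged = np.array([True, False])   # e.g., sparse regions of feature space
print(temper_toward_prior(probs, prior, flagged))
```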
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
arXiv Detail & Related papers (2020-09-07T13:01:45Z)