Combining Statistical Depth and Fermat Distance for Uncertainty Quantification
- URL: http://arxiv.org/abs/2404.08476v1
- Date: Fri, 12 Apr 2024 13:54:21 GMT
- Title: Combining Statistical Depth and Fermat Distance for Uncertainty Quantification
- Authors: Hai-Vy Nguyen, Fabrice Gamboa, Reda Chhaibi, Sixin Zhang, Serge Gratton, Thierry Giaccone
- Abstract summary: We measure the out-of-domain uncertainty in the predictions of neural networks using a statistical notion called ``Lens Depth'' (LD) combined with Fermat Distance.
The proposed method gives excellent qualitative results on toy datasets and can give competitive or better uncertainty estimation on standard deep learning datasets.
- Score: 3.3975558777609915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We measure the out-of-domain uncertainty in the predictions of neural networks using a statistical notion called ``Lens Depth'' (LD) combined with Fermat Distance, which is able to capture precisely the ``depth'' of a point with respect to a distribution in feature space, without any assumption about the form of the distribution. Our method has no trainable parameters. It is applicable to any classification model, as it is applied directly in feature space at test time and does not intervene in the training process. As such, it does not impact the performance of the original model. The proposed method gives excellent qualitative results on toy datasets and can give competitive or better uncertainty estimation on standard deep learning datasets compared to strong baseline methods.
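As a rough illustration of the lens-depth idea (not the authors' implementation), the empirical lens depth of a query point is the fraction of sample pairs (x_i, x_j) whose ``lens'' (the intersection of the two balls of radius d(x_i, x_j) centred at x_i and x_j) contains the query. The paper combines this with a sample-based Fermat distance; the sketch below substitutes plain Euclidean distance and a toy Gaussian cloud, so it only conveys the qualitative behaviour:

```python
import numpy as np

def lens_depth(query, sample, metric=None):
    """Empirical lens depth of `query` w.r.t. the points in `sample`.

    Counts the fraction of pairs (x_i, x_j) whose lens
    (intersection of B(x_i, d(x_i, x_j)) and B(x_j, d(x_i, x_j)))
    contains the query point.
    """
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)
    n = len(sample)
    inside, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d_ij = metric(sample[i], sample[j])
            if max(metric(query, sample[i]), metric(query, sample[j])) <= d_ij:
                inside += 1
            total += 1
    return inside / total

rng = np.random.default_rng(0)
cloud = rng.normal(size=(60, 2))            # in-distribution sample
print(lens_depth(np.zeros(2), cloud))       # high depth for a central point
print(lens_depth(np.full(2, 10.0), cloud))  # near-zero depth far from the data
```

Points deep inside the distribution score high, far-away points score near zero, which is what makes the depth usable as an out-of-domain uncertainty signal.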
Related papers
- Probabilistic Contrastive Learning with Explicit Concentration on the Hypersphere [3.572499139455308]
This paper introduces a new perspective on incorporating uncertainty into contrastive learning by embedding representations within a spherical space.
We leverage the concentration parameter, kappa, as a direct, interpretable measure to quantify uncertainty explicitly.
arXiv Detail & Related papers (2024-05-26T07:08:13Z)
- Uncertainty Measurement of Deep Learning System based on the Convex Hull of Training Sets [0.13265175299265505]
We propose To-hull Uncertainty and Closure Ratio, which measure the uncertainty of a trained model based on the convex hull of its training data.
They observe the positional relation between the convex hull of the learned data and an unseen sample and infer how far the sample extrapolates from the convex hull.
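The To-hull Uncertainty and Closure Ratio measures themselves are not reproduced here; the sketch below only shows the basic inside-the-hull test that such measures refine, using scipy's Delaunay triangulation (practical only in low dimensions):

```python
import numpy as np
from scipy.spatial import Delaunay

def in_hull(points, query):
    """True if `query` lies inside the convex hull of `points`.

    Delaunay.find_simplex returns -1 for points outside the
    triangulation, i.e. outside the convex hull.
    """
    return Delaunay(points).find_simplex(query) >= 0

rng = np.random.default_rng(0)
train = rng.uniform(-1, 1, size=(200, 2))
print(in_hull(train, np.array([0.0, 0.0])))  # interior (interpolated) point
print(in_hull(train, np.array([5.0, 5.0])))  # exterior (extrapolated) point
```

A sample outside the hull forces the model to extrapolate, which is where uncertainty estimates matter most.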
arXiv Detail & Related papers (2024-05-25T06:25:24Z)
- Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation [11.66240919177989]
We propose a new method, Generative Posterior Networks (GPNs), that uses unlabeled data to estimate uncertainty in high-dimensional problems.
A GPN is a generative model that, given a prior distribution over functions, approximates the posterior distribution directly by regularizing the network towards samples from the prior.
arXiv Detail & Related papers (2023-12-29T00:43:41Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
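The $\mathcal{I}$-EDL specifics (FIM-based reweighting of the loss terms) are beyond a short sketch, but the standard evidential parameterization it builds on is simple: the network outputs nonnegative per-class evidence, the Dirichlet concentrations are the evidence plus one, and the vacuity uncertainty is the number of classes over the total concentration. A minimal version, with illustrative evidence vectors:

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Vacuity uncertainty from per-class evidence (standard EDL).

    evidence: nonnegative array of shape (K,) from the network head.
    Returns (expected class probabilities, uncertainty in (0, 1]).
    """
    alpha = evidence + 1.0  # Dirichlet concentration parameters
    s = alpha.sum()
    probs = alpha / s       # expected class probabilities
    u = len(alpha) / s      # vacuity: high when total evidence is low
    return probs, u

_, u_confident = evidential_uncertainty(np.array([90.0, 5.0, 5.0]))
_, u_vacuous = evidential_uncertainty(np.array([0.0, 0.0, 0.0]))
print(u_confident, u_vacuous)  # low vs. maximal uncertainty
```

With zero evidence the Dirichlet is uniform and the uncertainty is maximal (1.0); large evidence drives it toward zero.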
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
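A minimal sketch of the underlying Nadaraya-Watson estimator (not NUQ's full method): kernel-weight the training labels around a query point to estimate p(y | x), then score uncertainty by the entropy of that distribution. The Gaussian kernel, bandwidth, and two-cluster toy data here are illustrative choices:

```python
import numpy as np

def nw_label_distribution(query, X, y, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x) with a Gaussian kernel."""
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * bandwidth ** 2))
    probs = np.array([w[y == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

p_near = nw_label_distribution(np.array([-2.0, -2.0]), X, y, 2)
p_mid = nw_label_distribution(np.array([0.0, 0.0]), X, y, 2)
ent = lambda p: -np.sum(p * np.log(p + 1e-12))  # entropy as uncertainty
print(ent(p_near), ent(p_mid))  # low inside a cluster, higher between clusters
```

Near a cluster the estimate is nearly one-hot and the entropy is close to zero; between the clusters the labels mix and the entropy rises.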
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale up training with a novel loss function and centroid-updating scheme while matching the accuracy of softmax models.
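The core certainty score in this line of work is an RBF kernel value to the nearest class centroid; thresholding it rejects out-of-distribution inputs in one forward pass. The sketch below is only in that spirit, with hypothetical centroids and kernel width rather than learned ones:

```python
import numpy as np

def centroid_certainty(feature, centroids, sigma=1.0):
    """RBF kernel value to the closest class centroid.

    High value -> close to some class centroid (in-distribution);
    a threshold on it rejects OOD points in a single forward pass.
    """
    d2 = np.sum((centroids - feature) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)).max()

centroids = np.array([[0.0, 0.0], [4.0, 4.0]])  # illustrative per-class centroids
print(centroid_certainty(np.array([0.1, 0.0]), centroids))    # near a centroid
print(centroid_certainty(np.array([10.0, -10.0]), centroids)) # far: near zero
```

Because the score needs only feature-space distances, no sampling or ensembling is required at test time.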
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
- Fine-grained Uncertainty Modeling in Neural Networks [0.0]
We present a novel method to detect out-of-distribution points in a Neural Network.
Our method corrects overconfident NN decisions, detects outlier points, and learns to say ``I don't know'' when uncertain about a critical point between the top two predictions.
As a positive side effect, our method helps to prevent adversarial attacks without requiring any additional training.
arXiv Detail & Related papers (2020-02-11T05:06:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.