Reliable Probability Intervals For Classification Using Inductive Venn
Predictors Based on Distance Learning
- URL: http://arxiv.org/abs/2110.03127v1
- Date: Thu, 7 Oct 2021 00:51:43 GMT
- Title: Reliable Probability Intervals For Classification Using Inductive Venn
Predictors Based on Distance Learning
- Authors: Dimitrios Boursinos and Xenofon Koutsoukos
- Abstract summary: We use the Inductive Venn Predictors framework for computing probability intervals regarding the correctness of each prediction in real time.
We propose taxonomies based on distance metric learning to compute informative probability intervals in applications involving high-dimensional inputs.
- Score: 2.66512000865131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks are frequently used by autonomous systems for their
ability to learn complex, non-linear data patterns and make accurate
predictions in dynamic environments. However, their use as black boxes
introduces risks as the confidence in each prediction is unknown. Different
frameworks have been proposed to compute accurate confidence measures along
with the predictions, but they introduce limitations such as execution-time
overhead or an inability to handle high-dimensional
data. In this paper, we use the Inductive Venn Predictors framework for
computing probability intervals regarding the correctness of each prediction in
real time. We propose taxonomies based on distance metric learning to compute
informative probability intervals in applications involving high-dimensional
inputs. Empirical evaluation on image classification and botnet attack
detection in Internet-of-Things (IoT) applications demonstrates improved
accuracy and calibration. The proposed method is computationally efficient
and can therefore be used in real time.
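A minimal sketch of the method's core step, assuming class centroids computed from a learned embedding model; the nearest-centroid taxonomy, the function names, and the label-selection rule below are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np

def venn_predict(emb_cal, y_cal, emb_test, centroids):
    """Sketch of an Inductive Venn Predictor with a nearest-centroid taxonomy.

    emb_cal:   (n, d) calibration-set embeddings
    y_cal:     (n,)   integer calibration labels in {0, ..., k-1}
    emb_test:  (d,)   embedding of a single test input
    centroids: (k, d) one centroid per taxonomy category
    Returns (predicted_label, lower, upper) probability interval.
    """
    k = centroids.shape[0]
    # Taxonomy: each example falls in the category of its nearest centroid.
    cat_cal = np.argmin(
        np.linalg.norm(emb_cal[:, None, :] - centroids[None, :, :], axis=2),
        axis=1)
    cat_test = int(np.argmin(np.linalg.norm(centroids - emb_test, axis=1)))

    # For every hypothetical label of the test input, place it in the test
    # point's category and record the empirical label frequencies there.
    in_cat = y_cal[cat_cal == cat_test]
    probs = np.zeros((k, k))  # row = hypothetical label, col = label frequency
    for y_hyp in range(k):
        counts = np.bincount(in_cat, minlength=k).astype(float)
        counts[y_hyp] += 1.0  # the test example under the hypothetical label
        probs[y_hyp] = counts / counts.sum()

    # The rows form the multiprobability prediction; the per-label interval
    # is the column-wise min/max across hypothetical labels.
    lower, upper = probs.min(axis=0), probs.max(axis=0)
    best = int(np.argmax(lower))  # one simple selection rule
    return best, lower[best], upper[best]
```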
Related papers
- Out of Distribution Detection via Domain-Informed Gaussian Process State
Space Models [22.24457254575906]
In order for robots to safely navigate in unseen scenarios, it is important to accurately detect out-of-training-distribution (OoD) situations online.
We propose (i) a novel approach to embed existing domain knowledge in the kernel and (ii) an OoD online runtime monitor based on receding-horizon predictions.
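The runtime-monitor half of this idea admits a short sketch, assuming the GP state-space model exposes predictive means and standard deviations over a receding horizon; the interface and threshold below are assumptions, not the paper's specification:

```python
import numpy as np

def ood_flag(pred_mean, pred_std, observed, z_thresh=3.0):
    """Hypothetical receding-horizon OoD monitor.

    pred_mean, pred_std: (H, d) GP predictive moments over an H-step horizon
    observed:            (H, d) states actually visited over that horizon
    Flags OoD when any observation leaves the z_thresh-sigma predictive band.
    """
    z = np.abs(observed - pred_mean) / (pred_std + 1e-9)
    return bool((z > z_thresh).any())
```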
arXiv Detail & Related papers (2023-09-13T01:02:42Z)
- Uncertainty Quantification in Deep Neural Networks through Statistical Inference on Latent Space [0.0]
We develop an algorithm that exploits the latent-space representation of data points fed into the network to assess the accuracy of their prediction.
We show on a synthetic dataset that commonly used methods are mostly overconfident.
In contrast, our method can detect such out-of-distribution data points as inaccurately predicted, thus aiding in the automatic detection of outliers.
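One standard way to realize this kind of latent-space check, sketched here under the assumption of a single Gaussian fit to the training latents (not necessarily the paper's exact algorithm), is to score test points by their Mahalanobis distance from the training data:

```python
import numpy as np

def latent_outlier_scores(z_train, z_test):
    """Squared Mahalanobis distance of test latents from the training
    latent distribution; large scores suggest the network's prediction
    should not be trusted. (A per-class fit is a natural refinement.)"""
    mu = z_train.mean(axis=0)
    cov = np.cov(z_train, rowvar=False) + 1e-6 * np.eye(z_train.shape[1])
    prec = np.linalg.inv(cov)
    diff = z_test - mu
    return np.einsum('ij,jk,ik->i', diff, prec, diff)
```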
arXiv Detail & Related papers (2023-05-18T09:52:06Z)
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
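For the simplest quantity, the mean, the estimator debiases predictions on the large unlabeled set with a "rectifier" measured on the small labeled set; a sketch of that case with illustrative names:

```python
import numpy as np
from scipy.stats import norm

def ppi_mean_ci(y_lab, yhat_lab, yhat_unlab, alpha=0.1):
    """Prediction-powered confidence interval for a mean (sketch).

    y_lab:      gold labels on the small labeled set
    yhat_lab:   model predictions on that same labeled set
    yhat_unlab: model predictions on the large unlabeled set
    """
    n, N = len(y_lab), len(yhat_unlab)
    rect = y_lab - yhat_lab                  # "rectifier": the model's error
    theta = yhat_unlab.mean() + rect.mean()  # debiased point estimate
    se = np.sqrt(yhat_unlab.var(ddof=1) / N + rect.var(ddof=1) / n)
    z = norm.ppf(1 - alpha / 2)
    return theta - z * se, theta + z * se
```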
arXiv Detail & Related papers (2023-01-23T18:59:28Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Improving Prediction Confidence in Learning-Enabled Autonomous Systems [2.66512000865131]
We utilize a feedback loop between learning-enabled components used for classification and the sensors of an autonomous system in order to improve the confidence of the predictions.
We design a classifier using Inductive Conformal Prediction (ICP) based on a triplet network architecture in order to learn representations that can be used to quantify the similarity between test and training examples.
A feedback loop that queries the sensors for a new input is used to further refine the predictions and increase the classification accuracy.
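A compact sketch of the ICP step over learned embeddings, assuming a nearest-neighbor nonconformity score in the triplet network's embedding space; the names and the exact score are assumptions:

```python
import numpy as np

def nonconformity(emb_ref, y_ref, emb, label):
    """Distance from one embedding to the nearest reference embedding of
    the given class (an assumed nearest-neighbor score)."""
    return np.min(np.linalg.norm(emb_ref[y_ref == label] - emb, axis=1))

def icp_prediction_set(emb_ref, y_ref, emb_cal, y_cal, emb_test,
                       labels, epsilon=0.05):
    """Inductive Conformal Prediction over learned embeddings (sketch).
    Returns every label whose conformal p-value exceeds epsilon."""
    a_cal = np.array([nonconformity(emb_ref, y_ref, e, y)
                      for e, y in zip(emb_cal, y_cal)])
    pred_set = []
    for label in labels:
        a_test = nonconformity(emb_ref, y_ref, emb_test, label)
        p = (np.sum(a_cal >= a_test) + 1) / (len(a_cal) + 1)
        if p > epsilon:
            pred_set.append(label)
    return pred_set
```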
arXiv Detail & Related papers (2021-10-07T00:40:34Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical and computational trade-offs of different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
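As a rough post-hoc illustration of the entropy-raising idea (not the authors' training-time procedure), a confident prediction can be interpolated toward the label prior:

```python
import numpy as np

def soften(p, prior, conf_thresh=0.95, lam=0.3):
    """Pull a (possibly unjustifiably) confident predictive distribution
    toward the label prior, which raises its entropy whenever the prior
    is the less concentrated of the two."""
    p, prior = np.asarray(p, float), np.asarray(prior, float)
    if p.max() > conf_thresh:
        return (1.0 - lam) * p + lam * prior
    return p
```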
arXiv Detail & Related papers (2021-02-22T07:02:37Z)