Uncertainty of Feed Forward Neural Networks Recognizing Quantum
Contextuality
- URL: http://arxiv.org/abs/2212.13564v1
- Date: Tue, 27 Dec 2022 17:33:46 GMT
- Title: Uncertainty of Feed Forward Neural Networks Recognizing Quantum
Contextuality
- Authors: Jan Wasilewski, Tomasz Paterek, Karol Horodecki
- Abstract summary: A powerful technique for estimating both the accuracy and the uncertainty is provided by Bayesian neural networks (BNNs).
We show that BNNs provide reliable uncertainty estimates even after training with biased data sets.
- Score: 2.5665227681407243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The usual figure of merit characterizing the performance of neural networks
applied to problems in the quantum domain is their accuracy, i.e. the
probability of a correct answer on a previously unseen input. Here we complement
this parameter with the uncertainty of the prediction, characterizing the
degree of confidence in the answer. A powerful technique for estimating both
the accuracy and the uncertainty is provided by Bayesian neural networks
(BNNs). We first give simple illustrative examples of the advantages brought by
BNNs, among which we highlight their ability to provide reliable uncertainty
estimates even after training with biased data sets. We then apply BNNs to the
problem of recognizing quantum contextuality and show that the uncertainty
itself is an independent parameter identifying the chance of misclassification
of contextuality.
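To make the central quantity concrete, here is a minimal Python sketch of extracting both a prediction and its uncertainty from an approximately Bayesian network. Monte Carlo dropout stands in for the full Bayesian posterior used in the paper; the network shape, input dimension, and sample count are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Hypothetical binary classifier (contextual vs non-contextual); Monte Carlo
# dropout stands in for a full Bayesian posterior over the weights.
net = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),
)

def predict_with_uncertainty(x, n_samples=100):
    net.train()  # keep dropout active at prediction time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(x), dim=-1)
                             for _ in range(n_samples)])
    mean = probs.mean(dim=0)  # averaged predictive distribution
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy      # prediction plus its uncertainty

x = torch.randn(5, 10)        # stand-in features for a set of correlations
mean, unc = predict_with_uncertainty(x)
```

High entropy here flags inputs whose classification should not be trusted, which is exactly the role the abstract assigns to the uncertainty parameter.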
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
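A toy illustration of the starting point above: a prior over network parameters induces a distribution over functions, which function-space methods work with directly. The architecture and prior scales below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-3, 3, 200)  # evaluation inputs (illustrative)

def sample_prior_function(width=50, sigma_w=1.0):
    """One draw from the function-space prior induced by Gaussian weights
    on a single-hidden-layer tanh network."""
    w1 = rng.normal(0, sigma_w, size=(1, width))
    b1 = rng.normal(0, sigma_w, size=width)
    w2 = rng.normal(0, sigma_w / np.sqrt(width), size=(width, 1))
    return (np.tanh(xs[:, None] @ w1 + b1) @ w2).ravel()

draws = np.stack([sample_prior_function() for _ in range(10)])
# Pointwise prior mean and spread of the induced functions.
print(draws.mean(axis=0)[:3], draws.std(axis=0)[:3])
```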
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is a BNN based on Markov chain Monte Carlo approximation of the posterior distribution of the network parameters.
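A minimal sketch of that MCMC ingredient: random-walk Metropolis over the flattened weights of a tiny regression network, assuming a Gaussian prior and Gaussian observation noise. The data and network size are toy stand-ins, not the creep-rupture setup.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 1D regression set, a stand-in for the creep-rupture data.
X = rng.uniform(-2, 2, size=40)
y = np.sin(X) + rng.normal(0, 0.1, size=40)

H = 8  # hidden units; all weights live in one flat vector theta

def f(theta, x):
    w1, b1, w2, b2 = np.split(theta, [H, 2 * H, 3 * H])
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2[0]

def log_post(theta, noise=0.1, prior=1.0):
    # Gaussian likelihood plus Gaussian prior over all weights (assumed).
    return (-0.5 * np.sum((y - f(theta, X)) ** 2) / noise**2
            - 0.5 * np.sum(theta**2) / prior**2)

# Random-walk Metropolis over the flattened weight vector.
theta = rng.normal(0, 0.5, size=3 * H + 1)
lp, samples = log_post(theta), []
for step in range(20000):
    prop = theta + rng.normal(0, 0.02, size=theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if step % 100 == 0:
        samples.append(theta.copy())  # thinned posterior draws

# Posterior predictive mean and spread at a test input.
preds = np.array([f(t, np.array([0.5]))[0] for t in samples[50:]])
print(preds.mean(), preds.std())
```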
arXiv Detail & Related papers (2023-11-04T19:40:16Z)
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types: input, system, and output.
We then discuss the challenges of uncertainty estimation in NLP and outline potential future directions.
arXiv Detail & Related papers (2023-06-05T06:46:53Z)
- Uncertainty Propagation in Node Classification [9.03984964980373]
We focus on measuring uncertainty of graph neural networks (GNNs) for the task of node classification.
We propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework.
We present an uncertainty-oriented loss for node classification that allows the GNNs to explicitly integrate predictive uncertainty into the learning procedure.
arXiv Detail & Related papers (2023-04-03T12:18:23Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
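The standard decomposition behind the epistemic/aleatoric split is: total predictive entropy = expected entropy of individual posterior draws (aleatoric) + mutual information between prediction and parameters (epistemic). A minimal sketch, assuming class probabilities sampled from some posterior:

```python
import numpy as np

def decompose_uncertainty(probs):
    """probs: (n_samples, n_classes) class probabilities from posterior draws.
    Returns total, aleatoric, and epistemic uncertainty (in nats)."""
    eps = 1e-12
    mean = probs.mean(axis=0)
    total = -np.sum(mean * np.log(mean + eps))
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return total, aleatoric, total - aleatoric

# Confident but disagreeing draws: low aleatoric, high epistemic uncertainty.
draws = np.array([[0.98, 0.02], [0.03, 0.97], [0.95, 0.05], [0.02, 0.98]])
print(decompose_uncertainty(draws))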
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
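For reference, the quantities under critique are the closed-form moments of the Normal-Inverse-Gamma head used in Deep Evidential Regression (Amini et al., 2020); a minimal sketch:

```python
import numpy as np

def evidential_moments(gamma, nu, alpha, beta):
    """Predictive moments of a Normal-Inverse-Gamma output head.
    Requires alpha > 1."""
    prediction = gamma                       # E[mu]
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]
    return prediction, aleatoric, epistemic

print(evidential_moments(gamma=0.3, nu=2.0, alpha=3.0, beta=1.5))
```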
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
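The estimate in question has a simple closed form: p(y=c | x) = sum_i K(x, x_i) [y_i = c] / sum_i K(x, x_i). A minimal sketch with a Gaussian kernel; the bandwidth and data are illustrative, and this omits NUQ's additional analysis:

```python
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, h=1.0):
    """Nadaraya-Watson estimate of p(y | x) with a Gaussian kernel."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    k = np.exp(-d2 / (2 * h**2))
    probs = np.array([k[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

X_train = np.random.default_rng(2).normal(size=(100, 5))
y_train = (X_train[:, 0] > 0).astype(int)
p = nw_label_distribution(X_train[0], X_train, y_train, n_classes=2)
print(p, -np.sum(p * np.log(p + 1e-12)))  # distribution and its entropy
```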
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
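One way to read this mechanism is as an interpolation between the model's output and the label prior, gated by a per-example confidence penalty. The sketch below illustrates that reading only; it is not the paper's exact procedure, and the source of the penalty score is assumed:

```python
import numpy as np

def temper_towards_prior(probs, confidence_penalty, label_prior):
    """Raise prediction entropy towards the label prior where the model
    is flagged as unjustifiably overconfident; penalty in [0, 1]."""
    lam = confidence_penalty[:, None]
    return (1.0 - lam) * probs + lam * label_prior[None, :]

probs = np.array([[0.99, 0.01], [0.60, 0.40]])
penalty = np.array([0.8, 0.1])   # e.g. from a density/OOD score (assumed)
prior = np.array([0.5, 0.5])     # empirical label distribution
print(temper_towards_prior(probs, penalty, prior))
```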
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning [25.8227937350516]
We show that uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.
We propose a probabilistic generalization of the popular sample-efficient non-parametric kNN approach.
Our approach enables deep kNN to accurately quantify underlying uncertainties in its prediction.
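A minimal sketch of a distance-weighted kNN class posterior, whose entropy serves as the uncertainty estimate. This illustrates the general deep-kNN idea rather than the paper's exact probabilistic generalization:

```python
import numpy as np

def knn_posterior(x, X_train, y_train, n_classes, k=10, h=1.0):
    """Soft class votes from the k nearest neighbours; the entropy of
    the result quantifies the prediction's uncertainty."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] ** 2 / (2 * h**2)) + 1e-12
    probs = np.bincount(y_train[idx], weights=w, minlength=n_classes)
    return probs / probs.sum()

rng = np.random.default_rng(3)
X_tr = rng.normal(size=(200, 4))
y_tr = (X_tr[:, 0] > 0).astype(int)
p = knn_posterior(np.zeros(4), X_tr, y_tr, n_classes=2)  # near the boundary
print(p, -np.sum(p * np.log(p + 1e-12)))                 # high entropy
```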
arXiv Detail & Related papers (2020-07-18T21:36:31Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-16T08:54:42Z)
- How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks [19.648814035399013]
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks.
We propose a new framework that allows converting any explanation method for neural networks into an explanation method for Bayesian neural networks.
We demonstrate the effectiveness and usefulness of our approach extensively in various experiments.
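The core move can be sketched as: sample networks from (an approximation of) the posterior, run the base explanation method on each draw, and summarize the resulting distribution over explanations. Below, MC dropout is the assumed posterior approximation and input-gradient saliency the assumed base explainer:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                    nn.Dropout(p=0.2), nn.Linear(32, 2))

def saliency(x):
    """Input-gradient attribution for the highest-scoring class."""
    x = x.clone().requires_grad_(True)
    net(x).max().backward()
    return x.grad.detach()

# Distribution over explanations: keep dropout on so each call sees a
# different weight sample, then summarize mean and spread per feature.
net.train()
x = torch.randn(10)
maps = torch.stack([saliency(x) for _ in range(100)])
mean_attr, std_attr = maps.mean(dim=0), maps.std(dim=0)
```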
arXiv Detail & Related papers (2020-06-16T08:54:42Z)