Uncertainty Aware Neural Network from Similarity and Sensitivity
- URL: http://arxiv.org/abs/2304.14925v1
- Date: Thu, 27 Apr 2023 02:05:31 GMT
- Title: Uncertainty Aware Neural Network from Similarity and Sensitivity
- Authors: H M Dipu Kabir, Subrota Kumar Mondal, Sadia Khanam, Abbas Khosravi,
Shafin Rahman, Mohammad Reza Chalak Qazani, Roohallah Alizadehsani, Houshyar
Asadi, Shady Mohamed, Saeid Nahavandi, U Rajendra Acharya
- Abstract summary: We present a neural network training method that considers similar samples with sensitivity awareness.
We construct initial uncertainty bounds (UB) by considering the distribution of sensitivity-aware similar samples.
Because following all the steps for finding the UB of each sample requires substantial computation and memory access, we train a UB computation NN.
- Score: 19.688986566942
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Researchers have proposed several approaches for neural network (NN) based uncertainty quantification (UQ). However, most of these approaches are developed under strong assumptions. Uncertainty quantification algorithms often perform poorly in certain input domains, and the reason for the poor performance remains unknown. Therefore, in this paper we present an NN training method that considers similar samples with sensitivity awareness. In the proposed NN training method for UQ, we first train a shallow NN for the point prediction. Then, we compute the absolute differences between predictions and targets and train another NN to predict those absolute differences, or absolute errors. Domains with high average absolute errors indicate high uncertainty. In the next step, we take each sample in the training set one by one and compute both the prediction and error sensitivities. Then we select similar samples, taking sensitivity into consideration, and save the indexes of those similar samples. The range considered for an input parameter becomes narrower when the output is highly sensitive to that parameter. After that, we construct initial uncertainty bounds (UB) by considering the distribution of the sensitivity-aware similar samples. Prediction intervals (PIs) from the initial uncertainty bounds are larger and cover more samples than required. Therefore, we train a bound-correction NN. Because following all of these steps to find the UB for each sample requires substantial computation and memory access, we train a UB computation NN. The UB computation NN takes an input sample and provides an uncertainty bound; it is the final product of the proposed approach. Scripts for the proposed method are available in the following GitHub repository: github.com/dipuk0506/UQ
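To make the pipeline above concrete, here is a minimal sketch of its first stages, assuming a scikit-learn-style MLPRegressor and a plain nearest-neighbour stand-in for the sensitivity-aware similarity search; the function names and hyperparameters are illustrative and are not taken from the authors' repository.

```python
# Hypothetical sketch of the first stages of the pipeline described above:
# a shallow NN for point prediction, a second NN trained on its absolute
# errors, and a simplified similarity-based initial uncertainty bound.
# Names and hyperparameters are illustrative, not from github.com/dipuk0506/UQ.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_point_and_error_nets(X_train, y_train):
    # 1) Shallow NN for the point prediction.
    point_nn = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
    point_nn.fit(X_train, y_train)

    # 2) A second NN predicts the absolute error of the first; input regions
    #    with high predicted error are treated as high-uncertainty domains.
    abs_err = np.abs(y_train - point_nn.predict(X_train))
    error_nn = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
    error_nn.fit(X_train, abs_err)
    return point_nn, error_nn

def initial_uncertainty_bound(x, X_train, y_train, k=30, coverage=0.95):
    # Simplified stand-in for the sensitivity-aware similarity search: take
    # the k nearest training samples (no per-parameter range narrowing) and
    # form an initial bound from the spread of their targets.
    dists = np.linalg.norm(X_train - x, axis=1)
    neighbours = np.argsort(dists)[:k]
    lower = np.quantile(y_train[neighbours], (1 - coverage) / 2)
    upper = np.quantile(y_train[neighbours], 1 - (1 - coverage) / 2)
    return lower, upper
```

In the paper, initial bounds of this kind are subsequently tightened by a bound-correction NN and distilled into a single UB computation NN so that a bound can be produced in one forward pass; the sketch stops before those steps.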
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Single-shot Bayesian approximation for neural networks [0.0]
Deep neural networks (NNs) are known for their high prediction performance.
However, NNs are prone to yielding unreliable predictions when they encounter completely new situations, without indicating their uncertainty.
We present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs.
arXiv Detail & Related papers (2023-08-24T13:40:36Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - Improved uncertainty quantification for neural networks with Bayesian
last layer [0.0]
Uncertainty quantification is an important task in machine learning.
We present a reformulation of the log-marginal likelihood of an NN with a Bayesian last layer (BLL), which allows for efficient training using backpropagation.
arXiv Detail & Related papers (2023-02-21T20:23:56Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Censored Quantile Regression Neural Networks [24.118509578363593]
This paper considers doing quantile regression on censored data using neural networks (NNs).
We show how an algorithm popular in linear models can be applied to NNs.
Our major contribution is a novel algorithm that simultaneously optimises a grid of quantiles output by a single NN.
arXiv Detail & Related papers (2022-05-26T17:10:28Z) - A Simple Approach to Improve Single-Model Deep Uncertainty via
Distance-Awareness [33.09831377640498]
We study approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection.
arXiv Detail & Related papers (2022-05-01T05:46:13Z) - On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution (a toy sketch of this estimator appears after this list).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Simple and Principled Uncertainty Estimation with Deterministic Deep
Learning via Distance Awareness [24.473250414880454]
We study principled approaches to high-quality uncertainty estimation that require only a single deep neural network (DNN).
By formalizing the uncertainty quantification as a minimax learning problem, we first identify input distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data in the input space.
We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
arXiv Detail & Related papers (2020-06-17T19:18:22Z) - Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
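As a toy illustration of the Nadaraya-Watson estimate referenced in the NUQ entry above, the following sketch estimates the conditional label distribution from kernel-weighted training labels and uses its entropy as an uncertainty score; the kernel, bandwidth, and function name are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch, not the NUQ authors' code: a Gaussian-kernel Nadaraya-Watson
# estimate of the conditional label distribution, with the entropy of that
# estimate used as a simple uncertainty score.
import numpy as np

def nadaraya_watson_uncertainty(x, X_train, y_train, n_classes, bandwidth=1.0):
    # Kernel weights between the query point and every training point.
    sq_dist = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-sq_dist / (2.0 * bandwidth ** 2))

    # Kernel-weighted class frequencies approximate p(y | x).
    p = np.array([w[y_train == c].sum() for c in range(n_classes)])
    if p.sum() == 0.0:
        # Query is far from all training data: fall back to a uniform,
        # maximally uncertain label distribution.
        return np.full(n_classes, 1.0 / n_classes), np.log(n_classes)
    p = p / p.sum()

    # Entropy of the estimated label distribution as the uncertainty score.
    entropy = -np.sum(p * np.log(p + 1e-12))
    return p, entropy
```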
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences.