Uncertainty Propagation through Trained Deep Neural Networks Using Factor Graphs
- URL: http://arxiv.org/abs/2312.05946v1
- Date: Sun, 10 Dec 2023 17:26:27 GMT
- Title: Uncertainty Propagation through Trained Deep Neural Networks Using Factor Graphs
- Authors: Angel Daruna, Yunye Gong, Abhinav Rajvanshi, Han-Pang Chiu, Yi Yao
- Abstract summary: Uncertainty propagation seeks to estimate aleatoric uncertainty by propagating input uncertainties to network predictions.
Motivated by the complex information flows within deep neural networks, we developed a novel approach by posing uncertainty propagation as a non-linear optimization problem using factor graphs.
- Score: 4.704825771757308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive uncertainty estimation remains a challenging problem precluding
the use of deep neural networks as subsystems within safety-critical
applications. Aleatoric uncertainty is a component of predictive uncertainty
that cannot be reduced through model improvements. Uncertainty propagation
seeks to estimate aleatoric uncertainty by propagating input uncertainties to
network predictions. Existing uncertainty propagation techniques use one-way
information flows, propagating uncertainties layer-by-layer or across the
entire neural network while relying either on sampling or analytical techniques
for propagation. Motivated by the complex information flows within deep neural
networks (e.g. skip connections), we developed and evaluated a novel approach
by posing uncertainty propagation as a non-linear optimization problem using
factor graphs. We observed statistically significant improvements in
performance over prior work when using factor graphs across most of our
experiments that included three datasets and two neural network architectures.
Our implementation balances the benefits of sampling and analytical propagation
techniques, which, we believe, is a key factor in achieving these performance
improvements.
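The abstract contrasts two one-way propagation styles that prior work relies on: analytic techniques that push distribution moments through each layer in closed form, and sampling techniques that draw inputs and push each sample through the network. This is not the paper's factor-graph formulation, but a minimal stdlib-only sketch of that contrast for a 1-D Gaussian input passed through a single linear layer (the function names are illustrative, not from the paper):

```python
import random
import statistics

def propagate_linear(mean_x, var_x, w, b):
    """Analytic moment propagation through y = w*x + b.

    For Gaussian input: mean_y = w*mean_x + b, var_y = w^2 * var_x.
    """
    return w * mean_x + b, (w ** 2) * var_x

def propagate_mc(mean_x, var_x, w, b, n=100_000, seed=0):
    """Sampling-based propagation: draw inputs, transform each sample,
    then estimate the output moments empirically."""
    rng = random.Random(seed)
    ys = [w * rng.gauss(mean_x, var_x ** 0.5) + b for _ in range(n)]
    return statistics.fmean(ys), statistics.variance(ys)

mean_a, var_a = propagate_linear(1.0, 0.25, w=2.0, b=0.5)  # exact: (2.5, 1.0)
mean_s, var_s = propagate_mc(1.0, 0.25, w=2.0, b=0.5)      # close to exact
```

For a linear layer the analytic result is exact and cheap, while sampling only converges at rate 1/sqrt(n); for nonlinear activations the analytic route requires approximations, which is the trade-off the paper's factor-graph approach aims to balance.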
Related papers
- An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
arXiv Detail & Related papers (2024-03-24T14:08:24Z) - Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z) - Cycle Consistency-based Uncertainty Quantification of Neural Networks in Inverse Imaging Problems [10.992084413881592]
Uncertainty estimation is critical for numerous applications of deep neural networks.
We show an uncertainty quantification approach for deep neural networks used in inverse problems based on cycle consistency.
arXiv Detail & Related papers (2023-05-22T09:23:18Z) - Variational Voxel Pseudo Image Tracking [127.46919555100543]
Uncertainty estimation is an important task for critical problems, such as robotics and autonomous driving.
We propose a Variational Neural Network-based version of a Voxel Pseudo Image Tracking (VPIT) method for 3D Single Object Tracking.
arXiv Detail & Related papers (2023-02-12T13:34:50Z) - Learning Uncertainty with Artificial Neural Networks for Improved Predictive Process Monitoring [0.114219428942199]
We distinguish two types of learnable uncertainty: model uncertainty due to a lack of training data and noise-induced observational uncertainty.
Our contribution is to apply these uncertainty concepts to predictive process monitoring tasks to train uncertainty-based models to predict the remaining time and outcomes.
arXiv Detail & Related papers (2022-06-13T17:05:27Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Deep Bayesian Gaussian Processes for Uncertainty Estimation in Electronic Health Records [30.65770563934045]
We merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.
We show that our method is less susceptible to making overconfident predictions, especially for the minority class in imbalanced datasets.
arXiv Detail & Related papers (2020-03-23T10:36:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.