Uncertainty in Neural Relational Inference Trajectory Reconstruction
- URL: http://arxiv.org/abs/2006.13666v2
- Date: Thu, 25 Jun 2020 10:02:34 GMT
- Title: Uncertainty in Neural Relational Inference Trajectory Reconstruction
- Authors: Vasileios Karavias, Ben Day, Pietro Liò
- Abstract summary: We extend the Factorised Neural Relational Inference model to output both a mean and a standard deviation for each component of the phase space vector which, together with an appropriate loss function, can account for uncertainty.
We show that the physical meaning of the variables is important when considering the uncertainty and demonstrate the existence of pathological local minima.
- Score: 3.4806267677524896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks used for multi-interaction trajectory reconstruction lack the
ability to estimate the uncertainty in their outputs, which would be useful to
better analyse and understand the systems they model. In this paper we extend
the Factorised Neural Relational Inference model to output both a mean and a
standard deviation for each component of the phase space vector, which together
with an appropriate loss function, can account for uncertainty. A variety of
loss functions are investigated including ideas from convexification and a
Bayesian treatment of the problem. We show that the physical meaning of the
variables is important when considering the uncertainty and demonstrate the
existence of pathological local minima that are difficult to avoid during
training.
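As a concrete, purely illustrative reading of this extension, the sketch below attaches a heteroscedastic Gaussian negative log-likelihood to a decoder that predicts a mean and a standard deviation for every component of the phase space vector. The function name, the tensor shapes, and the choice to parameterise the standard deviation through its logarithm are assumptions for the sake of the example, not details taken from the paper.

```python
# Illustrative sketch only, not the authors' code: a per-component
# Gaussian negative log-likelihood for trajectory reconstruction.
import torch

def gaussian_nll(mean, log_sigma, target):
    """Negative log-likelihood of `target` under N(mean, sigma^2).

    All tensors share a shape such as [batch, particles, timesteps, dims],
    where `dims` indexes the phase-space components (e.g. positions and
    velocities). Predicting log(sigma) keeps the standard deviation
    positive; the constant 0.5 * log(2 * pi) is dropped since it does
    not affect the gradients.
    """
    sigma = torch.exp(log_sigma)
    nll = 0.5 * ((target - mean) / sigma) ** 2 + log_sigma
    # Sum over the trajectory, average over the batch.
    return nll.sum(dim=(1, 2, 3)).mean()

# Toy usage with random tensors standing in for decoder outputs:
mean = torch.randn(8, 5, 49, 4)
log_sigma = torch.zeros(8, 5, 49, 4, requires_grad=True)
target = torch.randn(8, 5, 49, 4)
gaussian_nll(mean, log_sigma, target).backward()
```

Note that the standard-deviation term is what makes this loss landscape harder than a plain mean-squared error: the model can lower the loss at badly fit points by inflating sigma rather than improving the mean, which is one plausible route to the kind of pathological local minima the abstract warns about.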
Related papers
- Conditional Temporal Neural Processes with Covariance Loss [19.805881561847492]
We introduce a novel loss function, Covariance Loss, which is conceptually equivalent to conditional neural processes.
We conduct extensive sets of experiments on real-world datasets with state-of-the-art models.
arXiv Detail & Related papers (2025-04-01T13:51:44Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- On the ISS Property of the Gradient Flow for Single Hidden-Layer Neural Networks with Linear Activations [0.0]
We investigate the effects of overfitting on the robustness of gradient-descent training when subject to uncertainty on the gradient estimation.
We show that the general overparametrized formulation introduces a set of spurious equilibria which lie outside the set where the loss function is minimized.
arXiv Detail & Related papers (2023-05-17T02:26:34Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- Learning the Distribution of Errors in Stereo Matching for Joint Disparity and Uncertainty Estimation [8.057006406834466]
We present a new loss function for joint disparity and uncertainty estimation in deep stereo matching.
We experimentally assess the effectiveness of our approach and observe significant improvements in both disparity and uncertainty prediction on large datasets.
arXiv Detail & Related papers (2023-03-31T21:58:19Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z)
- Aleatoric uncertainty for Errors-in-Variables models in deep regression [0.48733623015338234]
We show how the concept of Errors-in-Variables can be used in Bayesian deep regression.
We discuss the approach along various simulated and real examples.
arXiv Detail & Related papers (2021-05-19T12:37:02Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)