Constraining cosmological parameters from N-body simulations with
Variational Bayesian Neural Networks
- URL: http://arxiv.org/abs/2301.03991v1
- Date: Mon, 9 Jan 2023 16:07:48 GMT
- Title: Constraining cosmological parameters from N-body simulations with
Variational Bayesian Neural Networks
- Authors: Héctor J. Hortúa, Luz Ángela García and Leonardo Castañeda C
- Abstract summary: Multiplicative normalizing flows (MNFs) are a family of approximate posteriors for the parameters of BNNs.
We have compared MNFs with standard BNNs and the flipout estimator.
MNFs provide a more realistic predictive distribution, closer to the true posterior, mitigating the bias introduced by the variational approximation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Methods based on Deep Learning have recently been applied to astrophysical parameter recovery thanks to their ability to capture information from complex data. One of these methods is approximate Bayesian Neural Networks (BNNs), which have been shown to yield consistent posterior distributions over the parameter space, useful for uncertainty quantification. However, like any modern neural network, they tend to produce overly confident uncertainty estimates and can introduce bias when applied to data. In this work, we implement multiplicative normalizing flows (MNFs), a family of approximate posteriors for the parameters of BNNs, with the purpose of enhancing the flexibility of the variational posterior distribution, to extract $\Omega_m$, $h$, and $\sigma_8$ from the QUIJOTE simulations. We compare this method with standard BNNs and the flipout estimator. We find that MNFs combined with BNNs outperform the other models, achieving predictive performance almost one order of magnitude better than standard BNNs, extracting $\sigma_8$ with high accuracy ($r^2=0.99$), and producing precise uncertainty estimates. The latter implies that MNFs provide a more realistic predictive distribution, closer to the true posterior, mitigating the bias introduced by the variational approximation and allowing one to work with well-calibrated networks.
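For concreteness, the sketch below (not the authors' code) shows the kind of variational BNN the abstract refers to, built with TensorFlow Probability's flipout layers, i.e. the baseline estimator MNFs are compared against; MNF layers are not part of standard TFP, and the architecture, layer sizes, and input shape are illustrative assumptions.

```python
# Minimal sketch of a flipout-based variational BNN regressor for
# (Omega_m, h, sigma_8); all sizes and the input shape are assumptions.
import tensorflow as tf
import tensorflow_probability as tfp

tfpl = tfp.layers

def build_flipout_bnn(input_shape=(64, 64, 1), n_params=3):
    inputs = tf.keras.Input(shape=input_shape)
    x = tfpl.Convolution2DFlipout(16, kernel_size=3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tfpl.Convolution2DFlipout(32, kernel_size=3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tfpl.DenseFlipout(64, activation="relu")(x)
    outputs = tfpl.DenseFlipout(n_params)(x)  # point estimates of the 3 parameters
    return tf.keras.Model(inputs, outputs)

model = build_flipout_bnn()
# The flipout layers register their KL terms in model.losses, so a plain MSE
# objective trained with model.fit already optimizes an (unweighted) ELBO;
# in practice the KL term is rescaled by the training-set size.
model.compile(optimizer="adam", loss="mse")
```

At test time, repeated stochastic forward passes through such a network give the epistemic spread of the predicted parameters; MNFs replace the mean-field Gaussian over the weights with a flow-conditioned posterior to make that distribution more flexible.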
Related papers
- Inference in Partially Linear Models under Dependent Data with Deep Neural Networks [0.0]
I consider inference in a partially linear regression model under stationary $\beta$-mixing data after first-stage deep neural network (DNN) estimation.
By avoiding sample splitting, I address one of the key challenges in applying machine learning techniques to econometric models with dependent data.
arXiv Detail & Related papers (2024-10-29T22:29:31Z)
- Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is BNNs based on Markov Chain Monte Carlo approximation of the posterior distribution of network parameters.
arXiv Detail & Related papers (2023-11-04T19:40:16Z)
- Single-shot Bayesian approximation for neural networks [0.0]
Deep neural networks (NNs) are known for their high prediction performance.
However, NNs are prone to yield unreliable predictions when encountering completely new situations, without indicating their uncertainty.
We present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs.
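As a reference point, here is a hedged sketch of the standard multi-pass MC dropout procedure that the single-shot approximation above is designed to avoid; the model and the number of passes T are placeholders.

```python
# Standard MC dropout baseline: keep dropout stochastic at test time and
# aggregate T forward passes into a predictive mean and spread.
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model: tf.keras.Model, x, T: int = 100):
    # training=True keeps the dropout masks active during inference.
    samples = np.stack([model(x, training=True).numpy() for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)
```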
arXiv Detail & Related papers (2023-08-24T13:40:36Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Sparsifying Bayesian neural networks with latent binary variables and normalizing flows [10.865434331546126]
We will consider two extensions to the latent binary Bayesian neural networks (LBBNN) method.
Firstly, by using the local reparametrization trick (LRT) to sample the hidden units directly, we get a more computationally efficient algorithm.
More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean field Gaussian.
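As an illustration of the local reparametrization trick mentioned above (the latent binary variables and the flows are omitted), a mean-field Gaussian dense layer can sample its pre-activations directly instead of sampling a full weight matrix; names and shapes here are illustrative.

```python
# Local reparametrization trick (LRT) for a mean-field Gaussian dense layer:
# sample the pre-activation b ~ N(x @ M, x^2 @ S) instead of W ~ N(M, S).
import numpy as np

def lrt_dense(x, w_mean, w_logvar, rng=np.random.default_rng(0)):
    # x: (batch, d_in); w_mean, w_logvar: (d_in, d_out)
    act_mean = x @ w_mean                      # E[x W] under q(W)
    act_var = (x ** 2) @ np.exp(w_logvar)      # Var[x W] under mean-field q(W)
    eps = rng.standard_normal(act_mean.shape)
    return act_mean + np.sqrt(act_var) * eps   # one sample of the pre-activation
```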
arXiv Detail & Related papers (2023-05-05T09:40:28Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit [47.324627920761685]
We use recent theoretical advances that characterize the function-space prior of an ensemble of infinitely-wide NNs as a Gaussian process.
This gives us a better understanding of the implicit prior NNs place on function space.
We also examine the calibration of previous approaches to classification with the NNGP.
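For reference, that Gaussian-process prior has a closed form for fully-connected ReLU networks; the sketch below builds the NNGP kernel via the arc-cosine recursion, with illustrative weight and bias variances.

```python
# NNGP kernel of an infinitely wide fully-connected ReLU network
# (arc-cosine recursion); sigma_w2 and sigma_b2 are illustrative choices.
import numpy as np

def relu_nngp_layer(K, sigma_w2=2.0, sigma_b2=0.0):
    diag = np.sqrt(np.diag(K))
    norm = np.outer(diag, diag)                      # sqrt(K(x,x) K(x',x'))
    cos_t = np.clip(K / norm, -1.0, 1.0)
    theta = np.arccos(cos_t)
    J = np.sin(theta) + (np.pi - theta) * cos_t
    return sigma_b2 + (sigma_w2 / (2.0 * np.pi)) * norm * J

def nngp_kernel(X, depth=3, sigma_w2=2.0, sigma_b2=0.0):
    K = sigma_b2 + sigma_w2 * (X @ X.T) / X.shape[1]  # input-layer kernel
    for _ in range(depth):
        K = relu_nngp_layer(K, sigma_w2, sigma_b2)
    return K
```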
arXiv Detail & Related papers (2020-10-14T18:41:54Z)
- An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence [65.24701908364383]
A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data.
But far away from the training data, ReLU Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.
We show that the proposed extension can be applied post-hoc to any pre-trained ReLU BNN at a low cost.
arXiv Detail & Related papers (2020-10-06T13:32:18Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Exact posterior distributions of wide Bayesian neural networks [51.20413322972014]
We show that the exact BNN posterior converges (weakly) to the one induced by the GP limit of the prior.
For empirical validation, we show how to generate exact samples from a finite BNN on a small dataset via rejection sampling.
arXiv Detail & Related papers (2020-06-18T13:57:04Z)