Low-Budget Simulation-Based Inference with Bayesian Neural Networks
- URL: http://arxiv.org/abs/2408.15136v1
- Date: Tue, 27 Aug 2024 15:19:07 GMT
- Title: Low-Budget Simulation-Based Inference with Bayesian Neural Networks
- Authors: Arnaud Delaunoy, Maxence de la Brassinne Bonardeaux, Siddharth Mishra-Sharma, Gilles Louppe
- Abstract summary: We show that Bayesian neural networks produce informative and well-calibrated posterior estimates with only a few hundred simulations.
This opens up the possibility of performing reliable simulation-based inference using very expensive simulators.
- Score: 6.076337482187888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation-based inference methods have been shown to be inaccurate in the data-poor regime, when training simulations are limited or expensive. Under these circumstances, the inference network is particularly prone to overfitting, and using it without accounting for the computational uncertainty arising from the lack of identifiability of the network weights can lead to unreliable results. To address this issue, we propose using Bayesian neural networks in low-budget simulation-based inference, thereby explicitly accounting for the computational uncertainty of the posterior approximation. We design a family of Bayesian neural network priors that are tailored for inference and show that they lead to well-calibrated posteriors on tested benchmarks, even when as few as $O(10)$ simulations are available. This opens up the possibility of performing reliable simulation-based inference using very expensive simulators, as we demonstrate on a problem from the field of cosmology where single simulations are computationally expensive. We show that Bayesian neural networks produce informative and well-calibrated posterior estimates with only a few hundred simulations.
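To make the setting concrete, here is a minimal sketch, not the authors' exact method: treat the weights of the inference network as uncertain and marginalize over them when forming the posterior. A small deep ensemble stands in for the BNN weight posterior; the toy simulator, architecture, budget, and ensemble size are illustrative assumptions.

```python
# Sketch (not the authors' exact method): approximate a Bayesian inference
# network with a deep ensemble, a common stand-in for a BNN weight
# posterior. Simulator, architecture, and budget are toy assumptions.
import torch
import torch.nn as nn

def simulator(theta):
    # Toy stand-in for an expensive simulator: x | theta ~ N(theta, 0.1^2).
    return theta + 0.1 * torch.randn_like(theta)

n_sims = 200                                   # low simulation budget
theta = torch.rand(n_sims, 1) * 2 - 1          # prior: U(-1, 1)
x = simulator(theta)

def make_net():
    # Amortized Gaussian posterior head: maps x to (mu, log_sigma).
    return nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))

ensemble = [make_net() for _ in range(10)]
for net in ensemble:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        mu, log_sigma = net(x).chunk(2, dim=1)
        # Negative log-likelihood of theta under N(mu, sigma^2).
        loss = (log_sigma + 0.5 * ((theta - mu) / log_sigma.exp()) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

x_obs = torch.tensor([[0.3]])
with torch.no_grad():
    # Mixing the members' posteriors marginalizes over weight uncertainty.
    samples = torch.cat([
        net(x_obs)[:, :1] + net(x_obs)[:, 1:].exp() * torch.randn(1000, 1)
        for net in ensemble
    ])
print(f"posterior mean {samples.mean():.3f}, std {samples.std():.3f}")
```

Where the few simulations underdetermine the weights, the members disagree and the mixed posterior widens, which reflects the computational uncertainty the abstract refers to.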
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- A variational neural Bayes framework for inference on intractable posterior distributions [1.0801976288811024]
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-04-16T20:40:15Z)
- Amortized Bayesian Decision Making for simulation-based models [11.375835331641548]
We address the question of how to perform Bayesian decision making on simulators.
Our method trains a neural network on simulated data and can predict the expected cost.
We then apply the method to infer optimal actions in a real-world simulator in the medical neurosciences.
arXiv Detail & Related papers (2023-12-05T11:29:54Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference (a rough sketch of the idea follows this entry).
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
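As a rough illustration, assuming a 1D Gaussian posterior head: calibration can be encouraged by penalizing deviations of the probability integral transform (PIT) values from uniformity, with a sigmoid relaxation making the penalty differentiable. This is a simple stand-in, not the paper's exact coverage-based objective; `tau` and the coverage levels are illustrative.

```python
# A simple differentiable calibration penalty (illustrative stand-in for
# the paper's coverage-based objective). For a calibrated 1D Gaussian
# posterior head, PIT values u = Phi((theta - mu) / sigma) are uniform.
import torch

def calibration_penalty(theta, mu, log_sigma,
                        levels=torch.linspace(0.1, 0.9, 9), tau=0.05):
    u = 0.5 * (1 + torch.erf((theta - mu) / (log_sigma.exp() * 2 ** 0.5)))
    # Soft empirical CDF of u at each level: sigmoid relaxation of 1[u <= level].
    soft_cdf = torch.sigmoid((levels[None, :] - u) / tau).mean(dim=0)
    # A calibrated estimator has CDF(level) == level; penalize the gap.
    return ((soft_cdf - levels) ** 2).mean()

# Demo: a correctly calibrated N(0, 1) head gives a near-zero penalty.
theta = torch.randn(512, 1)
mu, log_sigma = torch.zeros(512, 1), torch.zeros(512, 1)
print(calibration_penalty(theta, mu, log_sigma))
# In training: loss = nll + lam * calibration_penalty(theta, mu, log_sigma)
```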
- Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation [9.45752477068207]
Current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking false inferences.
We introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative.
We show that BNRE produces conservative posterior surrogates on all tested benchmarks and simulation budgets (the balancing idea is sketched after this entry).
arXiv Detail & Related papers (2022-08-29T14:13:55Z)
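A minimal sketch of the balancing regularizer, assuming the usual NRE setup with a joint/marginal classifier; the architecture and `lam` below are illustrative.

```python
# Sketch of the BNRE loss: standard NRE binary cross-entropy plus a
# penalty enforcing the balancing condition E_joint[d] + E_marginal[d] = 1,
# which biases the classifier toward conservative posteriors.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
lam = 100.0  # balancing strength (illustrative)

def bnre_loss(theta, x):
    # Joint pairs come from p(theta, x); shuffling theta gives marginal pairs.
    theta_marg = theta[torch.randperm(len(theta))]
    logit_joint = classifier(torch.cat([theta, x], dim=1))
    logit_marg = classifier(torch.cat([theta_marg, x], dim=1))
    nre = bce(logit_joint, torch.ones_like(logit_joint)) + \
          bce(logit_marg, torch.zeros_like(logit_marg))
    d_joint, d_marg = torch.sigmoid(logit_joint), torch.sigmoid(logit_marg)
    balance = (d_joint.mean() + d_marg.mean() - 1.0) ** 2
    return nre + lam * balance

# Usage: loss = bnre_loss(theta_batch, x_batch); loss.backward()
```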
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Variational methods for simulation-based inference [3.308743964406687]
Sequential Neural Variational Inference (SNVI) is an approach to perform Bayesian inference in models with intractable likelihoods.
SNVI combines likelihood estimation with variational inference to achieve a scalable simulation-based inference approach (a minimal sketch of the variational step follows this entry).
arXiv Detail & Related papers (2022-03-08T16:06:37Z)
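A minimal sketch of the variational step, assuming a surrogate log-likelihood network has already been trained on simulations: fit a reparameterized variational posterior by maximizing an ELBO. A diagonal Gaussian stands in for the normalizing flows used in the paper, and `surrogate_log_lik` is a placeholder.

```python
# Sketch of SNVI's variational step: maximize
# ELBO = E_q[log l(x_obs | theta) + log p(theta)] + H[q]
# over the parameters of q, using the reparameterization trick.
import math
import torch

def surrogate_log_lik(theta, x_obs):
    # Placeholder for a trained likelihood(-ratio) network.
    return -0.5 * ((x_obs - theta) / 0.1).pow(2).sum(dim=1)

def log_prior(theta):
    return torch.zeros(len(theta))  # flat prior over the region of interest

x_obs = torch.tensor([[0.3]])
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)
for _ in range(1000):
    eps = torch.randn(256, 1)
    theta = mu + log_sigma.exp() * eps          # reparameterization trick
    entropy = (log_sigma + 0.5 * (1 + math.log(2 * math.pi))).sum()
    elbo = (surrogate_log_lik(theta, x_obs) + log_prior(theta)).mean() + entropy
    opt.zero_grad(); (-elbo).backward(); opt.step()

print(mu.item(), log_sigma.exp().item())  # variational posterior mean / std
```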
- Truncated Marginal Neural Ratio Estimation [5.438798591410838]
We present a neural simulator-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability.
Our approach is simulation-efficient by simultaneously estimating low-dimensional marginal posteriors instead of the joint posterior.
By estimating a locally amortized posterior, our algorithm enables efficient empirical tests of the robustness of the inference results.
arXiv Detail & Related papers (2021-07-02T18:00:03Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
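Roughly, the recipe can be sketched as a regularizer that pulls predictions on augmented, off-manifold inputs toward the label prior. The noise-based augmentation below is only an assumption for the example; the paper's procedure for locating overconfident regions is more involved.

```python
# Sketch: cross-entropy on real data plus a KL term that pulls predictions
# on augmented (off-manifold) inputs toward the label prior, raising their
# entropy. Model, augmentation, and lam are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 10))
label_prior = torch.full((10,), 0.1)  # uniform prior over 10 classes

def regularized_loss(x, y, lam=0.5):
    ce = F.cross_entropy(model(x), y)
    x_aug = torch.randn_like(x) * 3   # crude stand-in for prior-augmented data
    log_probs = F.log_softmax(model(x_aug), dim=1)
    kl = F.kl_div(log_probs, label_prior.expand_as(log_probs),
                  reduction='batchmean')
    return ce + lam * kl

x, y = torch.randn(32, 2), torch.randint(0, 10, (32,))
print(regularized_loss(x, y))
```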
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
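A minimal sketch of "being a bit Bayesian": place a Laplace approximation over the last layer only and use the probit approximation for the predictive, which shrinks confidence far from the data. The features, labels, and prior precision below are toy assumptions, not the paper's experimental setup.

```python
# Last-layer Laplace for binary classification: MAP fit, then a Gaussian
# over the last-layer weights with covariance equal to the inverse Hessian
# of the regularized loss, then the probit-approximate predictive.
import math
import torch

torch.manual_seed(0)
phi = torch.randn(200, 5)            # penultimate-layer features (toy)
y = (phi[:, 0] > 0).float()          # binary labels (toy)
tau = 1.0                            # prior precision on the weights

w = torch.zeros(5, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.1)
for _ in range(500):
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        phi @ w, y, reduction='sum') + 0.5 * tau * w.pow(2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    p = torch.sigmoid(phi @ w)
    H = phi.T @ (p * (1 - p)).diag() @ phi + tau * torch.eye(5)
    cov = torch.linalg.inv(H)        # Laplace posterior covariance

    phi_test = torch.randn(3, 5) * 5 # inputs far from the training data
    mean = phi_test @ w
    var = (phi_test @ cov * phi_test).sum(dim=1)
    p_map = torch.sigmoid(mean)                               # overconfident
    p_laplace = torch.sigmoid(mean / torch.sqrt(1 + math.pi / 8 * var))
    print(p_map, p_laplace)          # Laplace probabilities are tempered
```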
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.