Towards Reliable Simulation-Based Inference with Balanced Neural Ratio
Estimation
- URL: http://arxiv.org/abs/2208.13624v1
- Date: Mon, 29 Aug 2022 14:13:55 GMT
- Title: Towards Reliable Simulation-Based Inference with Balanced Neural Ratio
Estimation
- Authors: Arnaud Delaunoy, Joeri Hermans, François Rozet, Antoine Wehenkel,
Gilles Louppe
- Abstract summary: Current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking false inferences.
We introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative.
We show that BNRE produces conservative posterior surrogates on all tested benchmarks and simulation budgets.
- Score: 9.45752477068207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern approaches for simulation-based inference rely upon deep learning
surrogates to enable approximate inference with computer simulators. In
practice, the estimated posteriors' computational faithfulness is, however,
rarely guaranteed. For example, Hermans et al. (2021) show that current
simulation-based inference algorithms can produce posteriors that are
overconfident, hence risking false inferences. In this work, we introduce
Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm
designed to produce posterior approximations that tend to be more conservative,
hence improving their reliability, while sharing the same Bayes optimal
solution. We achieve this by enforcing a balancing condition that increases the
quantified uncertainty in small simulation budget regimes while still
converging to the exact posterior as the budget increases. We provide
theoretical arguments showing that BNRE tends to produce posterior surrogates
that are more conservative than NRE's. We evaluate BNRE on a wide variety of
tasks and show that it produces conservative posterior surrogates on all tested
benchmarks and simulation budgets. Finally, we emphasize that BNRE is
straightforward to implement over NRE and does not introduce any computational
overhead.
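The balancing condition described above constrains the joint-vs-marginal classifier d(θ, x): its expected output over joint samples p(θ, x) plus its expected output over marginal samples p(θ)p(x) should equal 1. A minimal sketch of the resulting training loss is below, using plain Python; the specific function names and the penalty weight `lam` are illustrative (the paper reports using λ = 100), and in practice the logits would come from a neural classifier.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean(xs):
    return sum(xs) / len(xs)

def bnre_loss(logits_joint, logits_marginal, lam=100.0):
    """Binary cross-entropy for the joint-vs-marginal classifier d(theta, x),
    plus a penalty enforcing the balancing condition
    E_{p(theta,x)}[d] + E_{p(theta)p(x)}[d] = 1."""
    d_joint = [sigmoid(z) for z in logits_joint]    # classifier on pairs from p(theta, x)
    d_marg = [sigmoid(z) for z in logits_marginal]  # classifier on pairs from p(theta)p(x)
    # Standard NRE objective: classify joint pairs as 1, marginal pairs as 0.
    bce = -(mean([math.log(d) for d in d_joint])
            + mean([math.log(1.0 - d) for d in d_marg])) / 2.0
    # Balancing penalty: zero when the condition holds, e.g. for the exact ratio.
    balance = (mean(d_joint) + mean(d_marg) - 1.0) ** 2
    return bce + lam * balance
```

An uninformative classifier (all logits 0, so d = 0.5 everywhere) satisfies the balancing condition exactly, while an overconfident one (d > 0.5 on both joint and marginal pairs) is penalized; this is the mechanism by which the condition pushes the surrogate toward more conservative posteriors at small simulation budgets.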
Related papers
- Low-Budget Simulation-Based Inference with Bayesian Neural Networks [6.076337482187888]
We show that Bayesian neural networks produce informative and well-calibrated posterior estimates with only a few hundred simulations.
This opens up the possibility of performing reliable simulation-based inference using very expensive simulators.
arXiv Detail & Related papers (2024-08-27T15:19:07Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Adversarial robustness of amortized Bayesian inference [3.308743964406687]
Amortized Bayesian inference initially invests computational cost in training an inference network on simulated data.
We show that almost unrecognizable, targeted perturbations of the observations can lead to drastic changes in the predicted posterior and highly unrealistic posterior predictive samples.
We propose a computationally efficient regularization scheme based on penalizing the Fisher information of the conditional density estimator.
arXiv Detail & Related papers (2023-05-24T10:18:45Z)
- Balancing Simulation-based Inference for Conservative Posteriors [5.06518742691077]
We introduce a balanced version of both neural posterior estimation and contrastive neural ratio estimation.
We show that the balanced versions tend to produce conservative posterior approximations on a wide variety of benchmarks.
arXiv Detail & Related papers (2023-04-21T14:26:16Z)
- Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation [69.0695698566235]
We study reinforcement learning with linear function approximation and adversarially changing cost functions.
We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback.
arXiv Detail & Related papers (2023-01-30T17:26:39Z)
- Bayesian Recurrent Units and the Forward-Backward Algorithm [91.39701446828144]
Using Bayes's theorem, we derive a unit-wise recurrence as well as a backward recursion similar to the forward-backward algorithm.
The resulting Bayesian recurrent units can be integrated as recurrent neural networks within deep learning frameworks.
Experiments on speech recognition indicate that adding the derived units at the end of state-of-the-art recurrent architectures can improve the performance at a very low cost in terms of trainable parameters.
arXiv Detail & Related papers (2022-07-21T14:00:52Z)
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- Variational methods for simulation-based inference [3.308743964406687]
Sequential Neural Variational Inference (SNVI) is an approach to perform Bayesian inference in models with intractable likelihoods.
SNVI combines likelihood-estimation with variational inference to achieve a scalable simulation-based inference approach.
arXiv Detail & Related papers (2022-03-08T16:06:37Z)
- Truncated Marginal Neural Ratio Estimation [5.438798591410838]
We present a neural simulator-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability.
Our approach is simulation efficient by simultaneously estimating low-dimensional marginal posteriors instead of the joint posterior.
By estimating a locally amortized posterior our algorithm enables efficient empirical tests of the robustness of the inference results.
arXiv Detail & Related papers (2021-07-02T18:00:03Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.