Balancing Simulation-based Inference for Conservative Posteriors
- URL: http://arxiv.org/abs/2304.10978v1
- Date: Fri, 21 Apr 2023 14:26:16 GMT
- Title: Balancing Simulation-based Inference for Conservative Posteriors
- Authors: Arnaud Delaunoy, Benjamin Kurt Miller, Patrick Forré, Christoph Weniger, Gilles Louppe
- Abstract summary: We introduce a balanced version of both neural posterior estimation and contrastive neural ratio estimation.
We show that the balanced versions tend to produce conservative posterior approximations on a wide variety of benchmarks.
- Score: 5.06518742691077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conservative inference is a major concern in simulation-based inference. It
has been shown that commonly used algorithms can produce overconfident
posterior approximations. Balancing has empirically proven to be an effective
way to mitigate this issue. However, its application remains limited to neural
ratio estimation. In this work, we extend balancing to any algorithm that
provides a posterior density. In particular, we introduce a balanced version of
both neural posterior estimation and contrastive neural ratio estimation. We
show empirically that the balanced versions tend to produce conservative
posterior approximations on a wide variety of benchmarks. In addition, we
provide an alternative interpretation of the balancing condition in terms of
the $\chi^2$ divergence.
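As background, the balancing condition from the earlier BNRE work (listed among the related papers below) is commonly written in terms of a classifier $\hat{d}(\theta, x) = \sigma(\log \hat{r}(\theta, x))$, where $\hat{r}$ is the estimated likelihood-to-evidence ratio; the restatement below paraphrases that line of work rather than quoting this abstract:
$$\mathbb{E}_{p(\theta, x)}\big[\hat{d}(\theta, x)\big] + \mathbb{E}_{p(\theta)p(x)}\big[\hat{d}(\theta, x)\big] = 1.$$
The Bayes-optimal classifier $d^{*}(\theta, x) = p(\theta, x) / \big(p(\theta, x) + p(\theta)p(x)\big)$ satisfies the condition exactly, since integrating it against $p(\theta, x) + p(\theta)p(x)$ recovers the unit mass of the joint.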
Related papers
- Epistemic Uncertainty and Observation Noise with the Neural Tangent Kernel [12.464924018243988]
Recent work has shown that training wide neural networks with gradient descent is formally equivalent to computing the mean of the posterior distribution in a Gaussian Process.
We show how to deal with non-zero aleatoric noise and derive an estimator for the posterior covariance.
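As a rough illustration of the quantities involved, the sketch below implements textbook Gaussian-process regression with an explicit aleatoric-noise term; the generic RBF kernel stands in for the NTK, and none of this is the paper's actual estimator:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(X, y, X_star, noise_var=0.1):
    # Standard GP regression. Aleatoric (observation) noise enters only
    # through the noise_var * I term added to the training-data kernel.
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    mean = K_s.T @ np.linalg.solve(K, y)
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)  # posterior covariance
    return mean, cov
```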
arXiv Detail & Related papers (2024-09-06T00:34:44Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
The approach is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
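One way to picture such a differentiable calibration term is as a relaxed coverage penalty, sketched below in PyTorch; the sigmoid relaxation, the HPD-style thresholding, and all names are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def soft_coverage_penalty(log_q_true, log_q_samples, alpha=0.05, temp=0.1):
    # log_q_true: (batch,) log-density of the true parameter under the
    # approximate posterior; log_q_samples: (batch, S) log-densities of
    # posterior samples, used to estimate the (1 - alpha) HPD threshold.
    threshold = torch.quantile(log_q_samples, alpha, dim=1)
    # Replace the hard indicator 1[log_q_true >= threshold] with a
    # sigmoid so the coverage estimate admits gradients.
    soft_inside = torch.sigmoid((log_q_true - threshold) / temp)
    coverage = soft_inside.mean()
    # Penalize deviation of estimated coverage from the nominal level.
    return (coverage - (1.0 - alpha)) ** 2
```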
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Convergence analysis of equilibrium methods for inverse problems [0.0]
We provide stability and convergence results for the class of equilibrium methods.
We derive convergence rates and stability estimates in the symmetric Bregman distance.
We show that the convergence analysis leads to the design of a new type of loss function.
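For reference, the symmetric Bregman distance of a convex functional $J$ is a standard object (stated here as background, with subgradients $p \in \partial J(u)$ and $q \in \partial J(v)$):
$$D_J^{\mathrm{sym}}(u, v) = \langle p - q,\, u - v \rangle.$$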
arXiv Detail & Related papers (2023-06-02T10:22:33Z)
- Robust computation of optimal transport by $\beta$-potential regularization [79.24513412588745]
Optimal transport (OT) has become a widely used tool in the machine learning field to measure the discrepancy between probability distributions.
We propose regularizing OT with the $\beta$-potential term associated with the so-called $\beta$-divergence.
We experimentally demonstrate that the transport matrix computed with our algorithm helps estimate a probability distribution robustly even in the presence of outliers.
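For reference, one common parameterization of the $\beta$-divergence (the density-power family; standard background, not quoted from the paper) is
$$D_\beta(p \,\|\, q) = \frac{1}{\beta(\beta - 1)} \int \Big( p(x)^{\beta} + (\beta - 1)\, q(x)^{\beta} - \beta\, p(x)\, q(x)^{\beta - 1} \Big)\, dx, \qquad \beta \notin \{0, 1\},$$
which recovers the KL divergence in the limit $\beta \to 1$.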
arXiv Detail & Related papers (2022-12-26T18:37:28Z)
- Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation [9.45752477068207]
Current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking false inferences.
We introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative.
We show that BNRE produces conservative posterior surrogates on all tested benchmarks and simulation budgets.
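A minimal sketch of how such a balancing penalty attaches to the standard NRE classification loss is shown below in PyTorch; the classifier interface, the batch-shuffle construction of marginal pairs, and the penalty weight are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def bnre_style_loss(classifier, theta, x, lam=100.0):
    # classifier(theta, x) is assumed to return the logit log r(theta, x).
    # Joint pairs come from the simulator; marginal pairs are formed by
    # shuffling theta within the batch.
    logits_joint = classifier(theta, x)
    logits_marg = classifier(theta[torch.randperm(len(theta))], x)
    # Standard NRE objective: classify joint vs. marginal pairs.
    bce = F.binary_cross_entropy_with_logits(
        logits_joint, torch.ones_like(logits_joint)
    ) + F.binary_cross_entropy_with_logits(
        logits_marg, torch.zeros_like(logits_marg)
    )
    # Balancing penalty: batch estimates of E[d] under the joint and the
    # product of marginals should sum to one.
    d_joint = torch.sigmoid(logits_joint).mean()
    d_marg = torch.sigmoid(logits_marg).mean()
    return bce + lam * (d_joint + d_marg - 1.0) ** 2
```

The penalty vanishes exactly when the batch-averaged classifier outputs on joint and shuffled pairs sum to one, i.e. when the balancing condition stated under the abstract above holds empirically.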
arXiv Detail & Related papers (2022-08-29T14:13:55Z)
- On Convergence of Training Loss Without Reaching Stationary Points [62.41370821014218]
We show that neural network weight variables do not converge to stationary points, where the gradient of the loss function vanishes.
We propose a new perspective based on the ergodic theory of dynamical systems.
arXiv Detail & Related papers (2021-10-12T18:12:23Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
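A bare-bones version of norm-clipped stochastic gradient descent is sketched below; the constants are illustrative, and the paper's analysis rests on a more nuanced gradient-noise condition than this sketch reflects:

```python
import numpy as np

def clipped_sgd_step(w, grad, lr=0.01, clip=1.0):
    # Rescale the stochastic gradient so its norm never exceeds `clip`;
    # this bounds the influence of heavy-tailed gradient noise on any
    # single update.
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return w - lr * grad
```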
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Online nonparametric regression with Sobolev kernels [99.12817345416846]
We derive the regret upper bounds on the classes of Sobolev spaces $W_p^\beta(\mathcal{X})$, $p \geq 2$, $\beta > \frac{d}{p}$.
The upper bounds are supported by a minimax regret analysis, which reveals that in the cases $\beta > \frac{d}{2}$ or $p = \infty$ these rates are (essentially) optimal.
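For readers unfamiliar with the notation: for integer smoothness $\beta$, the Sobolev space $W_p^\beta(\mathcal{X})$ collects functions whose weak derivatives up to order $\beta$ lie in $L_p$, with norm (standard background, not from the paper)
$$\|f\|_{W_p^\beta(\mathcal{X})} = \Big( \sum_{|\alpha| \le \beta} \|D^\alpha f\|_{L_p(\mathcal{X})}^p \Big)^{1/p}, \qquad 1 \le p < \infty.$$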
arXiv Detail & Related papers (2021-02-06T15:05:14Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is to be "a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
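"A bit Bayesian" refers to treating only part of the network probabilistically; below is a minimal sketch of a last-layer Laplace approximation for binary classification in PyTorch, with an interface that is an illustrative assumption rather than the paper's code:

```python
import torch

@torch.no_grad()
def last_layer_laplace_predict(features, w_map, H, n_samples=100):
    # Only the final linear layer is Bayesian: its weights get a Gaussian
    # N(w_map, H^{-1}) posterior, where H approximates the loss Hessian
    # at the MAP estimate. Predictions average over weight samples.
    cov = torch.linalg.inv(H)
    weight_dist = torch.distributions.MultivariateNormal(w_map, cov)
    ws = weight_dist.sample((n_samples,))        # (n_samples, d)
    logits = features @ ws.T                     # (batch, n_samples)
    return torch.sigmoid(logits).mean(dim=1)     # predictive mean
```

Averaging the sigmoid over weight samples shrinks confidence away from the data, which is the overconfidence fix the summary describes.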
arXiv Detail & Related papers (2020-02-24T08:52:06Z)