Neural Empirical Bayes: Source Distribution Estimation and its
Applications to Simulation-Based Inference
- URL: http://arxiv.org/abs/2011.05836v2
- Date: Fri, 26 Feb 2021 22:26:52 GMT
- Title: Neural Empirical Bayes: Source Distribution Estimation and its
Applications to Simulation-Based Inference
- Authors: Maxime Vandegar, Michael Kagan, Antoine Wehenkel, Gilles Louppe
- Abstract summary: We show that a neural empirical Bayes approach recovers ground truth source distributions.
We also show the applicability of Neural Empirical Bayes on an inverse problem from collider physics.
- Score: 9.877509217895263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit empirical Bayes in the absence of a tractable likelihood function,
as is typical in scientific domains relying on computer simulations. We
investigate how the empirical Bayesian can make use of neural density
estimators first to use all noise-corrupted observations to estimate a prior or
source distribution over uncorrupted samples, and then to perform
single-observation posterior inference using the fitted source distribution. We
propose an approach based on the direct maximization of the log-marginal
likelihood of the observations, examining both biased and de-biased estimators,
and comparing to variational approaches. We find that, up to symmetries, a
neural empirical Bayes approach recovers ground truth source distributions.
With the learned source distribution in hand, we show its applicability to
likelihood-free inference and examine the quality of the resulting posterior
estimates. Finally, we demonstrate the applicability of Neural Empirical Bayes
on an inverse problem from collider physics.
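For intuition, the direct-maximization objective can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it assumes a toy Gaussian noise model x = theta + epsilon in place of a simulator, and a two-parameter Gaussian in place of a neural density estimator, and fits the source by maximizing a (biased) Monte Carlo estimate of the log-marginal likelihood.

```python
import math
import torch

torch.manual_seed(0)

# Toy setup: source theta ~ N(2, 0.5^2); observations x = theta + N(0, 1).
true_theta = 2.0 + 0.5 * torch.randn(1000)
x_obs = true_theta + torch.randn(1000)

# Source model q_phi(theta): a learnable Gaussian (a neural density
# estimator such as a normalizing flow would take its place in practice).
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def log_lik(x, theta):
    # log p(x | theta) for the unit-variance Gaussian noise model
    return -0.5 * (x - theta) ** 2 - 0.5 * math.log(2 * math.pi)

M = 64  # Monte Carlo samples of theta per step
for step in range(3000):
    opt.zero_grad()
    theta = mu + log_sigma.exp() * torch.randn(M, 1)   # reparameterized, (M, 1)
    log_p = log_lik(x_obs.unsqueeze(0), theta)         # (M, N) via broadcasting
    # Biased estimator of log p(x) = log E_{theta ~ q_phi}[p(x | theta)]
    log_marg = torch.logsumexp(log_p, dim=0) - math.log(M)
    (-log_marg.mean()).backward()
    opt.step()

print(mu.item(), log_sigma.exp().item())  # should approach roughly (2.0, 0.5)
```

The log-mean-exp over M samples is a downward-biased estimate of log p(x) by Jensen's inequality; this is the bias that motivates the de-biased variants examined in the paper.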
Related papers
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [6.647819824559201]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results establish the convergence rate of a sieve maximum likelihood estimator for the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation [5.673617376471343]
We propose an approach which targets the maximum entropy distribution, i.e., prioritizes retaining as much uncertainty as possible.
Our method is purely sample-based, leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations.
To demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with thousands of single-neuron measurements.
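To make the sample-based discrepancy concrete, here is a hedged sketch of a sliced Wasserstein-2 distance between two equally sized sample sets (a generic implementation, not taken from the Sourcerer code):

```python
import torch

def sliced_wasserstein(x, y, n_projections=50):
    """SW-2 distance between sample sets x, y of shape (n, d); both sets
    must contain the same number of samples for the sort-based 1D solver."""
    d = x.shape[1]
    proj = torch.randn(d, n_projections)
    proj = proj / proj.norm(dim=0, keepdim=True)   # directions on the unit sphere
    x_p, _ = torch.sort(x @ proj, dim=0)           # 1D projections, sorted
    y_p, _ = torch.sort(y @ proj, dim=0)
    # Squared W-2 in 1D is the mean squared gap between sorted samples
    return ((x_p - y_p) ** 2).mean().sqrt()
```

Because the distance is differentiable with respect to the simulated samples, it can be backpropagated through a reparameterized source model.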
arXiv Detail & Related papers (2024-02-12T17:13:02Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly in the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
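As an illustration of what such a relaxed calibration term could look like, here is a hypothetical sigmoid relaxation of a coverage/rank statistic; the paper's exact relaxation may differ:

```python
import torch

def calibration_penalty(log_q_true, log_q_samples, temperature=0.1):
    """log_q_true: (B,) log-density of the ground-truth parameter under
    q(theta | x); log_q_samples: (B, S) log-densities of posterior samples.
    For a calibrated posterior the rank of the true parameter among samples
    is uniform; a sigmoid replaces the hard indicator so that the statistic
    is differentiable."""
    soft_rank = torch.sigmoid(
        (log_q_samples - log_q_true.unsqueeze(1)) / temperature
    ).mean(dim=1)                                   # (B,) soft ranks in (0, 1)
    # Penalize deviation from the first two moments of Uniform(0, 1)
    return (soft_rank.mean() - 0.5) ** 2 + (soft_rank.var() - 1.0 / 12.0) ** 2
```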
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Adversarial robustness of amortized Bayesian inference [3.308743964406687]
The idea of amortized Bayesian inference is to initially invest computational cost in training an inference network on simulated data, which can subsequently be used to perform rapid inference for any observation.
We show that almost unrecognizable, targeted perturbations of the observations can lead to drastic changes in the predicted posterior and highly unrealistic posterior predictive samples.
We propose a computationally efficient regularization scheme based on penalizing the Fisher information of the conditional density estimator.
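A minimal sketch of such a penalty, under the assumption that the regularizer targets the trace of the Fisher information of q(theta | x) with respect to the observation x (interface names here are hypothetical):

```python
import torch

def fisher_penalty(log_prob_fn, theta, x):
    """log_prob_fn(theta, x) -> (B,) log q(theta | x), with theta drawn from
    the estimator itself. The squared gradient of log q with respect to x is
    a single-sample estimate of the Fisher information trace."""
    x = x.detach().requires_grad_(True)
    log_q = log_prob_fn(theta, x)
    (grad_x,) = torch.autograd.grad(log_q.sum(), x, create_graph=True)
    return (grad_x ** 2).sum(dim=-1).mean()   # averaged over the batch
```

Adding a multiple of this term to the training loss discourages posteriors that change sharply under small perturbations of the observation.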
arXiv Detail & Related papers (2023-05-24T10:18:45Z)
- Posterior samples of source galaxies in strong gravitational lenses with score-based priors [107.52670032376555]
We use a score-based model to encode the prior for the inference of undistorted images of background galaxies.
We show how the balance between the likelihood and the prior meets our expectations in an experiment with out-of-distribution data.
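One generic way to use a learned score as a prior is inside an unadjusted Langevin sampler; the sketch below is illustrative only (the interfaces and the paper's actual sampler are assumptions):

```python
import torch

def langevin_step(z, log_likelihood_fn, score_prior_fn, step_size=1e-4):
    """One unadjusted Langevin step on the posterior over the image z:
    grad log p(z | x) = grad log p(x | z) + score_prior(z)."""
    z = z.detach().requires_grad_(True)
    (grad_lik,) = torch.autograd.grad(log_likelihood_fn(z).sum(), z)
    grad_post = grad_lik + score_prior_fn(z)    # score of the posterior
    noise = torch.randn_like(z)
    return (z + step_size * grad_post + (2 * step_size) ** 0.5 * noise).detach()
```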
arXiv Detail & Related papers (2022-11-07T19:00:42Z)
- Neural Importance Sampling for Rapid and Reliable Gravitational-Wave Inference [59.040209568168436]
We first generate a rapid proposal for the Bayesian posterior using neural networks, and then attach importance weights based on the underlying likelihood and prior.
This provides (1) a corrected posterior free from network inaccuracies, (2) a performance diagnostic (the sample efficiency) for assessing the proposal and identifying failure cases, and (3) an unbiased estimate of the Bayesian evidence.
We carry out a large study analyzing 42 binary black hole mergers observed by LIGO and Virgo with the SEOBNRv4PHM and IMRPhenomXPHM waveform models.
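The reweighting step itself is generic and compact; a hedged sketch follows (the gravitational-wave likelihood and prior here are stand-ins):

```python
import math
import torch

def importance_reweight(log_q, log_likelihood, log_prior):
    """log_q: (n,) proposal log-densities of samples theta_i ~ q(theta | x);
    log_likelihood, log_prior: (n,) evaluated at the same samples."""
    log_w = log_likelihood + log_prior - log_q          # unnormalized log-weights
    # The mean unnormalized weight is an unbiased estimate of the evidence
    log_evidence = torch.logsumexp(log_w, dim=0) - math.log(log_w.shape[0])
    w = torch.softmax(log_w, dim=0)                     # normalized weights
    efficiency = 1.0 / (log_w.shape[0] * (w ** 2).sum())  # sample efficiency in (0, 1]
    return w, efficiency, log_evidence
```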
arXiv Detail & Related papers (2022-10-11T18:00:02Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
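For intuition, here is a hedged sketch of the kind of closed-form moment propagation that replaces sampling, for a linear layer with multiplicative Gaussian activation noise and elementwise-independent inputs (the paper's exact parameterization differs):

```python
import torch

def noisy_linear_moments(W, m, v, alpha):
    """Propagate mean m and variance v, both (B, d_in), through
    y = W (x * eps) with eps ~ N(1, alpha), without drawing samples."""
    # Var[x_i * eps_i] = (v_i + m_i^2) * (1 + alpha) - m_i^2
    var_xe = (v + m ** 2) * (1 + alpha) - m ** 2
    mean_y = m @ W.t()              # E[eps] = 1, so the mean passes through W
    var_y = var_xe @ (W ** 2).t()   # independence: variances add with W^2
    return mean_y, var_y
```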
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Convergence Rates of Empirical Bayes Posterior Distributions: A Variational Perspective [20.51199643121034]
We study the convergence rates of empirical Bayes posterior distributions for nonparametric and high-dimensional inference.
We show that the empirical Bayes posterior distribution induced by the maximum marginal likelihood estimator can be regarded as a variational approximation to a hierarchical Bayes posterior distribution.
arXiv Detail & Related papers (2020-09-08T19:35:27Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
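In its simplest form, the marginalization amounts to averaging predictive distributions over independently trained networks; a minimal sketch follows (the basin-of-attraction variant additionally marginalizes within each basin, e.g. with SWAG-style Gaussians):

```python
import torch

def ensemble_predictive(models, x):
    """Average the class-probability outputs of independently trained
    networks: a simple approximation to Bayesian model averaging."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)   # (batch, num_classes)
```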