Neural Empirical Bayes: Source Distribution Estimation and its
Applications to Simulation-Based Inference
- URL: http://arxiv.org/abs/2011.05836v2
- Date: Fri, 26 Feb 2021 22:26:52 GMT
- Title: Neural Empirical Bayes: Source Distribution Estimation and its
Applications to Simulation-Based Inference
- Authors: Maxime Vandegar, Michael Kagan, Antoine Wehenkel, Gilles Louppe
- Abstract summary: We show that a neural empirical Bayes approach recovers ground truth source distributions.
We also show the applicability of Neural Empirical Bayes on an inverse problem from collider physics.
- Score: 9.877509217895263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit empirical Bayes in the absence of a tractable likelihood function,
as is typical in scientific domains relying on computer simulations. We
investigate how the empirical Bayesian can make use of neural density
estimators first to use all noise-corrupted observations to estimate a prior or
source distribution over uncorrupted samples, and then to perform
single-observation posterior inference using the fitted source distribution. We
propose an approach based on the direct maximization of the log-marginal
likelihood of the observations, examining both biased and de-biased estimators,
and comparing to variational approaches. We find that, up to symmetries, a
neural empirical Bayes approach recovers ground truth source distributions.
With the learned source distribution in hand, we show the applicability to
likelihood-free inference and examine the quality of the resulting posterior
estimates. Finally, we demonstrate the applicability of Neural Empirical Bayes
on an inverse problem from collider physics.
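As a rough illustration of the two steps described in the abstract, here is a minimal sketch (not the paper's implementation): it fits a source distribution by directly maximizing a biased nested Monte Carlo estimate of the log-marginal likelihood, then reuses the fitted source as the prior for single-observation posterior inference. It assumes a known Gaussian corruption model and substitutes a diagonal Gaussian for the neural density estimator; all identifiers are illustrative.
```python
# Minimal sketch, assuming a closed-form corruption model p(x | theta) = N(x; theta, noise_std^2 I)
# and a reparameterized diagonal Gaussian source q_phi in place of a neural density estimator.
import math
import torch

dim, noise_std, K = 2, 0.5, 64            # parameter dim, known noise level, MC samples

# Learnable source distribution q_phi(theta); a normalizing flow would be used in practice.
mu = torch.zeros(dim, requires_grad=True)
log_sigma = torch.zeros(dim, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def log_marginal(x):
    """Biased nested Monte Carlo estimator: E[log-mean-exp_k p(x | theta_k)] <= log p(x)."""
    theta = mu + log_sigma.exp() * torch.randn(K, dim)          # reparameterized theta_k ~ q_phi
    log_lik = torch.distributions.Normal(theta, noise_std).log_prob(x).sum(-1)
    return torch.logsumexp(log_lik, dim=0) - math.log(K)        # Jensen-biased estimate of log p(x)

# Toy data: true source N(2, 0.3^2 I), observations corrupted by N(0, noise_std^2 I) noise.
theta_true = 2.0 + 0.3 * torch.randn(1000, dim)
x_obs = theta_true + noise_std * torch.randn_like(theta_true)

# Step 1: fit the source by directly maximizing the estimated log-marginal likelihood.
for step in range(500):
    batch = x_obs[torch.randint(len(x_obs), (128,))]
    loss = -torch.stack([log_marginal(x) for x in batch]).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: single-observation posterior inference with the fitted source as the prior,
# here via self-normalized importance sampling: p(theta | x) proportional to p(x | theta) q_phi(theta).
with torch.no_grad():
    theta = mu + log_sigma.exp() * torch.randn(5000, dim)
    log_w = torch.distributions.Normal(theta, noise_std).log_prob(x_obs[0]).sum(-1)
    w = torch.softmax(log_w, dim=0)
    posterior_mean = (w[:, None] * theta).sum(0)
```
Replacing the diagonal Gaussian with a normalizing flow and the closed-form likelihood with simulator-based estimates recovers the likelihood-free setting the paper actually targets.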
Related papers
- In-Context Parametric Inference: Point or Distribution Estimators? [66.22308335324239]
Our experiments indicate that amortized point estimators generally outperform posterior inference, though the latter remain competitive in some low-dimensional problems.
arXiv Detail & Related papers (2025-02-17T10:00:24Z)
- Generative Modeling with Bayesian Sample Inference [50.07758840675341]
We derive a novel generative model from the simple act of Gaussian posterior inference.
Treating the generated sample as an unknown variable to infer lets us formulate the sampling process in the language of Bayesian probability.
Our model uses a sequence of prediction and posterior update steps to narrow down the unknown sample from a broad initial belief (a generic sketch of such a posterior update step appears after this list).
arXiv Detail & Related papers (2025-02-11T14:27:10Z)
- Diffusion Models for Inverse Problems in the Exponential Family [45.560812800359685]
We extend diffusion models to handle inverse problems where the observations follow a distribution from the exponential family.
We introduce the evidence trick, a method that provides a tractable approximation to the likelihood score.
We demonstrate the real-world impact of our methodology by showing that it performs competitively with the current state-of-the-art in predicting malaria prevalence estimates in Sub-Saharan Africa.
arXiv Detail & Related papers (2025-02-09T18:56:57Z)
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [6.647819824559201]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results lead to the convergence rate of a sieve maximum likelihood estimator for estimating the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation [5.673617376471343]
We propose an approach which targets the maximum entropy distribution, i.e., prioritizes retaining as much uncertainty as possible.
Our method is purely sample-based, leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations (a sketch of this distance appears after this list).
To demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with thousands of single-neuron measurements.
arXiv Detail & Related papers (2024-02-12T17:13:02Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Convergence Rates of Empirical Bayes Posterior Distributions: A Variational Perspective [20.51199643121034]
We study the convergence rates of empirical Bayes posterior distributions for nonparametric and high-dimensional inference.
We show that the empirical Bayes posterior distribution induced by the maximum marginal likelihood estimator can be regarded as a variational approximation to a hierarchical Bayes posterior distribution.
arXiv Detail & Related papers (2020-09-08T19:35:27Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
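As a loose illustration of the "Generative Modeling with Bayesian Sample Inference" entry, the sketch below shows only a generic conjugate-Gaussian posterior update, assumed here for illustration rather than taken from that paper: a broad Gaussian belief over an unknown sample is narrowed by a sequence of noisy looks at it (in the generative setting those looks would come from a learned prediction step).
```python
import torch

# Generic illustration (not that paper's method): a broad Gaussian belief over an
# unknown sample z is narrowed by T noisy measurements y_t = z + eps_t via conjugate updates.
def gaussian_belief_refinement(z, T=10, meas_std=1.0):
    mu, var = torch.zeros_like(z), torch.full_like(z, 100.0)   # broad initial belief
    for _ in range(T):
        y = z + meas_std * torch.randn_like(z)                  # noisy look at the unknown sample
        post_var = 1.0 / (1.0 / var + 1.0 / meas_std**2)        # precision-weighted combination
        mu = post_var * (mu / var + y / meas_std**2)
        var = post_var
    return mu, var                                              # belief concentrates around z

mu, var = gaussian_belief_refinement(torch.tensor([1.5, -0.7]))
```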
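For the "Sourcerer" entry, the sketch below is the standard random-projection construction of a sliced-Wasserstein discrepancy between two equally sized sample sets (observed data versus simulator outputs); the projection count and the squared, W2-style form are illustrative choices, not details from that paper.
```python
import torch

def sliced_wasserstein2(x, y, n_proj=100):
    """Average squared 1D Wasserstein distance over random projections.
    x, y: (n, d) sample sets of equal size n (e.g. observed data vs. simulations)."""
    proj = torch.randn(x.shape[1], n_proj)
    proj = proj / proj.norm(dim=0, keepdim=True)    # unit-norm projection directions
    x1d = torch.sort(x @ proj, dim=0).values        # sorted 1D projections = empirical quantiles
    y1d = torch.sort(y @ proj, dim=0).values
    return ((x1d - y1d) ** 2).mean()                # optimal 1D coupling matches sorted samples

# Toy usage: the discrepancy grows as the simulated samples drift from the data.
data = torch.randn(512, 3)
sims = torch.randn(512, 3) + 0.5
print(sliced_wasserstein2(data, sims))
```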