Adversarial Bayesian Simulation
- URL: http://arxiv.org/abs/2208.12113v2
- Date: Thu, 20 Jul 2023 19:59:57 GMT
- Title: Adversarial Bayesian Simulation
- Authors: Yuexi Wang, Veronika Ročková
- Abstract summary: We bridge approximate Bayesian computation (ABC) with deep neural implicit samplers based on generative adversarial networks (GANs) and adversarial variational Bayes.
We develop a Bayesian GAN that directly targets the posterior by solving an adversarial optimization problem.
We show that the typical total variation distance between the true and approximate posteriors converges to zero for certain neural network generators and discriminators.
- Score: 0.9137554315375922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the absence of explicit or tractable likelihoods, Bayesians often resort
to approximate Bayesian computation (ABC) for inference. Our work bridges ABC
with deep neural implicit samplers based on generative adversarial networks
(GANs) and adversarial variational Bayes. Both ABC and GANs compare aspects of
observed and fake data to simulate from posteriors and likelihoods,
respectively. We develop a Bayesian GAN (B-GAN) sampler that directly targets
the posterior by solving an adversarial optimization problem. B-GAN is driven
by a deterministic mapping learned on the ABC reference by conditional GANs.
Once the mapping has been trained, iid posterior samples are obtained by
filtering noise at a negligible additional cost. We propose two post-processing
local refinements using (1) data-driven proposals with importance reweighting,
and (2) variational Bayes. We support our findings with frequentist-Bayesian
results, showing that the typical total variation distance between the true and
approximate posteriors converges to zero for certain neural network generators
and discriminators. Our findings on simulated data show highly competitive
performance relative to some of the most recent likelihood-free posterior
simulators.
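The abstract's core recipe (train a conditional generator on an ABC reference table, then draw iid posterior samples by filtering noise through the learned mapping) can be illustrated compactly. Below is a minimal, hypothetical PyTorch sketch of a conditional-GAN posterior sampler in the spirit of B-GAN; the toy Gaussian-mean simulator, network sizes, and hyperparameters are my own illustrative assumptions, not the authors' implementation, and the paper's post-processing refinements (importance reweighting, variational Bayes) are omitted.

```python
# Minimal sketch of a conditional-GAN posterior sampler in the spirit of B-GAN.
# Assumptions (not from the paper): a toy Gaussian-mean simulator, small MLPs,
# standard non-saturating GAN losses, illustrative hyperparameters.
import torch
import torch.nn as nn

torch.manual_seed(0)
theta_dim, x_dim, z_dim = 1, 5, 4

def prior(n):                       # theta ~ N(0, 1)
    return torch.randn(n, theta_dim)

def simulator(theta):               # x | theta: 5 iid N(theta, 1) draws
    return theta + torch.randn(theta.shape[0], x_dim)

# ABC reference table: joint draws (theta_i, x_i) from prior times likelihood.
n_ref = 20000
theta_ref = prior(n_ref)
x_ref = simulator(theta_ref)

G = nn.Sequential(nn.Linear(z_dim + x_dim, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, theta_dim))
D = nn.Sequential(nn.Linear(theta_dim + x_dim, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    idx = torch.randint(n_ref, (256,))
    th, x = theta_ref[idx], x_ref[idx]
    z = torch.randn(256, z_dim)
    th_fake = G(torch.cat([z, x], dim=1))

    # Discriminator: real pairs (theta, x) vs generated pairs (G(z, x), x).
    d_real = D(torch.cat([th, x], dim=1))
    d_fake = D(torch.cat([th_fake.detach(), x], dim=1))
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: push generated pairs toward the discriminator's "real" label.
    d_gen = D(torch.cat([G(torch.cat([z, x], dim=1)), x], dim=1))
    loss_g = bce(d_gen, torch.ones_like(d_gen))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Once the deterministic mapping is trained, iid posterior draws at the
# observed data cost only a forward pass ("filtering noise").
x_obs = simulator(torch.tensor([[1.0]])).repeat(10000, 1)
with torch.no_grad():
    posterior = G(torch.cat([torch.randn(10000, z_dim), x_obs], dim=1))
print(posterior.mean().item(), posterior.std().item())
# Analytic check for this toy model: posterior is N(5 * x_bar / 6, 1/6).
```

For this conjugate toy model the exact posterior is available in closed form, so the generated samples can be checked directly; in genuinely likelihood-free settings that check is unavailable, which is where the paper's frequentist-Bayesian guarantees come in.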
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- A new perspective on Bayesian Operational Modal Analysis [0.0]
In this article, a new perspective on Bayesian OMA is proposed: a Bayesian subspace identification (SSI) algorithm.
Two case studies are explored: the first is a benchmark study using data from a simulated multi-degree-of-freedom linear system.
It is observed that posterior distributions whose mean values coincide with the natural frequencies exhibit lower variance than those situated away from the natural frequencies.
arXiv Detail & Related papers (2024-08-16T11:11:56Z)
- Reducing the cost of posterior sampling in linear inverse problems via task-dependent score learning [5.340736751238338]
We show that the evaluation of the forward mapping can be entirely bypassed during posterior sample generation.
We prove that this observation generalizes to the framework of infinite-dimensional diffusion models introduced recently.
arXiv Detail & Related papers (2024-05-24T15:33:27Z)
- Hessian-Free Laplace in Bayesian Deep Learning [44.16006844888796]
The Hessian-free Laplace (HFL) approximation uses the curvature of both the log posterior and the network prediction to estimate the variance of the prediction.
We show that, under standard assumptions of LA in Bayesian deep learning, HFL targets the same variance as LA, and can be efficiently amortized in a pre-trained network.
arXiv Detail & Related papers (2024-03-15T20:47:39Z)
- Favour: FAst Variance Operator for Uncertainty Rating [0.034530027457862]
Bayesian Neural Networks (BNNs) have emerged as a crucial approach for interpreting ML predictions.
By sampling from the posterior distribution, data scientists may estimate the uncertainty of an inference.
Previous work proposed propagating the first and second moments of the posterior directly through the network.
This method is even slower than sampling, so the propagated variance needs to be approximated.
Our contribution is a more principled variance propagation framework; a minimal sketch of the underlying moment-propagation idea appears after this list.
arXiv Detail & Related papers (2023-11-21T22:53:20Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked 1st place.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- Learning to solve Bayesian inverse problems: An amortized variational inference approach using Gaussian and Flow guides [0.0]
We develop a methodology that enables real-time inference by learning the Bayesian inverse map, i.e., the map from data to posteriors.
Our approach provides the posterior distribution for a given observation at the cost of just a forward pass of the neural network.
arXiv Detail & Related papers (2023-05-31T16:25:07Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose to approximate the joint posterior over both the structure of a Bayesian network and the parameters of its conditional probability distributions.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- Sample-Efficient Optimisation with Probabilistic Transformer Surrogates [66.98962321504085]
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation.
We observe two drawbacks stemming from their training procedure and loss definition, hindering their direct deployment as proxies in black-box optimisation.
We introduce two components: 1) a BO-tailored training prior supporting non-uniformly distributed points, and 2) a novel approximate posterior regulariser trading-off accuracy and input sensitivity to filter favourable stationary points for improved predictive performance.
arXiv Detail & Related papers (2022-05-27T11:13:17Z)
- Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage the in-context learning abilities of large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
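As referenced in the Favour entry above, the moment-propagation idea shared by the Favour and sampling-free variational inference entries can be sketched briefly: the mean and variance of activations are pushed through the network analytically, so predictive uncertainty costs a single pass instead of many posterior samples. The two-layer toy network, factorized-Gaussian weight assumption, and all numbers below are illustrative assumptions of mine, not taken from either paper.

```python
# Minimal sketch of sampling-free moment propagation: propagate the mean and
# variance of activations through a network analytically (no posterior samples).
# Toy network and Gaussian assumptions are illustrative only.
import math
import torch

def linear_moments(mean, var, w_mean, w_var, bias):
    # Layer with factorized-Gaussian weights, independent of their inputs:
    # E[Wx + b] = E[W] E[x] + b; the variance picks up weight and input
    # uncertainty: Var = E[W]^2 Var[x] + Var[W] E[x]^2 + Var[W] Var[x].
    out_mean = mean @ w_mean.T + bias
    out_var = (var @ (w_mean ** 2).T
               + (mean ** 2) @ w_var.T
               + var @ w_var.T)
    return out_mean, out_var

def relu_moments(mean, var):
    # Closed-form first and second moments of ReLU(x) for x ~ N(mean, var).
    std = var.clamp_min(1e-12).sqrt()
    alpha = mean / std
    phi = torch.exp(-0.5 * alpha ** 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + torch.erf(alpha / math.sqrt(2)))
    out_mean = mean * Phi + std * phi
    out_sq = (mean ** 2 + var) * Phi + mean * std * phi
    return out_mean, (out_sq - out_mean ** 2).clamp_min(0.0)

# One hidden layer: certain input -> linear -> ReLU -> linear.
torch.manual_seed(0)
x = torch.randn(8, 4)
m, v = x, torch.zeros_like(x)                      # input has zero variance
m, v = linear_moments(m, v, torch.randn(16, 4) * 0.3,
                      torch.full((16, 4), 0.01), torch.zeros(16))
m, v = relu_moments(m, v)
m, v = linear_moments(m, v, torch.randn(1, 16) * 0.3,
                      torch.full((1, 16), 0.01), torch.zeros(1))
print("predictive mean:", m.squeeze())             # one pass, no sampling
print("predictive var :", v.squeeze())             # driven by weight variance
```

The ReLU step treats each pre-activation as an independent Gaussian, which is the usual approximation in this family of methods; Favour's contribution, per its abstract, is a more principled version of exactly this propagation.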
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.