Sequential Likelihood-Free Inference with Implicit Surrogate Proposal
- URL: http://arxiv.org/abs/2010.07604v2
- Date: Mon, 15 Feb 2021 16:55:23 GMT
- Title: Sequential Likelihood-Free Inference with Implicit Surrogate Proposal
- Authors: Dongjun Kim, Kyungwoo Song, YoonYeong Kim, Yongjin Shin, Wanmo Kang,
Il-Chul Moon
- Abstract summary: This paper introduces the Implicit Surrogate Proposal (ISP) to generate a cumulative dataset with improved sample efficiency.
ISP constructs the cumulative dataset in the most diverse way by drawing i.i.d. samples in a feed-forward fashion.
We demonstrate that ISP outperforms the baseline inference algorithms on simulations with multi-modal posteriors.
- Score: 24.20924279100816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian inference without access to the likelihood, or likelihood-free
inference, has been a key research topic in simulation studies, as it yields
more realistic generation results. Recent likelihood-free inference methods
update an approximate posterior sequentially, using a dataset of cumulative
simulation input-output pairs gathered over inference rounds. The dataset is
therefore gathered through iterative simulations with inputs sampled from a
proposal distribution by MCMC, which becomes the key to inference quality in
this sequential framework. This paper introduces a new proposal model, named
Implicit Surrogate Proposal (ISP), to generate a cumulative dataset with
improved sample efficiency. ISP constructs the cumulative dataset in the most
diverse way by drawing i.i.d. samples in a feed-forward fashion, so the
posterior inference does not suffer from the disadvantages of MCMC caused by
its non-i.i.d. nature, such as auto-correlation and slow mixing. We analyze the
convergence properties of ISP from both theoretical and empirical perspectives
to guarantee that ISP provides an asymptotically exact sampler. We demonstrate
that ISP outperforms the baseline inference algorithms on simulations with
multi-modal posteriors.
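The abstract's contrast between i.i.d. feed-forward draws and MCMC sampling can be illustrated on a toy bimodal target (a hypothetical example, not the paper's ISP model): independent draws have negligible autocorrelation by construction, while a random-walk Metropolis chain shows strong autocorrelation and struggles to mix between the two modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target: equal mixture of N(-3, 1) and N(+3, 1).
def sample_iid(n):
    """I.i.d. 'feed-forward' draws: every sample is independent,
    so there is no autocorrelation by construction."""
    modes = rng.choice([-3.0, 3.0], size=n)
    return modes + rng.normal(size=n)

def log_density(x):
    # Unnormalized log-density of the mixture (constants cancel in MH).
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

def sample_mcmc(n, step=0.5):
    """Random-walk Metropolis: successive samples are correlated, and
    small steps rarely cross between the two modes (slow mixing)."""
    chain = np.empty(n)
    x = -3.0
    for i in range(n):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_density(prop) - log_density(x):
            x = prop
        chain[i] = x
    return chain

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

iid = sample_iid(20_000)
mcmc = sample_mcmc(20_000)
print(f"lag-1 autocorrelation, i.i.d.: {lag1_autocorr(iid):+.3f}")
print(f"lag-1 autocorrelation, MCMC : {lag1_autocorr(mcmc):+.3f}")
```

The i.i.d. sampler's lag-1 autocorrelation hovers near zero, while the Metropolis chain's is close to one, which is exactly the pathology the abstract attributes to MCMC-based proposals.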
Related papers
- All-in-one simulation-based inference [19.41881319338419]
We present a new amortized inference method -- the Simformer -- which overcomes current limitations.
The Simformer outperforms current state-of-the-art amortized inference approaches on benchmark tasks.
It can be applied to models with function-valued parameters, it can handle inference scenarios with missing or unstructured data, and it can sample arbitrary conditionals of the joint distribution of parameters and data.
arXiv Detail & Related papers (2024-04-15T10:12:33Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
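The observation that Shapley estimators can be unified as a linear transformation of summed subset values can be made concrete with the exact Shapley formula on a toy cooperative game (an illustrative sketch, not the SimSHAP estimator itself): each value phi_i is a fixed weighted combination of subset values v(S).

```python
import math
from itertools import combinations

# Exact Shapley values for a small set-valued game v: 2^N -> R.
# Each phi_i is a fixed linear combination of the subset values v(S),
# which is the structural view the summary above refers to.
def shapley_values(n, v):
    """phi_i = sum over subsets S not containing i of
       |S|! * (n - |S| - 1)! / n! * (v(S ∪ {i}) - v(S))."""
    phi = [0.0] * n
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        for k in range(n):
            weight = (math.factorial(k) * math.factorial(n - k - 1)
                      / math.factorial(n))
            for S in combinations(rest, k):
                phi[i] += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
    return phi

# Toy game: the value of a coalition is the square of its size.
v = lambda S: float(len(S) ** 2)
phi = shapley_values(3, v)
print(phi)                       # symmetric players receive equal credit
print(sum(phi), v({0, 1, 2}))    # efficiency: values sum to v(N) - v(∅)
```

With three symmetric players each receives 3.0, and the values sum to v(N) = 9; fast estimators replace the exhaustive subset enumeration with random subset sums while keeping the same linear structure.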
arXiv Detail & Related papers (2023-11-02T06:09:24Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into a finite number of clusters under the orchestration of a server.
We propose a novel FedC algorithm with differential privacy guarantees, referred to as DP-Fed, in which partial client participation is also considered.
Various attributes of the proposed DP-Fed are obtained through theoretical analyses of privacy protection, especially for the case of non-identically and independently distributed (non-i.i.d.) data.
arXiv Detail & Related papers (2023-01-03T05:38:43Z)
- Variational methods for simulation-based inference [3.308743964406687]
Sequential Neural Variational Inference (SNVI) is an approach to perform Bayesian inference in models with intractable likelihoods.
SNVI combines likelihood-estimation with variational inference to achieve a scalable simulation-based inference approach.
arXiv Detail & Related papers (2022-03-08T16:06:37Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
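The likelihood-to-evidence ratio r(θ, x) = p(x | θ) / p(x) that such amortized estimators target can be sanity-checked in closed form on a toy Gaussian model (an illustrative example, not the MINIMALIST method): since r integrates to one against the evidence, E over x ~ p(x) of r(θ, x) equals 1 for any θ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model: theta ~ N(0, 1), x | theta ~ N(theta, 1),
# so the evidence is p(x) = N(0, 2). Amortized SBI methods learn
# r(theta, x) = p(x | theta) / p(x) with a classifier; here we use
# the closed form to verify E_{x ~ p(x)}[r(theta, x)] = 1.
def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def ratio(theta, x):
    return normal_pdf(x, theta, 1.0) / normal_pdf(x, 0.0, 2.0)

theta = 0.7
x = rng.normal(0.0, np.sqrt(2.0), size=500_000)  # draws from p(x)
print(f"E[r] ~ {ratio(theta, x).mean():.3f}")    # should be close to 1
```

This unit-mean property holds for any θ and is a common diagnostic for trained ratio estimators, since a learned classifier-based r can be checked the same way against held-out simulations.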
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Posterior-Aided Regularization for Likelihood-Free Inference [23.708122045184698]
Posterior-Aided Regularization (PAR) is applicable to learning the density estimator, regardless of the model structure.
We provide a unified estimation method of PAR to estimate both the reverse KL term and the mutual information term with a single neural network.
arXiv Detail & Related papers (2021-02-15T16:59:30Z)
- Accelerating Metropolis-Hastings with Lightweight Inference Compilation [1.2633299843878945]
Lightweight Inference Compilation (LIC) implements amortized inference within an open-universe probabilistic programming language.
LIC forgoes importance sampling of linear execution traces in favor of operating directly on Bayesian networks.
Experimental results show LIC can produce proposers with fewer parameters, greater robustness to nuisance random variables, and improved posterior sampling.
arXiv Detail & Related papers (2020-10-23T02:05:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.