On a Variational Approximation based Empirical Likelihood ABC Method
- URL: http://arxiv.org/abs/2011.07721v1
- Date: Thu, 12 Nov 2020 21:24:26 GMT
- Title: On a Variational Approximation based Empirical Likelihood ABC Method
- Authors: Sanjay Chaudhuri and Subhroshekhar Ghosh and David J. Nott and Kim Cuc Pham
- Abstract summary: We propose an easy-to-use empirical likelihood ABC method in this article.
We show that the target log-posterior can be approximated as a sum of an expected joint log-likelihood and the differential entropy of the data generating density.
- Score: 1.5293427903448025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many scientifically well-motivated statistical models in natural,
engineering, and environmental sciences are specified through a generative
process. However, in some cases, it may not be possible to write down the
likelihood for these models analytically. Approximate Bayesian computation
(ABC) methods allow Bayesian inference in such situations. The procedures are
nonetheless typically computationally intensive. Recently, computationally
attractive empirical likelihood-based ABC methods have been suggested in the
literature. All of these methods rely on the availability of several suitable
analytically tractable estimating equations, and this is sometimes problematic.
We propose an easy-to-use empirical likelihood ABC method in this article.
First, by using a variational approximation argument as a motivation, we show
that the target log-posterior can be approximated as a sum of an expected joint
log-likelihood and the differential entropy of the data generating density. The
expected log-likelihood is then estimated by an empirical likelihood where the
only inputs required are a choice of summary statistic, its observed value,
and the ability to simulate the chosen summary statistic for any parameter
value under the model. The differential entropy is estimated from the simulated
summaries using traditional methods. Posterior consistency is established for
the method, and we discuss the bounds for the required number of simulated
summaries in detail. The performance of the proposed method is explored in
various examples.
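The two ingredients the abstract names can be illustrated in a toy sketch: an empirical-likelihood evaluation of a scalar summary statistic at its observed value, and a differential-entropy estimate computed from the simulated summaries (here a 1-NN Kozachenko-Leonenko estimator, one of the "traditional methods" the abstract alludes to). This is an illustrative sketch only, not the authors' exact estimator; the function names and the bisection solver for the empirical-likelihood dual are our own choices.

```python
import math
import random

def knn_entropy_1d(x):
    """Kozachenko-Leonenko 1-NN differential-entropy estimate (1-D).

    H_hat = gamma + log(2(m-1)) + mean(log r_i), where r_i is the
    distance from x_i to its nearest neighbour.
    """
    xs = sorted(x)
    m = len(xs)
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    logs = []
    for i in range(m):
        if i == 0:
            r = xs[1] - xs[0]
        elif i == m - 1:
            r = xs[-1] - xs[-2]
        else:
            r = min(xs[i] - xs[i - 1], xs[i + 1] - xs[i])
        logs.append(math.log(max(r, 1e-12)))  # guard against ties
    return gamma + math.log(2 * (m - 1)) + sum(logs) / m

def log_el_ratio(sims, s_obs):
    """Scalar empirical-likelihood log-ratio at the observed summary.

    Maximises sum(log(m*w_i)) s.t. sum(w_i)=1, sum(w_i*(s_i - s_obs))=0.
    The dual gives w_i = 1/(m*(1 + lam*z_i)) with z_i = s_i - s_obs and
    lam solving g(lam) = sum(z_i/(1 + lam*z_i)) = 0, found by bisection
    since g is strictly decreasing on the feasible interval.
    """
    z = [s - s_obs for s in sims]
    if min(z) >= 0 or max(z) <= 0:
        return float("-inf")  # s_obs outside convex hull: EL is zero
    lo = -1.0 / max(z) + 1e-10
    hi = -1.0 / min(z) - 1e-10
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        g = sum(zi / (1 + lam * zi) for zi in z)
        if g > 0:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return -sum(math.log(1 + lam * zi) for zi in z)
```

In the spirit of the method, one would simulate summaries at each candidate parameter value and score it by combining the empirical-likelihood term with the entropy estimate; the log-ratio is nonpositive by construction and largest when the observed summary sits near the centre of the simulated ones.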
Related papers
- Asymptotically Optimal Change Detection for Unnormalized Pre- and Post-Change Distributions [65.38208224389027]
This paper addresses the problem of detecting changes when only unnormalized pre- and post-change distributions are accessible.
Our approach is based on the estimation of the Cumulative Sum statistic, which is known to produce optimal performance.
arXiv Detail & Related papers (2024-10-18T17:13:29Z) - Transformer-based Parameter Estimation in Statistics [0.0]
We propose a transformer-based approach to parameter estimation.
It does not even require knowing the probability density function, which is needed by numerical methods.
It is shown that our approach achieves similar or better accuracy, as measured by mean squared error.
arXiv Detail & Related papers (2024-02-28T04:30:41Z) - Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z) - Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results both in the finite-sample regime and in the asymptotic regime.
arXiv Detail & Related papers (2022-10-03T06:09:01Z) - A Statistical Decision-Theoretical Perspective on the Two-Stage Approach to Parameter Estimation [7.599399338954307]
The Two-Stage (TS) approach can be applied to obtain reliable parametric estimates.
We show how to apply the TS approach on models for independent and identically distributed samples.
arXiv Detail & Related papers (2022-03-31T18:19:47Z) - Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z) - New Algorithms And Fast Implementations To Approximate Stochastic Processes [0.0]
We present new algorithms and fast implementations to find efficient approximations for modelling processes.
The goal is always to find a finite model that represents the given knowledge about the real data process as accurately as possible.
arXiv Detail & Related papers (2020-12-01T06:14:16Z) - Marginal likelihood computation for model selection and hypothesis testing: an extensive review [66.37504201165159]
This article provides a comprehensive study of the state-of-the-art of the topic.
We highlight limitations, benefits, connections and differences among the different techniques.
Problems and possible solutions with the use of improper priors are also described.
arXiv Detail & Related papers (2020-05-17T18:31:58Z) - On Contrastive Learning for Likelihood-free Inference [20.49671736540948]
Likelihood-free methods perform parameter inference in simulator models where evaluating the likelihood is intractable.
One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples.
Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators.
arXiv Detail & Related papers (2020-02-10T13:14:01Z) - Unbiased and Efficient Log-Likelihood Estimation with Inverse Binomial Sampling [9.66840768820136]
Inverse binomial sampling (IBS) can estimate the log-likelihood of an entire data set efficiently and without bias.
IBS produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods.
arXiv Detail & Related papers (2020-01-12T19:51:35Z)
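The IBS idea above admits a very short sketch. For a simulator that reproduces an observed discrete response with unknown probability p, one simulates until the first match; if that takes K draws, then -sum_{k=1}^{K-1} 1/k is an unbiased estimator of log p. This is a minimal illustration of the estimator described in that abstract; the function name and the toy Bernoulli simulator are our own.

```python
import math
import random

def ibs_log_likelihood(simulate_hit):
    """One IBS draw of the log-likelihood for a single observation.

    simulate_hit() returns True with unknown probability p (a simulator
    run that matches the observed response). Simulates until the first
    hit; with K draws needed, returns -sum_{k=1}^{K-1} 1/k, which is an
    unbiased estimator of log(p).
    """
    k = 1
    while not simulate_hit():
        k += 1
    return -sum(1.0 / j for j in range(1, k))

# Toy check: a Bernoulli(p) "simulator"; averaging many IBS draws
# recovers log(p). For a data set, the per-observation estimates are
# simply summed.
```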
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.