Improving the Accuracy of Marginal Approximations in Likelihood-Free
Inference via Localisation
- URL: http://arxiv.org/abs/2207.06655v1
- Date: Thu, 14 Jul 2022 04:56:44 GMT
- Title: Improving the Accuracy of Marginal Approximations in Likelihood-Free
Inference via Localisation
- Authors: Christopher Drovandi, David J Nott, David T Frazier
- Abstract summary: A promising approach to high-dimensional likelihood-free inference involves estimating low-dimensional marginal posteriors.
We show that such low-dimensional approximations can be surprisingly poor in practice for seemingly intuitive summary statistic choices.
We suggest an alternative approach to marginal estimation which is easier to implement and automate.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Likelihood-free methods are an essential tool for performing inference for
implicit models which can be simulated from, but for which the corresponding
likelihood is intractable. However, common likelihood-free methods do not scale
well to a large number of model parameters. A promising approach to
high-dimensional likelihood-free inference involves estimating low-dimensional
marginal posteriors by conditioning only on summary statistics believed to be
informative for the low-dimensional component, and then combining the
low-dimensional approximations in some way. In this paper, we demonstrate that
such low-dimensional approximations can be surprisingly poor in practice for
seemingly intuitive summary statistic choices. We describe an idealized
low-dimensional summary statistic that is, in principle, suitable for marginal
estimation. However, a direct approximation of the idealized choice is
difficult in practice. We thus suggest an alternative approach to marginal
estimation which is easier to implement and automate. Given an initial choice
of low-dimensional summary statistic that might only be informative about a
marginal posterior location, the new method improves performance by first
crudely localising the posterior approximation using all the summary statistics
to ensure global identifiability, followed by a second step that homes in on an
accurate low-dimensional approximation using the low-dimensional summary
statistic. We show that the posterior this approach targets can be represented
as a logarithmic pool of posterior distributions based on the low-dimensional
and full summary statistics, respectively. The good performance of our method
is illustrated in several examples.
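To make the two-step localisation idea concrete, below is a minimal rejection-ABC sketch; it is an illustration in the spirit of the method, not the authors' implementation. The simulator `simulate`, the summary functions `summary_full` and `summary_low`, the uniform prior, the acceptance quantiles, and the Gaussian refit of the stage-one sample are all assumptions made for the example. Stage one conditions on all summary statistics to crudely localise the parameter; stage two proposes from that localised region and accepts on the low-dimensional summary alone, loosely mirroring the logarithmic-pool target described above.
```python
import numpy as np

# Minimal two-stage rejection-ABC sketch (illustrative assumptions throughout;
# not the implementation from the paper).
rng = np.random.default_rng(0)

def simulate(theta):
    """Toy simulator with parameters (mu, sigma); returns one synthetic data set."""
    mu, sigma = theta
    return rng.normal(mu, abs(sigma) + 1e-6, size=100)

def summary_full(x):
    """All summary statistics; used only for the crude localisation step."""
    return np.array([x.mean(), x.std(), np.median(x), np.percentile(x, 90)])

def summary_low(x):
    """Low-dimensional summary assumed informative about the marginal for mu."""
    return np.array([x.mean()])

def prior_sample(n):
    """Draws from an assumed uniform prior over (mu, sigma)."""
    return np.column_stack([rng.uniform(-5.0, 5.0, n), rng.uniform(0.1, 3.0, n)])

def abc_reject(proposals, s_obs, summary_fn, quantile):
    """Keep the proposals whose simulated summaries fall closest to the observed ones."""
    dists = np.array([np.linalg.norm(summary_fn(simulate(t)) - s_obs) for t in proposals])
    return proposals[dists <= np.quantile(dists, quantile)]

# Observed data
x_obs = rng.normal(1.0, 2.0, size=100)

# Stage 1: crude localisation conditioning on ALL summaries (global identifiability)
stage1 = abc_reject(prior_sample(20_000), summary_full(x_obs), summary_full, quantile=0.05)

# Refit a (deliberately inflated) Gaussian to the localised sample as the stage-2 proposal
mean1, cov1 = stage1.mean(axis=0), 2.0 * np.cov(stage1.T)
proposals2 = rng.multivariate_normal(mean1, cov1, size=20_000)

# Stage 2: refine within the localised region using ONLY the low-dimensional summary
stage2 = abc_reject(proposals2, summary_low(x_obs), summary_low, quantile=0.02)

print("approximate marginal posterior mean of mu:", stage2[:, 0].mean())
```
In practice the crude localisation could equally be a pilot regression-adjusted ABC or neural posterior fit; the key design choice illustrated here is simply that the expensive, high-dimensional conditioning happens once and cheaply, and the accurate marginal fit uses only the low-dimensional summary afterwards.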
Related papers
- Probabilistic Iterative Hard Thresholding for Sparse Learning [2.5782973781085383]
We present an approach to solving expectation objective optimization problems with cardinality constraints.
We prove convergence of the underlying process, and demonstrate the performance on two Machine Learning problems.
arXiv Detail & Related papers (2024-09-02T18:14:45Z)
- Scalable and non-iterative graphical model estimation [3.187381965457262]
Iterative Proportional Fitting (IPF) and its variants are the default method for undirected graphical model estimation.
We propose a novel and fast non-iterative method for positive definite graphical model estimation in high dimensions.
arXiv Detail & Related papers (2024-08-21T15:46:00Z)
- On the design-dependent suboptimality of the Lasso [27.970033039287884]
We show that the Lasso estimator is provably minimax rate-suboptimal when the minimum singular value of the design matrix is small.
Our lower bound is strong enough to preclude the sparse statistical optimality of all forms of the Lasso.
arXiv Detail & Related papers (2024-02-01T07:01:54Z)
- A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence [108.28566246421742]
This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension.
In order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
arXiv Detail & Related papers (2022-02-23T16:22:28Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Barely Biased Learning for Gaussian Process Regression [19.772149500352945]
We suggest a method that adaptively selects the amount of computation to use when estimating the log marginal likelihood.
While simple in principle, our current implementation of the method is not competitive with existing approximations.
arXiv Detail & Related papers (2021-09-20T10:35:59Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
arXiv Detail & Related papers (2021-02-02T21:01:13Z)
- Maximum sampled conditional likelihood for informative subsampling [4.708378681950648]
Subsampling is a computationally effective approach to extract information from massive data sets when computing resources are limited.
We propose to use the maximum sampled conditional likelihood estimator (MSCLE) based on the sampled data.
arXiv Detail & Related papers (2020-11-11T16:01:17Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.