Multi-fidelity Monte Carlo: a pseudo-marginal approach
- URL: http://arxiv.org/abs/2210.01534v1
- Date: Tue, 4 Oct 2022 11:27:40 GMT
- Title: Multi-fidelity Monte Carlo: a pseudo-marginal approach
- Authors: Diana Cai and Ryan P. Adams
- Abstract summary: A key challenge in applying MCMC to scientific domains is computation: the target density is often a function of expensive computations.
Multi-fidelity MCMC algorithms combine models of varying fidelities in order to obtain an approximate target density.
We take a pseudo-marginal MCMC approach for multi-fidelity inference that utilizes a cheaper, randomized-fidelity unbiased estimator.
- Score: 21.05263506153674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Markov chain Monte Carlo (MCMC) is an established approach for uncertainty
quantification and propagation in scientific applications. A key challenge in
applying MCMC to scientific domains is computation: the target density of
interest is often a function of expensive computations, such as a high-fidelity
physical simulation, an intractable integral, or a slowly-converging iterative
algorithm. Thus, using an MCMC algorithm with an expensive target density
becomes impractical, as these expensive computations need to be evaluated at
each iteration of the algorithm. In practice, these computations are often
approximated via a cheaper, low-fidelity computation, leading to bias in the
resulting target density. Multi-fidelity MCMC algorithms combine models of
varying fidelities in order to obtain an approximate target density with lower
computational cost. In this paper, we describe a class of asymptotically exact
multi-fidelity MCMC algorithms for the setting where a sequence of models of
increasing fidelity can be computed that approximates the expensive target
density of interest. We take a pseudo-marginal MCMC approach for multi-fidelity
inference that utilizes a cheaper, randomized-fidelity unbiased estimator of
the target fidelity constructed via random truncation of a telescoping series
of the low-fidelity sequence of models. Finally, we discuss and evaluate the
proposed multi-fidelity MCMC approach on several applications, including
log-Gaussian Cox process modeling, Bayesian ODE system identification,
PDE-constrained optimization, and Gaussian process regression parameter
inference.
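To make the random-truncation construction concrete, here is a minimal sketch (not the authors' code) of a single-term unbiased estimator plugged into pseudo-marginal Metropolis-Hastings. The `fidelity` function, the geometric truncation distribution, and all parameter values are hypothetical stand-ins chosen so the example runs end to end.

```python
import numpy as np

def fidelity(k, x):
    # Hypothetical stand-in for a sequence of increasingly accurate
    # unnormalized densities: level k recovers exp(-x^2/2) as k -> infinity.
    return np.exp(-0.5 * x ** 2) * (1.0 - 2.0 ** -(k + 1))

def single_term_estimate(x, rng, q=0.5):
    """Unbiased single-term estimator of the limiting density, obtained by
    randomly truncating the telescoping series
        pi(x) = pi_0(x) + sum_{k>=1} [pi_k(x) - pi_{k-1}(x)]
    at a geometric level K and importance-weighting the surviving term."""
    k = rng.geometric(q)                 # P(K = k) = q (1 - q)^(k - 1), k >= 1
    p_k = q * (1.0 - q) ** (k - 1)
    return fidelity(0, x) + (fidelity(k, x) - fidelity(k - 1, x)) / p_k

def pseudo_marginal_mh(x0, n_iters=5000, step=1.0, seed=0):
    """Pseudo-marginal Metropolis-Hastings: the density estimate at the
    current state is recycled, never recomputed, which preserves the
    correct stationary distribution despite the noisy estimates."""
    rng = np.random.default_rng(seed)
    x, dens = x0, single_term_estimate(x0, rng)
    samples = np.empty(n_iters)
    for i in range(n_iters):
        x_prop = x + step * rng.normal()            # symmetric proposal
        dens_prop = single_term_estimate(x_prop, rng)
        if rng.uniform() < dens_prop / dens:        # noisy MH ratio
            x, dens = x_prop, dens_prop
        samples[i] = x
    return samples

samples = pseudo_marginal_mh(x0=0.0)  # should resemble a standard normal
```

For this toy `fidelity`, every telescoping difference is positive, so the estimate is always nonnegative; in general the randomly truncated estimator can be signed, a complication the paper addresses and that plain pseudo-marginal MH does not tolerate.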
Related papers
- Accelerating Multilevel Markov Chain Monte Carlo Using Machine Learning Models [0.0]
We present an efficient approach for accelerating multilevel Markov Chain Monte Carlo (MCMC) sampling for large-scale problems.
We use low-fidelity machine learning models for inexpensive evaluation of proposed samples.
Our technique is demonstrated on a standard benchmark inference problem in groundwater flow.
arXiv Detail & Related papers (2024-05-18T05:13:11Z) - Multi-fidelity Hamiltonian Monte Carlo [1.86413150130483]
We propose a novel two-stage Hamiltonian Monte Carlo algorithm with a surrogate model.
The acceptance probability is computed in the first stage via a standard HMC proposal using the surrogate model.
If the proposal is accepted, the posterior is evaluated in the second stage using the high-fidelity numerical solver (see the delayed-acceptance sketch after this list).
arXiv Detail & Related papers (2024-05-08T13:03:55Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Fast Doubly-Adaptive MCMC to Estimate the Gibbs Partition Function with
Weak Mixing Time Bounds [7.428782604099876]
A major obstacle to practical applications of Gibbs distributions is the need to estimate their partition functions.
We present a novel method for reducing the computational complexity of rigorously estimating the partition functions.
arXiv Detail & Related papers (2021-11-14T15:42:02Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of
Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - An adaptive Hessian approximated stochastic gradient MCMC method [12.93317525451798]
We present an adaptive Hessian-approximated stochastic gradient MCMC method to incorporate local geometric information while sampling from the posterior.
We adopt a magnitude-based weight pruning method to enforce the sparsity of the network.
arXiv Detail & Related papers (2020-10-03T16:22:15Z) - Gaussian Mixture Reduction with Composite Transportation Divergence [15.687740538194413]
We propose a novel optimization-based Gaussian mixture reduction (GMR) method based on composite transportation divergence (CTD).
We develop a majorization-minimization algorithm for computing the reduced mixture and establish its theoretical convergence.
Our unified framework empowers users to select the most appropriate cost function in CTD to achieve superior performance.
arXiv Detail & Related papers (2020-02-19T19:52:17Z)
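As a companion to the Multi-fidelity Hamiltonian Monte Carlo entry above, the sketch below shows the generic two-stage (delayed-acceptance) Metropolis screen that such methods rely on. It is an illustration under stated assumptions: a random-walk proposal stands in for the HMC proposal, and `log_surrogate` / `log_posterior` are hypothetical callables for the cheap surrogate and the expensive high-fidelity model.

```python
import numpy as np

def two_stage_mh(x0, log_surrogate, log_posterior, n_iters=1000, step=0.5, seed=0):
    """Generic delayed-acceptance MH: a cheap surrogate screens proposals so
    the expensive posterior is evaluated only for stage-one survivors."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    ls_x, lp_x = log_surrogate(x), log_posterior(x)
    samples = []
    for _ in range(n_iters):
        x_prop = x + step * rng.normal(size=x.shape)     # symmetric proposal
        ls_prop = log_surrogate(x_prop)
        # Stage 1: accept/reject using the cheap surrogate only.
        if np.log(rng.uniform()) < ls_prop - ls_x:
            lp_prop = log_posterior(x_prop)              # expensive call
            # Stage 2: correct for the surrogate's approximation error.
            if np.log(rng.uniform()) < (lp_prop - lp_x) - (ls_prop - ls_x):
                x, ls_x, lp_x = x_prop, ls_prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Hypothetical usage: the surrogate is a slightly biased version of the target.
chain = two_stage_mh(
    x0=np.zeros(2),
    log_surrogate=lambda x: -0.5 * 1.1 * np.sum(x ** 2),  # cheap, biased
    log_posterior=lambda x: -0.5 * np.sum(x ** 2),        # "expensive" truth
)
```

Because stage two multiplies the surrogate's acceptance ratio by a correction factor, the chain targets the exact posterior while the expensive model is evaluated only for proposals that survive the cheap screen.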