Semi-Empirical Objective Functions for MCMC Proposal Optimization
- URL: http://arxiv.org/abs/2106.02104v1
- Date: Thu, 3 Jun 2021 19:52:56 GMT
- Title: Semi-Empirical Objective Functions for MCMC Proposal Optimization
- Authors: Chris Cannella, Vahid Tarokh
- Abstract summary: We introduce and demonstrate a semi-empirical procedure for determining approximate objective functions suitable for optimizing arbitrarily parameterized proposal distributions.
We argue that Ab Initio objective functions are sufficiently robust to enable the confident optimization of MCMC proposal distributions parameterized by deep generative networks.
- Score: 31.189518729816474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce and demonstrate a semi-empirical procedure for determining
approximate objective functions suitable for optimizing arbitrarily
parameterized proposal distributions in MCMC methods. Our proposed Ab Initio
objective functions consist of the weighted combination of functions following
constraints on their global optima and of coordinate invariance that we argue
should be upheld by general measures of MCMC efficiency for use in proposal
optimization. The coefficients of Ab Initio objective functions are determined
so as to recover the optimal MCMC behavior prescribed by established
theoretical analysis for chosen reference problems. Our experimental results
demonstrate that Ab Initio objective functions maintain favorable performance
and preferable optimization behavior compared to existing objective functions
for MCMC optimization when optimizing highly expressive proposal distributions.
We argue that Ab Initio objective functions are sufficiently robust to enable
the confident optimization of MCMC proposal distributions parameterized by deep
generative networks that extend beyond the traditional limitations of
individual MCMC schemes.
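To make the calibration step concrete, here is a minimal sketch, not the paper's actual Ab Initio construction: two candidate efficiency terms (expected squared jump distance and a log-acceptance-rate term, both chosen here for illustration) are combined for a Gaussian random-walk proposal, and the weighting coefficient is fitted so that the combined objective is optimized at the classical scaling 2.38/sqrt(d) on a Gaussian reference target.

```python
import numpy as np

def rwm_stats(sigma, dim=10, n_steps=2000, seed=0):
    """Random-walk Metropolis on a standard Gaussian target; returns the
    empirical expected squared jump distance (ESJD) and acceptance rate."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    logp = lambda z: -0.5 * z @ z  # N(0, I) log-density up to a constant
    esjd, accepts = 0.0, 0
    for _ in range(n_steps):
        prop = x + sigma * rng.standard_normal(dim)
        if np.log(rng.uniform()) < logp(prop) - logp(x):
            esjd += np.sum((prop - x) ** 2)
            accepts += 1
            x = prop
    return esjd / n_steps, accepts / n_steps

dim = 10
sigmas = np.linspace(0.1, 2.5, 40)
stats = np.array([rwm_stats(s, dim) for s in sigmas])  # columns: ESJD, acc. rate

# Semi-empirical step: choose the weighting coefficient so the combined
# objective is maximized at the theoretically optimal scale 2.38/sqrt(dim)
# for Gaussian targets (the "reference problem" here).
target_sigma = 2.38 / np.sqrt(dim)
best_coeff = min(
    np.linspace(0.0, 2.0, 41),
    key=lambda c: abs(
        sigmas[np.argmax(stats[:, 0] + c * np.log(stats[:, 1] + 1e-12))]
        - target_sigma
    ),
)
print(f"calibrated coefficient: {best_coeff:.2f}")
```

Once calibrated on the reference problem, such an objective would then be applied to richer proposal families, which is the use case the abstract targets.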
Related papers
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
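For reference, a minimal sketch of the OWA aggregation this entry refers to (the weights and the convention of putting the largest weight on the worst outcome are illustrative assumptions); the sort over outcomes is what makes the objective nondifferentiable.

```python
import numpy as np

def owa(outcomes, weights):
    """Ordered Weighted Averaging: weights attach to ranks of the sorted
    outcomes, not to particular objectives."""
    return float(np.dot(weights, np.sort(outcomes)))  # ascending: worst first

# Fairness-oriented choice: nonincreasing weights emphasize the worst-off objective.
weights = np.array([0.5, 0.3, 0.2])
print(owa([3.0, 1.0, 2.0], weights))  # 0.5*1 + 0.3*2 + 0.2*3 = 1.7
```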
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Parameterized Convex Minorant for Objective Function Approximation in Amortized Optimization [0.897438370260135]
A parameterized convex minorant (PCM) method is proposed for approximating the objective function in amortized optimization.
In the proposed method, the objective function approximator is expressed as the sum of a PCM and a nonnegative gap function, so the approximator is bounded from below by the convex PCM.
The proposed objective approximator is a universal approximator, and the global minimizer of the PCM attains the global minimum of the objective function approximator.
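A toy one-dimensional instance of that decomposition, with hand-picked functions standing in for the learned networks of the actual method:

```python
import numpy as np

def pcm(x, a, c):
    """Parameterized convex minorant: here, a simple convex quadratic."""
    return a * (x - c) ** 2

def gap(x, c):
    """Nonnegative gap that vanishes at the PCM's minimizer x = c, so the
    approximator and the PCM share their global minimum there."""
    return np.abs(np.sin(3 * x)) * (x - c) ** 2

def objective_approx(x, a=1.0, c=0.5):
    """Objective approximator = convex PCM + nonnegative gap; it is bounded
    below by the PCM and minimized where the PCM is minimized."""
    return pcm(x, a, c) + gap(x, c)

xs = np.linspace(-2, 2, 1001)
print("approximator minimizer:", xs[np.argmin(objective_approx(xs))])  # ~0.5
```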
arXiv Detail & Related papers (2023-10-04T01:34:36Z)
- Deterministic Langevin Unconstrained Optimization with Normalizing Flows [3.988614978933934]
We introduce a global, gradient-free surrogate optimization strategy for expensive black-box functions inspired by the Fokker-Planck and Langevin equations.
We demonstrate superior or competitive progress toward objective optima on standard synthetic test functions.
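For orientation, a minimal sketch of plain annealed Langevin descent, the dynamics that motivate this line of work; the paper's method itself is gradient-free and uses a normalizing-flow surrogate, neither of which is reproduced here.

```python
import numpy as np

def langevin_minimize(grad, x0, steps=500, lr=0.05, temp0=1.0, seed=0):
    """Langevin descent with an annealed temperature: gradient steps plus
    shrinking noise, so iterates explore early and settle at a minimum."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(steps):
        temp = temp0 / (1 + t)  # annealing schedule (illustrative choice)
        x -= lr * grad(x) + np.sqrt(2 * lr * temp) * rng.standard_normal(x.shape)
    return x

# Toy quadratic-plus-ripple objective: f(x) = x^2 + 0.1*sin(5x).
grad = lambda x: 2 * x + 0.5 * np.cos(5 * x)
print(langevin_minimize(grad, x0=[3.0]))
```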
arXiv Detail & Related papers (2023-10-01T17:46:20Z)
- End-to-End Stochastic Optimization with Energy-Based Model [18.60842637575249]
Decision-focused learning (DFL) was recently proposed for stochastic optimization problems that involve unknown parameters.
We propose SO-EBM, a general and efficient DFL method for stochastic optimization using energy-based models.
arXiv Detail & Related papers (2022-11-25T00:14:12Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- A General Recipe for Likelihood-free Bayesian Optimization [115.82591413062546]
We propose likelihood-free BO (LFBO) to extend BO to a broader class of models and utilities.
LFBO directly models the acquisition function without having to separately perform inference with a probabilistic surrogate model.
We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem.
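A sketch of that reduction for the expected-improvement utility, using a logistic-regression classifier; the specific weighting scheme follows one reading of LFBO and should be treated as an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy dataset: inputs and observed objective values (maximum near x = 0).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = -(X[:, 0] ** 2) + 0.1 * rng.standard_normal(200)

tau = np.quantile(y, 0.8)                      # improvement threshold
labels = (y > tau).astype(int)                 # "improves on tau" vs. not
weights = np.where(labels == 1, y - tau, 1.0)  # utility-weight the positives

clf = LogisticRegression().fit(X, labels, sample_weight=weights)

# The classifier's probability plays the role of the acquisition function:
# query next where the model is most confident of improvement.
grid = np.linspace(-2, 2, 401).reshape(-1, 1)
print("next query point:", grid[np.argmax(clf.predict_proba(grid)[:, 1])])
```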
arXiv Detail & Related papers (2022-06-27T03:55:27Z)
- Learning Implicit Priors for Motion Optimization [105.11889448885226]
Energy-based Models (EBMs) represent expressive probability distributions.
We present the modeling and algorithmic choices required to adapt EBMs to motion optimization.
arXiv Detail & Related papers (2022-04-11T19:14:54Z)
- Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares [68.8204255655161]
We consider potentially non-convex optimization problems.
In this paper, we propose an algorithm that achieves close to optimal a priori computational guarantees.
arXiv Detail & Related papers (2022-04-11T09:37:04Z)
- Implicit Rate-Constrained Optimization of Non-decomposable Objectives [37.43791617018009]
We consider a family of constrained optimization problems arising in machine learning.
Our key idea is to formulate a rate-constrained optimization that expresses the threshold parameter as a function of the model parameters.
We show how the resulting optimization problem can be solved using standard gradient based methods.
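A rough sketch of the idea: the decision threshold is defined implicitly as a quantile of the model's scores, so the predict-positive rate stays fixed while the parameters move. The paper uses standard gradient-based methods via this implicit formulation; the zeroth-order update below is a stand-in to keep the sketch short, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.5 * rng.standard_normal(500)) > 0

def threshold(theta, rate=0.2):
    """Threshold as an implicit function of the model parameters: the score
    quantile at which the predict-positive rate equals `rate`."""
    return np.quantile(X @ theta, 1 - rate)

def precision_at_rate(theta, rate=0.2):
    """Precision among the fixed fraction of examples predicted positive."""
    preds = X @ theta > threshold(theta, rate)
    return y[preds].mean()

# Zeroth-order ascent; the threshold is recomputed from theta each step.
theta = rng.standard_normal(3)
eps, lr = 0.1, 0.5
for _ in range(200):
    d = rng.standard_normal(3)
    g = (precision_at_rate(theta + eps * d)
         - precision_at_rate(theta - eps * d)) / (2 * eps) * d
    theta += lr * g
print("precision@20%:", precision_at_rate(theta))
```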
arXiv Detail & Related papers (2021-07-23T00:04:39Z)
- Are we Forgetting about Compositional Optimisers in Bayesian Optimisation? [66.39551991177542]
Bayesian optimisation presents a sample-efficient methodology for global optimisation.
Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
arXiv Detail & Related papers (2020-12-15T12:18:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.