Deterministic Langevin Unconstrained Optimization with Normalizing Flows
- URL: http://arxiv.org/abs/2310.00745v1
- Date: Sun, 1 Oct 2023 17:46:20 GMT
- Title: Deterministic Langevin Unconstrained Optimization with Normalizing Flows
- Authors: James M. Sullivan, Uros Seljak
- Abstract summary: We introduce a global, gradient-free surrogate optimization strategy for expensive black-box functions inspired by the Fokker-Planck and Langevin equations.
We demonstrate superior or competitive progress toward objective optima on standard synthetic test functions.
- Score: 3.988614978933934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a global, gradient-free surrogate optimization strategy for
expensive black-box functions inspired by the Fokker-Planck and Langevin
equations. These can be written as an optimization problem where the objective
is the target function to maximize minus the logarithm of the current density
of evaluated samples. This objective balances exploitation of the target
objective with exploration of low-density regions. The method, Deterministic
Langevin Optimization (DLO), relies on a Normalizing Flow density estimate to
perform active learning and select proposal points for evaluation. This
strategy differs qualitatively from the widely-used acquisition functions
employed by Bayesian Optimization methods, and can accommodate a range of
surrogate choices. We demonstrate superior or competitive progress toward
objective optima on standard synthetic test functions, as well as on non-convex
and multi-modal posteriors of moderate dimension. On real-world objectives,
such as scientific and neural network hyperparameter optimization, DLO is
competitive with state-of-the-art baselines.
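The acquisition rule described above is compact enough to sketch. The snippet below is a minimal illustration, not the authors' implementation: it scores candidates by a surrogate objective value minus the log-density of previously evaluated points, with scikit-learn's KernelDensity standing in for the Normalizing Flow density estimate; the toy surrogate and candidate pool are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def dlo_score(candidates, surrogate, evaluated_x, bandwidth=0.5):
    """Deterministic-Langevin-style acquisition (illustrative sketch).

    Scores each candidate by surrogate(x) - log q(x), where q is a
    density estimate of points evaluated so far: high predicted
    objective (exploitation) is balanced against low sample density
    (exploration). A kernel density estimate stands in for the
    paper's Normalizing Flow.
    """
    kde = KernelDensity(bandwidth=bandwidth).fit(evaluated_x)
    log_q = kde.score_samples(candidates)   # log-density at candidates
    return surrogate(candidates) - log_q

# Toy usage: propose the next point for maximizing a 2D objective.
rng = np.random.default_rng(0)
evaluated = rng.uniform(-2, 2, size=(20, 2))     # points queried so far
surrogate = lambda X: -np.sum(X**2, axis=1)      # pretend surrogate fit
proposals = rng.uniform(-2, 2, size=(500, 2))    # candidate pool
next_x = proposals[np.argmax(dlo_score(proposals, surrogate, evaluated))]
```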
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
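The exact discovered loss is defined in that paper; the sketch below only illustrates the general idea of adaptively blending the two losses. The sigmoid gate and its temperature are assumptions for illustration, not the DiscoPOP formula.

```python
import torch

def blended_preference_loss(margin: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Illustrative blend of logistic and exponential preference losses.

    `margin` is the (scaled) log-ratio favoring the chosen response over
    the rejected one. A sigmoid gate on the margin interpolates between
    the logistic (DPO-style) and exponential losses; the gate form and
    temperature `tau` are illustrative assumptions.
    """
    logistic = torch.nn.functional.softplus(-margin)  # -log(sigmoid(margin))
    exponential = torch.exp(-margin)
    gate = torch.sigmoid(margin / tau)
    return gate * logistic + (1.0 - gate) * exponential
```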
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- MONGOOSE: Path-wise Smooth Bayesian Optimisation via Meta-learning [29.97648417539237]
A primary contributor to the cost of evaluating black-box objective functions is often the effort required to prepare the system for measurement.
We consider a common scenario where preparation costs grow as the distance between successive evaluations increases.
Our algorithm, MONGOOSE, uses a meta-learnt parametric policy to generate smooth optimisation trajectories.
arXiv Detail & Related papers (2023-02-22T18:20:36Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
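Sign-based ZO gradient descent admits a compact sketch. The version below is a hedged illustration rather than the benchmarked implementation: it estimates the gradient from random Gaussian finite differences and updates with only its sign; parameter names and defaults are assumptions.

```python
import numpy as np

def zo_sign_gd_step(f, x, step=0.01, mu=1e-3, n_dirs=20, rng=None):
    """One zeroth-order sign-based gradient descent step (sketch).

    Approximates the gradient of f at x with `n_dirs` random Gaussian
    finite differences of width `mu`, then moves along the elementwise
    sign of the estimate (here: descent, i.e. minimizing f).
    """
    if rng is None:
        rng = np.random.default_rng()
    grad_est = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        grad_est += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    grad_est /= n_dirs
    return x - step * np.sign(grad_est)
```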
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Bayesian Optimization with Informative Covariance [13.113313427848828]
We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space.
We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
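One standard way to build such a nonstationary covariance, offered as a hedged sketch rather than that paper's exact construction, is to modulate a stationary base kernel with an input-dependent amplitude that up-weights a preferred region; the anchor point and amplitude form below are illustrative assumptions.

```python
import numpy as np

def informative_kernel(X1, X2, anchor, lengthscale=1.0, rho=2.0):
    """Nonstationary covariance sketch: k(x, x') = s(x) s(x') k_rbf(x, x').

    The amplitude s(.) grows near a user-preferred `anchor`, encoding a
    prior preference for that region of the search space; scaling a valid
    kernel by s(x) s(x') preserves positive semidefiniteness.
    """
    def s(X):  # input-dependent signal amplitude
        return 1.0 + rho * np.exp(-0.5 * np.sum((X - anchor) ** 2, axis=1))
    sq_dists = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    k_rbf = np.exp(-0.5 * sq_dists / lengthscale**2)
    return s(X1)[:, None] * s(X2)[None, :] * k_rbf
```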
arXiv Detail & Related papers (2022-08-04T15:05:11Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work on automating machine learning algorithms has highlighted the importance of model choice.
Addressing analytical tractability and computational feasibility in a principled fashion is necessary to ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work concerns zeroth-order (ZO) optimization, which does not require first-order gradient information and relies only on function evaluations.
We show that with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- On the Global Optimality of Model-Agnostic Meta-Learning [133.16370011229776]
Model-agnostic meta-learning (MAML) formulates meta-learning as a bilevel optimization problem, where the inner level solves each subtask based on a shared prior.
We characterize the optimality of the stationary points attained by MAML for both reinforcement learning and supervised learning, where the inner-level and outer-level problems are solved via first-order optimization methods.
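For reference, the standard one-step MAML objective makes this bilevel structure explicit (task losses $\mathcal{L}_i$, inner step size $\alpha$):

```latex
% Outer level: optimize the shared prior theta over n tasks.
% Inner level: adapt theta to task i with one gradient step.
\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n}
  \mathcal{L}_i\bigl(\theta - \alpha \nabla_{\theta} \mathcal{L}_i(\theta)\bigr)
```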
arXiv Detail & Related papers (2020-06-23T17:33:14Z)
- An adaptive stochastic gradient-free approach for high-dimensional blackbox optimization [0.0]
We propose an adaptive stochastic gradient-free (ASGF) approach for high-dimensional non-smooth optimization problems.
We illustrate the performance of this method on benchmark global optimization problems and learning tasks.
arXiv Detail & Related papers (2020-06-18T22:47:58Z)
- Composition of kernel and acquisition functions for High Dimensional Bayesian Optimization [0.1749935196721634]
We exploit the additivity of the objective function in designing both the kernel and the acquisition function of the Bayesian Optimization.
This approach makes the learning/updating of the probabilistic surrogate model more efficient.
Results are presented for a real-life application, namely the control of pumps in urban water distribution systems.
arXiv Detail & Related papers (2020-03-09T15:45:57Z)
- Mixed Strategies for Robust Optimization of Unknown Objectives [93.8672371143881]
We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.
We design a novel sample-efficient algorithm GP-MRO, which sequentially learns about the unknown objective from noisy point evaluations.
GP-MRO seeks to discover a robust and randomized mixed strategy, that maximizes the worst-case expected objective value.
arXiv Detail & Related papers (2020-02-28T09:28:17Z)
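As a hedged sketch of the mixed-strategy objective (not of the GP-MRO algorithm itself): on a finite design grid with a payoff matrix of objective values, a mixed strategy is scored by the worst case, over the uncertain parameter, of the expected payoff. The finite grid and names below are illustrative assumptions.

```python
import numpy as np

def worst_case_expected_value(payoff, p):
    """Worst-case expected objective of a mixed strategy (sketch).

    payoff[i, j] = f(x_i, theta_j): value of design x_i under parameter
    realization theta_j; `p` is a probability vector over designs.
    GP-MRO seeks p maximizing min_j sum_i p[i] * payoff[i, j].
    """
    expected_per_theta = p @ payoff    # E_{x~p}[f(x, theta_j)] for each j
    return expected_per_theta.min()    # adversarial worst-case parameter

# A mixed strategy can strictly beat every pure one on this toy payoff.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
print(worst_case_expected_value(payoff, np.array([1.0, 0.0])))  # 0.0
print(worst_case_expected_value(payoff, np.array([0.5, 0.5])))  # 0.5
```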