Local Bayesian optimization via maximizing probability of descent
- URL: http://arxiv.org/abs/2210.11662v1
- Date: Fri, 21 Oct 2022 01:13:14 GMT
- Title: Local Bayesian optimization via maximizing probability of descent
- Authors: Quan Nguyen, Kaiwen Wu, Jacob R. Gardner and Roman Garnett
- Abstract summary: Local optimization is a promising approach to expensive, high-dimensional black-box optimization.
We show that, surprisingly, the expected value of the gradient is not always the direction maximizing the probability of descent.
This observation inspires an elegant optimization scheme seeking to maximize the probability of descent while moving in the direction of most-probable descent.
- Score: 26.82385325186729
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Local optimization presents a promising approach to expensive,
high-dimensional black-box optimization by sidestepping the need to globally
explore the search space. For objective functions whose gradient cannot be
evaluated directly, Bayesian optimization offers one solution -- we construct a
probabilistic model of the objective, design a policy to learn about the
gradient at the current location, and use the resulting information to navigate
the objective landscape. Previous work has realized this scheme by minimizing
the variance in the estimate of the gradient, then moving in the direction of
the expected gradient. In this paper, we re-examine and refine this approach.
We demonstrate that, surprisingly, the expected value of the gradient is not
always the direction maximizing the probability of descent, and in fact, these
directions may be nearly orthogonal. This observation then inspires an elegant
optimization scheme seeking to maximize the probability of descent while moving
in the direction of most-probable descent. Experiments on both synthetic and
real-world objectives show that our method outperforms previous realizations of
this optimization scheme and is competitive against other, significantly more
complicated baselines.
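To make the core observation concrete, here is a minimal sketch, assuming the gradient at the current location has a Gaussian posterior N(mu, Sigma) (as under a Gaussian process model of the objective); the function name and the toy numbers are illustrative, not taken from the paper.
```python
import numpy as np
from scipy.stats import norm

def most_probable_descent_direction(mu, Sigma):
    """Given a Gaussian posterior N(mu, Sigma) over the objective's gradient,
    return the unit direction maximizing the probability of descent, together
    with that probability (illustrative sketch)."""
    # Along a unit direction v, P(descent) = P(grad^T v < 0)
    #                                      = Phi(-mu^T v / sqrt(v^T Sigma v)).
    # Maximizing over v gives v* proportional to -Sigma^{-1} mu.
    v = -np.linalg.solve(Sigma, mu)
    v /= np.linalg.norm(v)
    p = norm.cdf(-mu @ v / np.sqrt(v @ Sigma @ v))
    return v, p

# Toy example with anisotropic gradient uncertainty.
mu = np.array([1.0, 0.1])
Sigma = np.diag([4.0, 0.01])
v_mpd, p = most_probable_descent_direction(mu, Sigma)
v_eg = -mu / np.linalg.norm(mu)  # negative expected-gradient direction
print(v_mpd, p)                  # ~[-0.02, -1.00], descent probability ~0.87
print(v_eg)                      # ~[-1.00, -0.10]
```
In this toy case the two directions are nearly orthogonal: the expected gradient is dominated by its large but highly uncertain first component, while the most-probable-descent direction follows the small, well-determined second component.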
Related papers
- Neural Gradient Learning and Optimization for Oriented Point Normal
Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability of local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z) - High Probability Analysis for Non-Convex Stochastic Optimization with
Clipping [13.025261730510847]
Gradient clipping is a technique for dealing with heavy-tailed noise when training neural networks.
Most existing theoretical guarantees only provide an in-expectation analysis of the optimization performance.
Our analysis provides a relatively complete picture for the theoretical guarantee of optimization algorithms with gradient clipping.
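For context, a minimal generic sketch of the clipping operation itself (global-norm clipping; not the paper's specific algorithm or analysis):
```python
import numpy as np

def clip_by_global_norm(grad, max_norm=1.0):
    """Rescale the gradient so its Euclidean norm never exceeds max_norm;
    a standard way to tame heavy-tailed gradient noise (threshold is illustrative)."""
    g_norm = np.linalg.norm(grad)
    if g_norm > max_norm:
        grad = grad * (max_norm / g_norm)
    return grad

# A heavy-tailed gradient sample is rescaled; a well-behaved one passes through.
print(clip_by_global_norm(np.array([30.0, -40.0])))  # -> [ 0.6 -0.8]
print(clip_by_global_norm(np.array([0.3, -0.4])))    # -> [ 0.3 -0.4]
```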
arXiv Detail & Related papers (2023-07-25T17:36:56Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new swarm-based framework utilizing the underlying dynamical process of descent.
The biggest advantage of this approach is greater exploration around the current state before deciding the descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work is on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
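As a rough illustration of coordinate-wise ZO estimation with importance sampling (the names, sampling scheme, and unbiasedness correction are assumptions for this sketch, not the paper's design):
```python
import numpy as np

def zo_gradient_estimate(f, x, probs, n_samples=10, h=1e-3, rng=None):
    """Estimate the gradient of a black-box f at x from function queries alone:
    sample coordinates according to `probs`, take central finite differences,
    and importance-weight so the estimate matches the finite-difference
    gradient in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(n_samples):
        i = rng.choice(d, p=probs)  # important coordinates are queried more often
        e = np.zeros(d)
        e[i] = h
        g[i] += (f(x + e) - f(x - e)) / (2 * h) / (n_samples * probs[i])
    return g

# Example: a quadratic whose first coordinate matters more, so it is sampled more.
f = lambda x: 10 * x[0] ** 2 + x[1] ** 2
print(zo_gradient_estimate(f, np.array([1.0, 1.0]), probs=np.array([0.8, 0.2])))
# roughly [20, 2] on average; each call costs 2 * n_samples function queries
```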
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z) - Channel-Directed Gradients for Optimization of Convolutional Neural
Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z) - An adaptive stochastic gradient-free approach for high-dimensional
blackbox optimization [0.0]
We propose an adaptive stochastic gradient-free (ASGF) approach for high-dimensional non-smooth problems.
We illustrate the performance of this method on benchmark global optimization problems and learning tasks.
arXiv Detail & Related papers (2020-06-18T22:47:58Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm goes through several low-function-value regions before reaching the optimum of the function.
One approach to alleviate this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
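A minimal sketch of the warping idea in one dimension, assuming a Gaussian prior over the optimum's location (the names and numbers are illustrative, not the paper's construction):
```python
import numpy as np
from scipy.stats import norm

def warp(u, prior_mean, prior_std, lo, hi):
    """Map points u in [0, 1] into the search interval [lo, hi] through the
    inverse CDF of the prior over the optimum, so uniform exploration in the
    warped coordinates concentrates where the prior expects the optimum."""
    x = norm.ppf(u, loc=prior_mean, scale=prior_std)
    return np.clip(x, lo, hi)

# Evenly spaced points in the unit interval cluster around the prior mean (2.0).
u = np.linspace(0.05, 0.95, 9)
print(np.round(warp(u, prior_mean=2.0, prior_std=0.5, lo=0.0, hi=5.0), 2))
```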