Efficient Lipschitzian Global Optimization of H\"older Continuous
Multivariate Functions
- URL: http://arxiv.org/abs/2303.14293v1
- Date: Fri, 24 Mar 2023 22:29:35 GMT
- Title: Efficient Lipschitzian Global Optimization of H\"older Continuous
Multivariate Functions
- Authors: Kaan Gokcesu, Hakan Gokcesu
- Abstract summary: This study presents an effective global optimization technique designed for multivariate functions that are H\"older continuous.
We show that the algorithm attains an average regret bound of $O(T^{-\frac{\alpha}{n}})$ for optimizing a H\"older continuous target function with H\"older exponent $\alpha$ in an $n$-dimensional space within a given time horizon $T$.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study presents an effective global optimization technique designed for
multivariate functions that are H\"older continuous. Unlike traditional methods
that construct lower bounding proxy functions, this algorithm employs a
predetermined query creation rule that makes it computationally superior. The
algorithm's performance is assessed using the average or cumulative regret,
which also implies a bound for the simple regret and reflects the overall
effectiveness of the approach. The results show that with appropriate
parameters the algorithm attains an average regret bound of
$O(T^{-\frac{\alpha}{n}})$ for optimizing a H\"older continuous target function
with H\"older exponent $\alpha$ in an $n$-dimensional space within a given time
horizon $T$. We demonstrate that this bound is minimax optimal.
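
To make the setup concrete, below is a minimal, hypothetical sketch of a predetermined query rule: evaluating the objective on uniform grids of increasing resolution over $[0,1]^n$. This is an illustration of the general idea, not the authors' algorithm; all function names are ours. Note also that the simple regret (the gap of the best query) is always at most the average regret, since the minimum of the per-query gaps cannot exceed their mean.

```python
import itertools
import numpy as np

# Minimal illustrative sketch -- NOT the paper's algorithm. It shows one
# predetermined (fixed-in-advance) query rule: evaluate f on uniform grids
# of increasing resolution over [0, 1]^n. If f is Holder continuous with
# exponent alpha and constant L, a grid of T points has mesh ~ T^(-1/n),
# so the best query is within ~ L * T^(-alpha/n) of the optimum. This only
# bounds the simple regret; the paper's query rule attains this rate for
# the (stronger) average regret.

def grid_queries(n, T):
    """Yield up to T points from uniform grids of increasing resolution."""
    count, level = 0, 1
    while count < T:
        ticks = np.linspace(0.0, 1.0, level + 1)
        for point in itertools.product(ticks, repeat=n):
            if count == T:
                return
            yield np.array(point)
            count += 1
        level *= 2  # halve the mesh; re-querying old points only costs a constant factor

def minimize(f, n, T):
    """Minimize f over [0, 1]^n with the predetermined query sequence."""
    best_x, best_val = None, np.inf
    for x in grid_queries(n, T):
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    # Holder continuous test function (alpha = 1/2, minimum 0 at (0.3, 0.7)),
    # so the printed value is exactly the simple regret of the best query.
    target = np.array([0.3, 0.7])
    f = lambda x: np.sqrt(np.abs(x - target).sum())
    print(minimize(f, n=2, T=4096))
```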
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - On Convergence of Incremental Gradient for Non-Convex Smooth Functions
In machine learning and network optimization, algorithms like shuffle SGD are popular because they minimize the number of cache misses and have good cache locality.
This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering.
arXiv Detail & Related papers (2023-05-30T17:47:27Z) - Provably Efficient Reinforcement Learning via Surprise Bound [66.15308700413814]
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Efficient Minimax Optimal Global Optimization of Lipschitz Continuous
Multivariate Functions
We show that our algorithm achieves an average regret bound of $O(L\sqrt{n}T^{-\frac{1}{n}})$ over the time horizon $T$ for Lipschitz continuous functions, i.e., the $\alpha = 1$ special case of the bound above.
arXiv Detail & Related papers (2022-06-06T06:28:38Z) - Low Regret Binary Sampling Method for Efficient Global Optimization of
Univariate Functions
We study the cumulative regret of the algorithm instead of the simple regret between our best query and the optimal value of the objective function.
Although our approach has similar regret results with the traditional lower-bounding algorithms, it has a major computational cost advantage.
arXiv Detail & Related papers (2022-01-18T18:11:48Z) - Ada-BKB: Scalable Gaussian Process Optimization on Continuous Domain by
Adaptive Discretization
Algorithms such as GP-UCB have prohibitive computational complexity.
Ada-BKB is a no-regret Gaussian process optimization algorithm for functions on continuous domains, addressing this problem via adaptive discretization.
arXiv Detail & Related papers (2021-06-16T07:55:45Z) - Parameter-free Stochastic Optimization of Variationally Coherent
Functions
We design and analyze an algorithm for first-order optimization of a class of functions on $\mathbb{R}^d$.
It is the first algorithm to achieve both of these properties at the same time.
arXiv Detail & Related papers (2021-01-30T15:05:34Z) - Finding Global Minima via Kernel Approximations
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - A Domain-Shrinking based Bayesian Optimization Algorithm with
Order-Optimal Regret Performance
This is the first GP-based algorithm with an order-optimal regret guarantee.
Compared with the prevailing GP-UCB family of algorithms, the proposed algorithm reduces computational complexity by a factor of $O(T^{2d-1})$.
arXiv Detail & Related papers (2020-10-27T02:15:15Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm (a sketch of the two-point gradient estimator underlying such methods appears after this list).
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
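
As referenced in the derivative-free optimization entry above, here is a minimal, self-contained sketch of the classical randomized two-point gradient estimator that such zero-order methods build on. It is a generic textbook construction under assumed smoothness, not the exact estimator from that paper, and all names in it are ours.

```python
import numpy as np

# Generic illustration -- not the paper's exact estimator. Zero-order methods
# replace the gradient with a randomized two-point estimate built from
# function values only:
#   g = (d / (2h)) * (f(x + h*u) - f(x - h*u)) * u,  u uniform on the sphere,
# an unbiased estimate of the gradient of a smoothed version of f.

def two_point_gradient(f, x, h, rng):
    """Estimate grad f(x) from two function evaluations."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random direction, uniform on the unit sphere
    return (d / (2.0 * h)) * (f(x + h * u) - f(x - h * u)) * u

def zero_order_descent(f, x0, steps=2000, lr=0.05, h=1e-3, seed=0):
    """Plain gradient descent driven by the two-point estimate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x -= lr * two_point_gradient(f, x, h, rng)
    return x

if __name__ == "__main__":
    # Strongly convex quadratic with minimum at (1, -2); the iterates
    # approach the minimizer using only function evaluations.
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
    print(zero_order_descent(f, [5.0, 5.0]))
```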