Low Regret Binary Sampling Method for Efficient Global Optimization of
Univariate Functions
- URL: http://arxiv.org/abs/2201.07164v1
- Date: Tue, 18 Jan 2022 18:11:48 GMT
- Title: Low Regret Binary Sampling Method for Efficient Global Optimization of
Univariate Functions
- Authors: Kaan Gokcesu, Hakan Gokcesu
- Abstract summary: We study the cumulative regret of the algorithm instead of the simple regret between our best query and the optimal value of the objective function.
Although our approach has regret results similar to those of traditional lower-bounding algorithms, it has a major computational cost advantage.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a computationally efficient algorithm for the
problem of global optimization of univariate loss functions. For the
performance evaluation, we study the cumulative regret of the algorithm instead
of the simple regret between our best query and the optimal value of the
objective function. Although our approach has regret results similar to those of
traditional lower-bounding algorithms such as the Piyavskii-Shubert method for
Lipschitz continuous or Lipschitz smooth functions, it has a major
computational cost advantage. In the Piyavskii-Shubert method, for certain types of
functions, the query points may be hard to determine (as they are solutions to
additional optimization problems). However, this issue is circumvented in our
binary sampling approach, where the sampling set is predetermined irrespective
of the function characteristics. For a search space of $[0,1]$, our approach
has at most $L\log (3T)$ and $2.25H$ regret for $L$-Lipschitz continuous and
$H$-Lipschitz smooth functions, respectively. We also analytically extend our
results to a broader class of functions covering more complex regularity
conditions.
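For concreteness, the two performance measures contrasted in the abstract can be written with the standard definitions, where $x^\ast$ is a global minimizer of $f$ and $x_1, \dots, x_T$ are the algorithm's queries:
$$R_T^{\text{cum}} = \sum_{t=1}^{T} \big( f(x_t) - f(x^\ast) \big), \qquad R_T^{\text{simple}} = \min_{1 \le t \le T} f(x_t) - f(x^\ast).$$
The cumulative regret penalizes every query, not just the best one, so a low cumulative regret certifies that the algorithm spends few evaluations far from the optimum.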
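The computational contrast between the two query rules is easy to sketch in code. Below is a minimal illustrative sketch, not the authors' implementation: the objective `f`, the Lipschitz constant `L`, and the specific interval-selection rule are assumptions made for illustration. It shows why each Piyavskii-Shubert query requires solving an auxiliary optimization (minimizing a piecewise-linear lower envelope), while a binary-sampling scheme only ever evaluates predetermined dyadic midpoints of $[0,1]$.

```python
import math

def f(x):
    """An illustrative L-Lipschitz objective to minimize on [0, 1]."""
    return math.sin(13 * x) * math.sin(27 * x) + 1.0

L = 40.0  # assumed Lipschitz constant for f (illustrative, |f'| <= 13 + 27)

def piyavskii_shubert(budget):
    """Piyavskii-Shubert: every new query minimizes the piecewise-linear
    lower envelope max_i (f(x_i) - L|x - x_i|), i.e. an auxiliary
    optimization problem is solved at each step."""
    pts = [(0.0, f(0.0)), (1.0, f(1.0))]
    for _ in range(budget - 2):
        pts.sort()
        best_x, best_lb = None, float("inf")
        for (xl, yl), (xr, yr) in zip(pts, pts[1:]):
            # Cone intersection between adjacent samples: candidate minimizer
            x_new = 0.5 * (xl + xr) + (yl - yr) / (2.0 * L)
            lb = 0.5 * (yl + yr) - 0.5 * L * (xr - xl)
            if lb < best_lb:
                best_x, best_lb = x_new, lb
        pts.append((best_x, f(best_x)))
    return min(y for _, y in pts)

def binary_sampling(rounds):
    """Binary-sampling sketch: every query is the midpoint of a dyadic
    interval, so the sampling set is fixed in advance regardless of f.
    Each round splits the most promising interval (smallest Lipschitz
    lower bound f(mid) - L * width / 2) into its two dyadic halves."""
    intervals = [(0.0, 1.0, f(0.5))]  # (left, right, value at midpoint)
    best = intervals[0][2]
    for _ in range(rounds):
        a, b, y = min(intervals, key=lambda t: t[2] - 0.5 * L * (t[1] - t[0]))
        intervals.remove((a, b, y))
        m = 0.5 * (a + b)
        for lo, hi in ((a, m), (m, b)):
            y_mid = f(0.5 * (lo + hi))  # a predetermined dyadic point
            best = min(best, y_mid)
            intervals.append((lo, hi, y_mid))
    return best

print("Piyavskii-Shubert best:", piyavskii_shubert(64))
print("Binary sampling best:  ", binary_sampling(32))
```

The point of the contrast: `piyavskii_shubert` must locate the minimizer of a lower envelope at every step, which for more complex regularity conditions has no closed form, whereas every point `binary_sampling` ever evaluates is a dyadic midpoint ($1/2$, $1/4$, $3/4$, $1/8$, ...) known before any function value is observed.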
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm has access only to noisy evaluations of the objective function at the points it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z) - Efficient Lipschitzian Global Optimization of Hölder Continuous
Multivariate Functions [0.0]
This study presents an effective global optimization technique designed for multivariate functions that are Hölder continuous.
We show that the algorithm attains an average regret bound of $O(T^{-\frac{\alpha}{n}})$ for optimizing a Hölder continuous target function with Hölder exponent $\alpha$ in an $n$-dimensional space within a given time horizon.
arXiv Detail & Related papers (2023-03-24T22:29:35Z) - Deterministic Nonsmooth Nonconvex Optimization [94.01526844386977]
We show that randomization is necessary to obtain a dimension-free algorithm.
Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Efficient Minimax Optimal Global Optimization of Lipschitz Continuous
Multivariate Functions [0.0]
We show that our algorithm achieves an average regret of $O(L\sqrt{n}\,T^{-\frac{1}{n}})$ over the time horizon $T$ for Lipschitz continuous functions.
arXiv Detail & Related papers (2022-06-06T06:28:38Z) - Cumulative Regret Analysis of the Piyavskii--Shubert Algorithm and Its
Variants for Global Optimization [0.0]
We study the problem of global optimization, where we analyze the performance of the Piyavskii--Shubert algorithm and its variants.
We show that our algorithm efficiently determines its queries and achieves nearly minimax optimal (up to logarithmic factors) cumulative regret.
arXiv Detail & Related papers (2021-08-24T17:36:33Z) - Ada-BKB: Scalable Gaussian Process Optimization on Continuous Domain by
Adaptive Discretization [21.859940486704264]
Algorithms such as GP-UCB have prohibitive computational complexity.
Ada-BKB is a no-regret Gaussian process optimization algorithm for functions on continuous domains, addressing this problem via adaptive discretization.
arXiv Detail & Related papers (2021-06-16T07:55:45Z) - Bayesian Algorithm Execution: Estimating Computable Properties of
Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to be approximated and finds its global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)