Efficient Minimax Optimal Global Optimization of Lipschitz Continuous
Multivariate Functions
- URL: http://arxiv.org/abs/2206.02383v1
- Date: Mon, 6 Jun 2022 06:28:38 GMT
- Title: Efficient Minimax Optimal Global Optimization of Lipschitz Continuous
Multivariate Functions
- Authors: Kaan Gokcesu, Hakan Gokcesu
- Abstract summary: We show that our algorithm achieves an average regret bound of $O(L\sqrt{n}T^{-\frac{1}{n}})$ over the time horizon $T$ for Lipschitz continuous functions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose an efficient minimax optimal global optimization
algorithm for multivariate Lipschitz continuous functions. To evaluate the
performance of our approach, we utilize the average regret instead of the
traditional simple regret, which, as we show, is not suitable for use in
multivariate non-convex optimization because of the inherent hardness of the
problem itself. Since we study the average regret of the algorithm, our results
directly imply a bound for the simple regret as well. Instead of constructing
lower bounding proxy functions, our method utilizes a predetermined query
creation rule, which makes it computationally superior to the Piyavskii-Shubert
variants. We show that our algorithm achieves an average regret bound of
$O(L\sqrt{n}T^{-\frac{1}{n}})$ for the optimization of an $n$-dimensional
$L$-Lipschitz continuous objective in a time horizon $T$, which we show to be
minimax optimal.
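To make the average-regret metric and the $O(L\sqrt{n}T^{-\frac{1}{n}})$ scale concrete, here is a minimal numerical sketch. It is not the paper's method: it runs a generic Lipschitz branch-and-bound on $[0,1]^n$ (splitting the box with the largest Lipschitz upper bound, i.e. a bounding-based scheme rather than the paper's proxy-free predetermined query rule) on an assumed toy objective, and prints the empirical average regret next to the $L\sqrt{n}T^{-1/n}$ rate. All names, the splitting rule, and the test objective are assumptions introduced for illustration.

    # Minimal sketch (NOT the paper's algorithm): generic Lipschitz
    # branch-and-bound on [0, 1]^n, used only to illustrate the
    # average-regret metric and the O(L sqrt(n) T^(-1/n)) scale.
    import heapq
    import math


    def half_diag(widths):
        """Half of the Euclidean diagonal of a box with the given side widths."""
        return 0.5 * math.sqrt(sum(w * w for w in widths))


    def lipschitz_maximize(f, L, n_dim, budget):
        """Return the query values f(x_1), ..., f(x_T) with T = budget.

        Repeatedly splits the box whose Lipschitz upper bound
        f(center) + L * half_diag(box) is largest, querying the centers of
        the two halves. Valid for any L-Lipschitz f (Euclidean norm).
        """
        lo, hi = [0.0] * n_dim, [1.0] * n_dim
        vals = [f([0.5] * n_dim)]
        counter = 0  # tie-breaker so the heap never compares lists
        heap = [(-(vals[0] + L * half_diag([1.0] * n_dim)), counter, lo, hi)]
        while len(vals) < budget:
            _, _, lo, hi = heapq.heappop(heap)
            widths = [b - a for a, b in zip(lo, hi)]
            axis = max(range(n_dim), key=lambda i: widths[i])
            mid = 0.5 * (lo[axis] + hi[axis])
            left_hi, right_lo = hi[:], lo[:]
            left_hi[axis], right_lo[axis] = mid, mid
            for b_lo, b_hi in ((lo, left_hi), (right_lo, hi)):
                if len(vals) >= budget:
                    break
                c = [0.5 * (a + b) for a, b in zip(b_lo, b_hi)]
                fc = f(c)
                vals.append(fc)
                counter += 1
                w = [b - a for a, b in zip(b_lo, b_hi)]
                heapq.heappush(heap, (-(fc + L * half_diag(w)), counter, b_lo, b_hi))
        return vals


    if __name__ == "__main__":
        # Toy 2-D objective: 1-Lipschitz, maximum value 0 at x_star (assumed).
        n, L = 2, 1.0
        x_star = (0.3, 0.7)
        f = lambda x: -math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_star)))

        for T in (50, 200, 800):
            vals = lipschitz_maximize(f, L, n, T)
            avg_regret = sum(0.0 - v for v in vals) / T        # f(x_star) = 0 here
            rate = L * math.sqrt(n) * T ** (-1.0 / n)          # minimax rate, up to constants
            print(f"T={T:4d}  average regret={avg_regret:.4f}  L*sqrt(n)*T^(-1/n)={rate:.4f}")

Note that this sketch only shows how average regret is measured and how it scales with $T$; it does not reproduce the computational advantage the paper claims over Piyavskii-Shubert variants, since it explicitly maintains bounding boxes.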
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions, where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Efficient Lipschitzian Global Optimization of Hölder Continuous
Multivariate Functions [0.0]
This study presents an effective global optimization technique designed for multivariate functions that are Hölder continuous.
We show that the algorithm attains an average regret bound of $O(T^{-\frac{\alpha}{n}})$ for optimizing a Hölder continuous target function with Hölder exponent $\alpha$ in an $n$-dimensional space within a given time horizon $T$; a short note after this list relates this rate to the Lipschitz bound above.
arXiv Detail & Related papers (2023-03-24T22:29:35Z) - A Near-Optimal Algorithm for Univariate Zeroth-Order Budget Convex
Optimization [4.608510640547952]
We prove near-optimal optimization error guarantees for Dy Search.
We show that the classical dependence on the global Lipschitz constant in the error bounds is an artifact of the granularity of the budget.
arXiv Detail & Related papers (2022-08-13T19:57:04Z) - Low Regret Binary Sampling Method for Efficient Global Optimization of
Univariate Functions [0.0]
We study the cumulative regret of the algorithm instead of the simple regret between our best query and the optimal value of the objective function.
Although our approach has regret results similar to those of the traditional lower-bounding algorithms, it has a major computational cost advantage.
arXiv Detail & Related papers (2022-01-18T18:11:48Z) - Cumulative Regret Analysis of the Piyavskii--Shubert Algorithm and Its
Variants for Global Optimization [0.0]
We study the problem of global optimization, where we analyze the performance of the Piyavskii--Shubert algorithm and its variants.
We show that our algorithm efficiently determines its queries and achieves nearly minimax optimal (up to log factors) cumulative regret.
arXiv Detail & Related papers (2021-08-24T17:36:33Z) - Stochastic Bias-Reduced Gradient Methods [44.35885731095432]
We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function, which yields low-bias gradient estimates of the Moreau-Yosida envelope.
arXiv Detail & Related papers (2021-06-17T13:33:05Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles), and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we consider the nonconvex-strongly-concave setting of the gradient-free minimax optimization problem.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^{2}/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
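A short sanity check on how the rates quoted above fit together (this restates only what the summaries state): the Hölder-continuity bound specializes to the Lipschitz rate at $\alpha = 1$,
\[
O\!\left(T^{-\frac{\alpha}{n}}\right)\Big|_{\alpha = 1} = O\!\left(T^{-\frac{1}{n}}\right),
\]
which matches the $T$-dependence of the $O(L\sqrt{n}T^{-\frac{1}{n}})$ average regret bound for $n$-dimensional $L$-Lipschitz objectives.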