Regret analysis of the Piyavskii-Shubert algorithm for global Lipschitz
optimization
- URL: http://arxiv.org/abs/2002.02390v4
- Date: Mon, 4 Jul 2022 22:26:11 GMT
- Title: Regret analysis of the Piyavskii-Shubert algorithm for global Lipschitz
optimization
- Authors: Clément Bouttier, Tommaso Cesari (TSE), Mélanie Ducoffe,
Sébastien Gerchinovitz (IMT)
- Abstract summary: We study a natural algorithm designed originally by Piyavskii and Shubert in 1972.
We prove new bounds on the number of evaluations of the function needed to reach or certify a given optimization accuracy.
- Score: 1.6822770693792826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of maximizing a non-concave Lipschitz multivariate
function over a compact domain by sequentially querying its (possibly
perturbed) values. We study a natural algorithm designed originally by
Piyavskii and Shubert in 1972, for which we prove new bounds on the number of
evaluations of the function needed to reach or certify a given optimization
accuracy. Our analysis uses a bandit-optimization viewpoint and solves an open
problem from Hansen et al. (1991) by bounding the number of evaluations to
certify a given accuracy with a near-optimal sum of packing numbers.
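The Piyavskii–Shubert algorithm studied in the abstract can be illustrated in its classical one-dimensional form: given an $L$-Lipschitz function, it maintains a piecewise-linear "sawtooth" upper envelope built from past evaluations and always queries the point where that envelope peaks. The sketch below is a minimal illustration of this idea, not the paper's (multivariate, possibly noisy) setting; the function name and parameters are our own choices.

```python
import bisect
import math

def piyavskii_shubert(f, a, b, L, n_evals=30):
    """Maximize an L-Lipschitz function f on [a, b].

    Maintains evaluated points (x_i, y_i) and repeatedly queries the
    maximizer of the upper envelope F(x) = min_i [y_i + L * |x - x_i|].
    """
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_evals - 2):
        # Between adjacent points (x0, y0) and (x1, y1), the envelope
        # peaks at x* = (x0 + x1)/2 + (y1 - y0)/(2L), with peak value
        # (y0 + y1)/2 + L*(x1 - x0)/2.
        best_peak, best_val = None, -math.inf
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            peak = 0.5 * (x0 + x1) + (y1 - y0) / (2 * L)
            val = 0.5 * (y0 + y1) + 0.5 * L * (x1 - x0)
            if val > best_val:
                best_peak, best_val = peak, val
        # The gap between the envelope's peak and the best observed value
        # certifies the current optimization accuracy; stop when it is tiny.
        if best_val - max(ys) < 1e-12:
            break
        y_new = f(best_peak)
        i = bisect.bisect(xs, best_peak)  # keep xs sorted
        xs.insert(i, best_peak)
        ys.insert(i, y_new)
    i_best = max(range(len(xs)), key=lambda i: ys[i])
    return xs[i_best], ys[i_best]
```

For example, maximizing $f(x) = -(x - 0.3)^2$ on $[0, 1]$ with $L = 2$ quickly concentrates queries near $x = 0.3$; the number of evaluations needed to certify a given accuracy is exactly the quantity the paper bounds.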
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Certified Multi-Fidelity Zeroth-Order Optimization [4.450536872346658]
We consider the problem of multi-fidelity zeroth-order optimization, where one can evaluate a function $f$ at various approximation levels.
We propose a certified variant of the MFDOO algorithm and derive a bound on its cost complexity for any Lipschitz function $f$.
We also prove an $f$-dependent lower bound showing that this algorithm has a near-optimal cost complexity.
arXiv Detail & Related papers (2023-08-02T07:20:37Z) - Efficient Minimax Optimal Global Optimization of Lipschitz Continuous
Multivariate Functions [0.0]
We show that our algorithm achieves an average regret of $O(L\sqrt{n}\,T^{-\frac{1}{n}})$ over a horizon $T$ for Lipschitz continuous functions.
arXiv Detail & Related papers (2022-06-06T06:28:38Z) - Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z) - Non-Convex Optimization with Certificates and Fast Rates Through Kernel
Sums of Squares [68.8204255655161]
We consider potentially non-convex optimization problems.
In this paper, we propose an algorithm that achieves close to optimal a priori computational guarantees.
arXiv Detail & Related papers (2022-04-11T09:37:04Z) - Low Regret Binary Sampling Method for Efficient Global Optimization of
Univariate Functions [0.0]
We study the cumulative regret of the algorithm instead of the simple regret between our best query and the optimal value of the objective function.
Although our approach has regret results similar to those of traditional lower-bounding algorithms, it has a major computational cost advantage.
arXiv Detail & Related papers (2022-01-18T18:11:48Z) - Cumulative Regret Analysis of the Piyavskii--Shubert Algorithm and Its
Variants for Global Optimization [0.0]
We study the problem of global optimization, where we analyze the performance of the Piyavskii--Shubert algorithm and its variants.
We show that our algorithm efficiently determines its queries and achieves nearly minimax optimal (up to log factors) cumulative regret.
arXiv Detail & Related papers (2021-08-24T17:36:33Z) - Regret Bounds for Gaussian-Process Optimization in Large Domains [40.92207267407271]
We provide upper bounds on the suboptimality (Bayesian simple regret) of the solution found by optimization strategies.
These regret bounds illuminate the relationship between the number of evaluations, the domain size, and the optimality of the retrieved function value.
In particular, they show that even when the number of evaluations is far too small to find the global optimum, we can find nontrivial function values.
arXiv Detail & Related papers (2021-04-29T05:19:03Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by the recent increased interest in the analysis of optimization algorithms for non-convex problems in deep networks and other data-driven applications, we give an overview of recent theoretical results on non-convex optimization algorithms.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.