Minimisation of Quasar-Convex Functions Using Random Zeroth-Order Oracles
- URL: http://arxiv.org/abs/2505.02281v1
- Date: Sun, 04 May 2025 22:43:57 GMT
- Title: Minimisation of Quasar-Convex Functions Using Random Zeroth-Order Oracles
- Authors: Amir Ali Farzin, Yuen-Man Pun, Iman Shames
- Abstract summary: We show the performance of a random Gaussian smoothing zeroth-order (ZO) scheme for minimising quasar-convex (QC) and strongly quasar-convex (SQC) functions in both unconstrained and constrained settings. Theoretical findings are illustrated through investigating the performance of the algorithm applied to a range of problems in machine learning.
- Score: 1.712317731332484
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This study explores the performance of a random Gaussian smoothing zeroth-order (ZO) scheme for minimising quasar-convex (QC) and strongly quasar-convex (SQC) functions in both unconstrained and constrained settings. For the unconstrained problem, we establish the ZO algorithm's convergence to a global minimum along with its complexity when applied to both QC and SQC functions. For the constrained problem, we introduce the new notion of proximal-quasar-convexity and prove analogous results to the unconstrained case. Specifically, we show the complexity bounds and the convergence of the algorithm to a neighbourhood of a global minimum whose size can be controlled under a variance reduction scheme. Theoretical findings are illustrated through investigating the performance of the algorithm applied to a range of problems in machine learning and optimisation. Specifically, we observe scenarios where the ZO method outperforms gradient descent. We provide a possible explanation for this phenomenon.
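The abstract does not spell out the estimator, but the scheme it describes rests on a standard random Gaussian smoothing ZO gradient approximation. The minimal Python sketch below illustrates that idea under stated assumptions; the names (zo_gradient, zo_descent) and the parameter choices (mu, num_samples, step) are illustrative placeholders, not the authors' implementation.

    import numpy as np

    def zo_gradient(f, x, mu=1e-3, num_samples=20, rng=None):
        """Two-point Gaussian-smoothing gradient estimate (illustrative sketch).

        Approximates the gradient of the smoothed surrogate
        f_mu(x) = E_u[ f(x + mu * u) ],  u ~ N(0, I),
        by finite differences along random Gaussian directions.
        """
        rng = np.random.default_rng() if rng is None else rng
        g = np.zeros_like(x)
        for _ in range(num_samples):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x)) / mu * u
        return g / num_samples

    def zo_descent(f, x0, step=0.1, iters=500, mu=1e-3):
        """Plain descent loop driven by the ZO gradient estimate."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = x - step * zo_gradient(f, x, mu=mu)
        return x

    # Toy usage on a smooth strongly convex (hence strongly quasar-convex) function.
    f = lambda x: 0.5 * np.sum(x ** 2)
    x_hat = zo_descent(f, x0=np.ones(5))

The smoothing radius mu trades off bias against variance of the estimate; averaging over several random directions per iteration is one simple way to reduce that variance.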
Related papers
- Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex. Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs. We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
arXiv Detail & Related papers (2025-04-21T17:15:48Z) - Min-Max Optimisation for Nonconvex-Nonconcave Functions Using a Random Zeroth-Order Extragradient Algorithm [1.8783693815869542]
We consider both unconstrained and constrained, differentiable and non-differentiable settings. For the unconstrained problem, we establish the convergence of the ZO-EG algorithm to the neighbourhood of an $\epsilon$-stationary point of the NC-NC objective function. For the non-differentiable case, we prove the convergence of the ZO-EG algorithm to a neighbourhood of an $\epsilon$-stationary point of the smoothed version of the objective function. (A minimal sketch of a ZO extragradient update is given after this list.)
arXiv Detail & Related papers (2025-04-10T02:15:30Z) - New Lower Bounds for Stochastic Non-Convex Optimization through Divergence Composition [11.530542389959347]
We present fundamental limits of first-order stochastic optimization in a range of nonconvex settings, including Quasar-Convexity (QC), Quadratic Growth (QG), and Restricted Secant Inequalities (RSI).
arXiv Detail & Related papers (2025-02-19T19:21:00Z) - Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Minimisation of Polyak-Łojasiewicz Functions Using Random Zeroth-Order Oracles [1.5101132008238316]
The framework is based on exploiting a random zeroth-order oracle to estimate the gradient of the objective function.
The convergence of the algorithm to a global minimum in the unconstrained case and to a neighbourhood of the global minimum in the constrained case is presented.
arXiv Detail & Related papers (2024-05-15T05:43:16Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Universal Gradient Descent Ascent Method for Nonconvex-Nonconcave
Minimax Optimization [22.392563619845212]
Nonconvex-nonconcave minimax optimization has received intense attention over the last decade due to its broad applications in machine learning.
We propose a novel, universally applicable single-loop algorithm, the doubly smoothed gradient descent ascent (DS-GDA) method, which balances the primal and dual updates.
Specifically, under the one-sided Kurdyka-Łojasiewicz (KL) condition with exponent $\theta\in(0,1)$, DS-GDA converges with an $\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$ iteration complexity.
arXiv Detail & Related papers (2022-12-26T00:28:07Z) - Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained
Optimization [25.73397307080647]
We present a new model consisting of general convex or nonconvex machine learning objectives with nonconvex sparsity-inducing constraints.
We propose a proximal point algorithm that solves a sequence of subproblems with gradually relaxed constraint levels.
We demonstrate the effectiveness of our approach on large-scale numerical problems.
arXiv Detail & Related papers (2020-10-23T05:24:05Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z) - Zeroth-Order Algorithms for Nonconvex Minimax Problems with Improved
Complexities [21.13934071954103]
We present a deterministic algorithm for minimax problems that are nonconvex in one variable and strongly concave in the other.
We show that, under the SGC assumption, the complexities of the algorithms match those of existing algorithms.
Results are presented in terms of the oracle complexity of ZO-GDMSA, and numerical experiments are presented to support the theoretical results.
arXiv Detail & Related papers (2020-01-22T00:05:14Z)
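As referenced in the ZO-EG entry above, the following minimal sketch (same assumptions and placeholder names as the earlier one) shows how a single extragradient step for a min-max problem can be driven by Gaussian-smoothing ZO gradient estimates. It illustrates the general zeroth-order extragradient technique only, not the specific ZO-EG algorithm analysed in that paper.

    import numpy as np

    def zo_grad(f, z, mu=1e-3, samples=20, rng=None):
        """Gaussian-smoothing gradient estimate of f at z (same estimator as above)."""
        rng = np.random.default_rng() if rng is None else rng
        g = np.zeros_like(z)
        for _ in range(samples):
            u = rng.standard_normal(z.shape)
            g += (f(z + mu * u) - f(z)) / mu * u
        return g / samples

    def zo_extragradient_step(phi, x, y, eta=0.05, mu=1e-3):
        """One extragradient step for min_x max_y phi(x, y) with ZO gradient estimates.

        An exploration step is taken at (x, y); the actual update then uses
        gradient estimates re-evaluated at the intermediate point.
        """
        gx = zo_grad(lambda v: phi(v, y), x, mu)
        gy = zo_grad(lambda v: phi(x, v), y, mu)
        x_half, y_half = x - eta * gx, y + eta * gy            # exploration step
        gx_h = zo_grad(lambda v: phi(v, y_half), x_half, mu)
        gy_h = zo_grad(lambda v: phi(x_half, v), y_half, mu)
        return x - eta * gx_h, y + eta * gy_h                  # extragradient update

    # Toy usage on a simple quadratic saddle problem.
    phi = lambda x, y: 0.5 * np.sum(x ** 2) + x @ y - 0.5 * np.sum(y ** 2)
    x, y = np.ones(3), -np.ones(3)
    for _ in range(200):
        x, y = zo_extragradient_step(phi, x, y)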