Faster Accelerated First-order Methods for Convex Optimization with Strongly Convex Function Constraints
- URL: http://arxiv.org/abs/2212.11143v4
- Date: Wed, 27 Nov 2024 04:54:18 GMT
- Title: Faster Accelerated First-order Methods for Convex Optimization with Strongly Convex Function Constraints
- Authors: Zhenwei Lin, Qi Deng
- Abstract summary: We introduce faster accelerated primal-dual algorithms for minimizing a convex function subject to strongly convex function constraints.
We show the superior performance of our methods in sparsity-inducing constrained optimization, notably Google's personalized PageRank problem.
- Score: 3.1044138971639734
- License:
- Abstract: In this paper, we introduce faster accelerated primal-dual algorithms for minimizing a convex function subject to strongly convex function constraints. Prior to our work, the best complexity bound was $\mathcal{O}(1/{\varepsilon})$, regardless of the strong convexity of the constraint function. It is unclear whether the strong convexity assumption can enable even better convergence results. To address this issue, we have developed novel techniques to progressively estimate the strong convexity of the Lagrangian function. Our approach, for the first time, effectively leverages the constraint strong convexity, obtaining an improved complexity of $\mathcal{O}(1/\sqrt{\varepsilon})$. This rate matches the complexity lower bound for strongly-convex-concave saddle point optimization and is therefore order-optimal. We show the superior performance of our methods in sparsity-inducing constrained optimization, notably Google's personalized PageRank problem. Furthermore, we show that a restarted version of the proposed methods can effectively identify the optimal solution's sparsity pattern within a finite number of steps, a result that appears to have independent significance.
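Below is a minimal, illustrative sketch of the problem class described in the abstract: minimizing a convex, sparsity-inducing objective subject to a strongly convex function constraint. It uses a plain primal-dual (Lagrangian) subgradient iteration rather than the paper's accelerated method; the problem data, constraint level, and step sizes are assumptions made purely for demonstration.

```python
import numpy as np

# Illustrative example (not the paper's algorithm):
#   minimize   f(x) = ||x||_1                                  (convex, sparsity-inducing)
#   subject to g(x) = 0.5*||A x - b||^2 - tau <= 0             (strongly convex when A has full column rank)
# solved with a plain primal-dual subgradient method on the Lagrangian L(x, lam) = f(x) + lam * g(x).

rng = np.random.default_rng(0)
m, n = 80, 50
A = rng.standard_normal((m, n))            # full column rank w.h.p. -> g is strongly convex
b = rng.standard_normal(m)
tau = 0.9 * 0.5 * np.linalg.norm(b) ** 2   # hypothetical constraint level; makes x = 0 infeasible

def g(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 - tau

def g_grad(x):
    return A.T @ (A @ x - b)

x, lam = np.zeros(n), 0.0
eta_x, eta_lam = 1e-3, 1e-2                # untuned step sizes

for _ in range(5000):
    # primal subgradient descent on L(., lam); np.sign(x) is a subgradient of ||x||_1
    x -= eta_x * (np.sign(x) + lam * g_grad(x))
    # dual ascent on L(x, .), projected onto lam >= 0
    lam = max(0.0, lam + eta_lam * g(x))

print(f"||x||_1 = {np.abs(x).sum():.3f},  g(x) = {g(x):.4f},  lam = {lam:.4f}")
```

A method of the kind analyzed in the paper would replace these plain steps with accelerated (momentum-based) primal-dual updates and exploit the strong convexity of $g$, which is what yields the improved $\mathcal{O}(1/\sqrt{\varepsilon})$ complexity; the sketch above only illustrates the constrained formulation and the role of the multiplier.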
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions when the algorithm only has access to noisy evaluations of the objective function at the points it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization [16.709026203727007]
We present a novel bilevel optimization method that locally approximates the solution set of the lower-level problem.
We measure the performance of our method in terms of suboptimality and infeasibility errors.
arXiv Detail & Related papers (2024-02-12T22:34:53Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of sparse variables.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Efficient Algorithms for High-Dimensional Convex Subspace Optimization
via Strict Complementarity [19.24470467199451]
We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k \ll n$, which minimizes a convex and smooth loss.
While this problem is highly relevant in high-dimensional regimes, it is difficult to find a globally optimal solution.
In this paper we present a natural relaxation for which convergence to the global optimum is straightforward.
arXiv Detail & Related papers (2022-02-08T17:36:43Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of constrained Markov decision processes (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent dual optimizer.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - On the Oracle Complexity of Higher-Order Smooth Non-Convex Finite-Sum
Optimization [1.6973426830397942]
We prove lower bounds for higher-order methods in smooth non-convex finite-sum optimization.
We show that a $p$th-order regularized method cannot profit from the finite-sum structure of the objective.
We show that a new second-order smoothness assumption can be seen as an analogue to the first-order mean-squared smoothness.
arXiv Detail & Related papers (2021-03-08T23:33:58Z) - Lower Bounds and Accelerated Algorithms for Bilevel Optimization [62.192297758346484]
Bilevel optimization has recently attracted growing interest due to its wide applications in modern machine learning problems.
Our results show that bilevel problems are more challenging than their minimax counterparts.
arXiv Detail & Related papers (2021-02-07T21:46:29Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recently increased interest in optimization algorithms for non-convex optimization, arising in deep networks and other problems in data analysis, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained
Optimization [25.73397307080647]
We present a new model of general convex or nonconvex machine learning objectives with sparse constraints.
We propose a proximal point algorithm that solves each subproblem with gradually relaxed constraint levels.
We demonstrate the effectiveness of our approach on large-scale numerical problems.
arXiv Detail & Related papers (2020-10-23T05:24:05Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
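As a rough illustration of the zeroth-order setting appearing in the entries above (optimization using only noisy evaluations of the objective), the sketch below combines a standard two-point randomized gradient estimator with projected gradient descent on a strongly convex quadratic. The quadratic, noise level, smoothing radius, and step-size schedule are assumptions made for this example and are not taken from either paper.

```python
import numpy as np

# Zeroth-order (derivative-free) sketch: only noisy values of F are observable.
rng = np.random.default_rng(1)
n = 20
Q = np.diag(np.linspace(1.0, 5.0, n))       # F(x) = 0.5 * x^T Q x, strongly convex, minimized at 0

def noisy_F(x, sigma=1e-3):
    return 0.5 * x @ Q @ x + sigma * rng.standard_normal()

def project_ball(x, radius=10.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def two_point_gradient(x, h=1e-2):
    # classic estimator: n/(2h) * (F(x + h u) - F(x - h u)) * u, with u uniform on the unit sphere
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    return (n / (2 * h)) * (noisy_F(x + h * u) - noisy_F(x - h * u)) * u

x = project_ball(rng.standard_normal(n) * 5.0)
for k in range(1, 5001):
    x = project_ball(x - (1.0 / k) * two_point_gradient(x))   # decaying step size (untuned)

print("distance to the minimizer:", np.linalg.norm(x))
```

Estimators that exploit higher-order smoothness typically add a kernel or weighting to such randomized finite differences to reduce bias; the plain two-point estimator above is the simplest variant.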