Efficient First-order Methods for Convex Optimization with Strongly
Convex Function Constraints
- URL: http://arxiv.org/abs/2212.11143v3
- Date: Mon, 6 Nov 2023 02:41:08 GMT
- Title: Efficient First-order Methods for Convex Optimization with Strongly
Convex Function Constraints
- Authors: Zhenwei Lin, Qi Deng
- Abstract summary: We show how to minimize a convex function subject to strongly convex function constraints.
A restarted version of the proposed methods can identify the sparsity pattern of the optimal solution within a finite number of steps, a result that appears to have independent significance.
- Score: 3.667453772837954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce faster first-order primal-dual algorithms for
minimizing a convex function subject to strongly convex function constraints.
Before our work, the best complexity bound was $\mathcal{O}(1/{\varepsilon})$,
and it remains unclear how to improve this result by leveraging the strong
convexity assumption. We address this issue by developing novel techniques to
progressively estimate the strong convexity of the Lagrangian function. Our
approach yields an improved complexity of $\mathcal{O}(1/\sqrt{\varepsilon})$,
matching the complexity lower bound for strongly-convex-concave saddle point
optimization. We show the superior performance of our methods in
sparsity-inducing constrained optimization, notably Google's personalized
PageRank problem. Furthermore, we show that a restarted version of the proposed
methods can effectively identify the sparsity pattern of the optimal solution
within a finite number of steps, a result that appears to have independent
significance.
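To make the problem class concrete, the following is a minimal sketch of a plain primal-dual gradient method (gradient descent on the Lagrangian combined with projected dual ascent) applied to a toy instance with a single strongly convex constraint. It is not the accelerated method proposed in the paper and does not attain the $\mathcal{O}(1/\sqrt{\varepsilon})$ rate; the least-squares objective, the quadratic constraint level, the step sizes, and the iteration budget are illustrative assumptions.
```python
# Minimal sketch (illustrative only; NOT the accelerated algorithm from the paper):
# a plain primal-dual gradient method for
#     min_x f(x)   subject to   g(x) <= 0,
# where f is convex and g is strongly convex. Problem data, the constraint
# level tau, step sizes, and the iteration budget are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 50
A = rng.standard_normal((m, n))          # data for a least-squares objective
b = rng.standard_normal(m)

def f(x):                                # convex objective
    return 0.5 * np.sum((A @ x - b) ** 2)

def grad_f(x):
    return A.T @ (A @ x - b)

tau = 0.5                                # assumed constraint level

def g(x):                                # strongly convex constraint: ||x||^2 - tau <= 0
    return x @ x - tau

def grad_g(x):
    return 2.0 * x

x = np.zeros(n)
lam = 0.0                                # Lagrange multiplier for g(x) <= 0
eta_x, eta_lam = 1e-3, 1e-2              # assumed step sizes

for _ in range(20000):
    # primal descent step on the Lagrangian L(x, lam) = f(x) + lam * g(x)
    x = x - eta_x * (grad_f(x) + lam * grad_g(x))
    # dual ascent step, projected onto lam >= 0
    lam = max(0.0, lam + eta_lam * g(x))

print(f"f(x) = {f(x):.4f}, g(x) = {g(x):.4f}, lambda = {lam:.4f}")
```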
Related papers
- On Linear Convergence in Smooth Convex-Concave Bilinearly-Coupled Saddle-Point Optimization: Lower Bounds and Optimal Algorithms [17.227158587717934]
We revisit the smooth convex-concave bilinearly-coupled saddle-point problem of the form $\min_x\max_y\, f(x) + \langle y, \mathbf{B}x\rangle - g(y)$ (a toy numerical sketch of this saddle-point form appears after this list).
We develop the first lower complexity bounds and matching optimal linearly converging algorithms for this problem class.
arXiv Detail & Related papers (2024-11-21T22:06:25Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization [16.709026203727007]
We present a novel bilevel optimization method that locally approximates the solution set of the lower-level problem.
We measure the performance of our method in terms of suboptimality and infeasibility errors.
arXiv Detail & Related papers (2024-02-12T22:34:53Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
The proposed method outperforms an existing method that achieved the previously best-known complexity result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Efficient Algorithms for High-Dimensional Convex Subspace Optimization via Strict Complementarity [19.24470467199451]
We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k \ll n$, which minimizes a convex and smooth loss.
While this problem is highly relevant in high-dimensional regimes, it is difficult to find a global optimal solution.
In this paper we present a natural relaxation for which convergence to the global optimum is straightforward.
arXiv Detail & Related papers (2022-02-08T17:36:43Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy, a dual regularizer, and Nesterov's accelerated gradient descent applied to the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
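As a toy illustration of the bilinearly coupled saddle-point form referenced in the first related paper above (an assumed setup, not the optimal linearly converging algorithm developed there), the sketch below runs plain simultaneous gradient descent-ascent on $\min_x\max_y\, f(x) + \langle y, \mathbf{B}x\rangle - g(y)$ with simple strongly convex quadratics $f$ and $g$; the coupling matrix, strong convexity parameters, and step size are illustrative assumptions.
```python
# Toy sketch (assumed setup, not the optimal algorithm from the cited paper):
# simultaneous gradient descent-ascent on the bilinearly coupled problem
#     min_x max_y  f(x) + <y, B x> - g(y)
# with f(x) = (mu_f/2)||x||^2 and g(y) = (mu_g/2)||y||^2.
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 15
B = rng.standard_normal((m, n))           # assumed coupling matrix

mu_f, mu_g = 1.0, 1.0                     # assumed strong convexity parameters
x, y = np.ones(n), np.ones(m)
eta = 0.01                                # assumed step size

for _ in range(2000):
    grad_x = mu_f * x + B.T @ y           # d/dx [ f(x) + <y, Bx> ]
    grad_y = B @ x - mu_g * y             # d/dy [ <y, Bx> - g(y) ]
    x, y = x - eta * grad_x, y + eta * grad_y

# The unique saddle point of this toy instance is (0, 0).
print(f"||x|| = {np.linalg.norm(x):.2e}, ||y|| = {np.linalg.norm(y):.2e}")
```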
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.