First-order Methods for Affinely Constrained Composite Non-convex
Non-smooth Problems: Lower Complexity Bound and Near-optimal Methods
- URL: http://arxiv.org/abs/2307.07605v1
- Date: Fri, 14 Jul 2023 19:59:18 GMT
- Title: First-order Methods for Affinely Constrained Composite Non-convex
Non-smooth Problems: Lower Complexity Bound and Near-optimal Methods
- Authors: Wei Liu, Qihang Lin, Yangyang Xu
- Abstract summary: We make the first attempt to establish lower complexity bounds of first-order methods (FOMs) for solving a class of composite non-convex non-smooth optimization problems with linear constraints.
We find that our lower complexity bounds and the proposed IPG method are almost non-improvable.
- Score: 23.948126336842634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many recent studies on first-order methods (FOMs) focus on \emph{composite
non-convex non-smooth} optimization with linear and/or nonlinear function
constraints. Upper (or worst-case) complexity bounds have been established for
these methods. However, little can be claimed about their optimality as no
lower bound is known, except for a few special \emph{smooth non-convex} cases.
In this paper, we make the first attempt to establish lower complexity bounds
of FOMs for solving a class of composite non-convex non-smooth optimization problems
with linear constraints. Assuming two different first-order oracles, we
establish lower complexity bounds of FOMs to produce a (near)
$\epsilon$-stationary point of a problem (and its reformulation) in the
considered problem class, for any given tolerance $\epsilon>0$. In addition, we
present an inexact proximal gradient (IPG) method by using the more relaxed one
of the two assumed first-order oracles. The oracle complexity of the proposed
IPG, to find a (near) $\epsilon$-stationary point of the considered problem and
its reformulation, matches our established lower bounds up to a logarithmic
factor. Therefore, our lower complexity bounds and the proposed IPG method are
almost non-improvable.
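The abstract does not define a (near) $\epsilon$-stationary point. For composite problems of the form $\min_x f(x) + h(x)$ with $f$ smooth and $h$ possibly non-smooth, a common stationarity measure (assumed here for illustration; the paper's exact definition may differ, e.g. to account for the linear constraints) is the following:

```latex
% x is an \epsilon-stationary point of  min_x f(x) + h(x)  if
\[
  \mathrm{dist}\bigl(\mathbf{0},\, \nabla f(x) + \partial h(x)\bigr) \le \epsilon,
\]
% or, equivalently up to constants, if the proximal gradient mapping is small:
\[
  \|\mathcal{G}_\eta(x)\| \le \epsilon,
  \qquad
  \mathcal{G}_\eta(x) := \frac{1}{\eta}\Bigl(x - \mathrm{prox}_{\eta h}\bigl(x - \eta \nabla f(x)\bigr)\Bigr),
  \quad \eta > 0.
\]
```

The abstract also does not spell out the IPG iteration. Below is a minimal, hypothetical sketch of a generic inexact proximal gradient loop for $\min_x f(x) + h(x)$, where the proximal step is computed only approximately by an inner subgradient solver; all names, step sizes, tolerances, and the inner solver are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def inexact_prox(h_subgrad, y, step, inner_iters=200):
    """Approximate prox_{step*h}(y) by subgradient descent on the
    (1/step)-strongly-convex function z -> h(z) + ||z - y||^2/(2*step).
    A hypothetical inner solver standing in for the paper's oracle."""
    z = y.copy()
    for t in range(1, inner_iters + 1):
        g = h_subgrad(z) + (z - y) / step   # subgradient of the prox objective
        z -= (step / t) * g                 # 1/(mu*t) step for strong convexity
    return z

def ipg(grad_f, h_subgrad, x0, step=0.1, eps=1e-3, max_iters=1000):
    """Generic inexact proximal gradient (IPG) sketch for min f(x) + h(x)."""
    x = x0.copy()
    for _ in range(max_iters):
        y = x - step * grad_f(x)                  # forward (gradient) step
        x_new = inexact_prox(h_subgrad, y, step)  # inexact backward (prox) step
        # Proximal gradient mapping norm as a (near) stationarity surrogate.
        if np.linalg.norm(x - x_new) / step <= eps:
            return x_new
        x = x_new
    return x

# Toy usage: f(x) = 0.5*||x - c||^2 and h(x) = ||x||_1, whose exact
# minimizer is the soft-thresholding of c at level 1, i.e. [0, -1, 0].
c = np.array([1.0, -2.0, 0.5])
x_star = ipg(grad_f=lambda x: x - c, h_subgrad=np.sign, x0=np.zeros(3))
print(x_star)
```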
Related papers
- On Linear Convergence in Smooth Convex-Concave Bilinearly-Coupled Saddle-Point Optimization: Lower Bounds and Optimal Algorithms [17.227158587717934]
We revisit the smooth convex-concave bilinearly-coupled saddle-point problem of the form $\min_x \max_y\, f(x) + \langle y, \mathbf{B}x \rangle - g(y)$.
We develop the first lower complexity bounds and matching optimal linearly converging algorithms for this problem class.
arXiv Detail & Related papers (2024-11-21T22:06:25Z)
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of a problem that minimizes a twice differentiable function subject to general nonconvex conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z)
- Efficient First-order Methods for Convex Optimization with Strongly Convex Function Constraints [3.667453772837954]
We show how to minimize a convex function subject to strongly convex function constraints.
We also identify the sparsity pattern of the solution within a finite number of iterations, a result that appears to have independent significance.
arXiv Detail & Related papers (2022-12-21T16:04:53Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and involve a large number of data points.
Our proposed method outperforms an existing method that achieved the previously best-known complexity result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: an entropy-regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent dual optimizer.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Oracle Complexity in Nonsmooth Nonconvex Optimization [49.088972349825085]
It is well-known that given a smooth, bounded-from-below function, standard gradient-based methods can find $\epsilon$-stationary points.
In this paper, we prove an inherent trade-off between optimization accuracy and the dimension of the smoothing.
arXiv Detail & Related papers (2021-04-14T10:42:45Z)
- The Complexity of Nonconvex-Strongly-Concave Minimax Optimization [43.07732143522183]
This paper establishes the complexity for finding approximate stationary points of nonconvex-strongly-concave (NC-SC) smooth minimax problems.
We establish lower bounds via a constructed sequence of NC-SC subproblems, covering both the general and the averaged-smooth finite-sum settings.
In the finite-sum setting, our proposed algorithm provides a nearly-tight dependence on the condition number.
arXiv Detail & Related papers (2021-03-29T18:53:57Z)
- Lower Complexity Bounds of Finite-Sum Optimization Problems: The Results and Construction [18.65143269806133]
We consider Proximal Incremental First-order (PIFO) algorithms, which have access to the gradient and proximal oracle of each individual component.
We develop a novel approach for constructing adversarial problems, which partitions the tridiagonal matrix of classical examples into $n$ groups; the classical tridiagonal construction is sketched after this list.
arXiv Detail & Related papers (2021-03-15T11:20:31Z)
- On the Oracle Complexity of Higher-Order Smooth Non-Convex Finite-Sum Optimization [1.6973426830397942]
We prove lower bounds for higher-order methods in smooth non-convex finite-sum optimization.
We show that a $p$th-order regularized method cannot profit from the finite-sum structure of the objective.
We show that a new second-order smoothness assumption can be seen as an analogue of the first-order mean-squared smoothness assumption.
arXiv Detail & Related papers (2021-03-08T23:33:58Z)
- Lower Bounds and Accelerated Algorithms for Bilevel Optimization [62.192297758346484]
Bilevel optimization has recently attracted growing interest due to its wide applications in modern machine learning problems.
We show that bilevel problems are provably more challenging than their minimax counterparts.
arXiv Detail & Related papers (2021-02-07T21:46:29Z)
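As referenced in the finite-sum lower-bound entry above, many classical first-order lower-bound constructions (going back to Nesterov's worst-case function) are built on a chain-like tridiagonal matrix. A standard form, shown here for context rather than taken from that paper, is:

```latex
% Classical "chain-like" hard instance: the tridiagonal matrix
\[
  A \;=\;
  \begin{pmatrix}
     2 & -1 &        &    \\
    -1 &  2 & \ddots &    \\
       & \ddots & \ddots & -1 \\
       &        &   -1   &  2
  \end{pmatrix}
  \in \mathbb{R}^{n \times n}.
\]
% Starting from the origin, each first-order oracle call can activate at most
% one new coordinate along the chain, which forces \Omega(n) iterations and
% underlies many lower complexity bounds for first-order methods.
```

The cited construction partitions such a matrix into $n$ groups so that the argument extends to incremental (PIFO) oracles.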