Constrained Optimization Involving Nonconvex $\ell_p$ Norms: Optimality
Conditions, Algorithm and Convergence
- URL: http://arxiv.org/abs/2110.14127v1
- Date: Wed, 27 Oct 2021 02:17:42 GMT
- Title: Constrained Optimization Involving Nonconvex $\ell_p$ Norms: Optimality
Conditions, Algorithm and Convergence
- Authors: Hao Wang, Yining Gao, Jiashan Wang, Hongying Liu
- Abstract summary: We provide the calculation of the subgradients of the $\ell_p$ norm and the normal cones of the $\ell_p$ ball.
We show that the sequential optimality conditions can be easily satisfied for iteratively reweighted algorithms.
- Score: 4.886985244621422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the optimality conditions for characterizing the
local minimizers of the constrained optimization problems involving an $\ell_p$
norm ($0<p<1$) of the variables, which may appear in either the objective or
the constraint. This kind of problem has strong applicability to a wide range
of areas, since the $\ell_p$ norm usually promotes sparse solutions. However,
the nonsmooth and non-Lipschitz nature of the $\ell_p$ norm often makes these
problems difficult to analyze and solve. We provide the calculation of the
subgradients of the $\ell_p$ norm and the normal cones of the $\ell_p$ ball.
For both problems, we derive the first-order necessary conditions under various
constraint qualifications. We also derive sequential optimality conditions
for both problems and study when these conditions imply the first-order
necessary conditions. We point out that the sequential optimality conditions
are easily satisfied by iteratively reweighted algorithms, and show that
global convergence can be readily derived using sequential optimality
conditions.
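The iteratively reweighted scheme mentioned above can be illustrated with a minimal sketch. The code below is a hypothetical implementation (not the authors' algorithm) of iteratively reweighted $\ell_1$ minimization for the unconstrained model $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_p^p$ with $0<p<1$: each outer step linearizes $|x_i|^p$ around the current iterate, giving weights $w_i = p(|x_i|+\epsilon)^{p-1}$, and then solves the resulting weighted $\ell_1$ subproblem by proximal gradient descent. The function name and all parameter choices are illustrative assumptions.

```python
import numpy as np

def irl1_lp(A, b, lam=0.1, p=0.5, eps=1e-3, outer=20, inner=200):
    """Iteratively reweighted l1 sketch (hypothetical) for
        min_x 0.5*||Ax - b||^2 + lam * sum_i |x_i|^p,   0 < p < 1.
    Each outer iteration linearizes |x_i|^p at the current iterate,
    producing weights w_i = p*(|x_i| + eps)^(p-1); the weighted l1
    subproblem is then solved by proximal gradient (ISTA) steps."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    for _ in range(outer):
        w = p * (np.abs(x) + eps) ** (p - 1)   # reweighting step
        for _ in range(inner):
            g = A.T @ (A @ x - b)              # gradient of the quadratic term
            z = x - g / L
            # weighted soft-thresholding: prox of the weighted l1 penalty
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x
```

As a usage check, recovering a sparse vector from noiseless measurements should drive the residual well below the trivial baseline $x = 0$.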
Related papers
- SQ Lower Bounds for Non-Gaussian Component Analysis with Weaker
Assumptions [50.20087216230159]
We study the complexity of Non-Gaussian Component Analysis (NGCA) in the Statistical Query model.
We prove near-optimal SQ lower bounds for NGCA under the moment-matching condition only.
arXiv Detail & Related papers (2024-03-07T18:49:32Z) - First-order optimality conditions for non-commutative optimization
problems [0.0]
We derive, via small variations on the problem variables, state and operator optimality conditions.
Operator optimality conditions are the non-commutative analogs of the Karush-Kuhn-Tucker conditions.
We prove that a weak form of non-commutative operator optimality holds for all Archimedean NPO problems.
arXiv Detail & Related papers (2023-11-30T17:00:06Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of a general nonconvex conic optimization problem that minimizes a twice differentiable function subject to nonconvex conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Universal Gradient Descent Ascent Method for Nonconvex-Nonconcave
Minimax Optimization [22.392563619845212]
Nonconvex-nonconcave minimax optimization has received intense attention over the last decade due to its broad applications in machine learning.
We propose a doubly smoothed gradient descent ascent (DS-GDA) method, a universally applicable single-loop algorithm that balances the primal and dual updates.
Specifically, under the one-sided Kurdyka-Lojasiewicz (KL) condition with exponent $\theta\in(0,1)$, DS-GDA converges with an iteration complexity of $\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$.
arXiv Detail & Related papers (2022-12-26T00:28:07Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Randomized Coordinate Subgradient Method for Nonsmooth Composite
Optimization [11.017632675093628]
Coordinate-type subgradient methods for addressing nonsmooth problems are relatively underexplored due to the restrictive nature of Lipschitz-type assumptions.
arXiv Detail & Related papers (2022-06-30T02:17:11Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: entropy-regularized policy optimizer, dual variable regularizer, and Nesterov's accelerated gradient descent dual optimizer.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex problems in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Adaptive extra-gradient methods for min-max optimization and games [35.02879452114223]
We present a new family of minmax optimization algorithms that automatically exploit the geometry of the gradient data observed at earlier iterations.
Thanks to this adaptation mechanism, the proposed method automatically detects whether the problem is smooth or not.
It converges to an $\varepsilon$-optimal solution within $\mathcal{O}(1/\varepsilon)$ iterations in smooth problems, and within $\mathcal{O}(1/\varepsilon^2)$ iterations in non-smooth ones.
arXiv Detail & Related papers (2020-10-22T22:54:54Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.