Efficient Algorithms for High-Dimensional Convex Subspace Optimization
via Strict Complementarity
- URL: http://arxiv.org/abs/2202.04020v1
- Date: Tue, 8 Feb 2022 17:36:43 GMT
- Title: Efficient Algorithms for High-Dimensional Convex Subspace Optimization
via Strict Complementarity
- Authors: Dan Garber, Ron Fisher
- Abstract summary: We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\reals^n$, $k \ll n$, which minimizes a convex smooth loss.
While efficient iterations are highly desired in high-dimensional regimes, arguing about fast convergence to a global optimal solution is difficult.
In this paper we present a natural deterministic sufficient condition under which convergence to the global optimum is straightforward.
- Score: 19.24470467199451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider optimization problems in which the goal is to find a
$k$-dimensional subspace of $\reals^n$, $k \ll n$, which minimizes a convex and
smooth loss. Such problems generalize the fundamental task of principal
component analysis (PCA) to include robust and sparse counterparts, and
logistic PCA for binary data, among others. While this problem is not convex,
it admits natural algorithms with very efficient iterations and memory
requirements, which is highly desired in high-dimensional regimes; however,
arguing about their fast convergence to a global optimal solution is
difficult. On the other hand, there exists a simple
convex relaxation for which convergence to the global optimum is
straightforward, however corresponding algorithms are not efficient when the
dimension is very large. In this work we present a natural deterministic
sufficient condition so that the optimal solution to the convex relaxation is
unique and is also the optimal solution to the original nonconvex problem.
Mainly, we prove that under this condition, a natural highly-efficient
nonconvex gradient method, which we refer to as \textit{gradient orthogonal
iteration}, when initialized with a "warm-start", converges linearly for the
nonconvex problem. We also establish similar results for the nonconvex
projected gradient method, and the Frank-Wolfe method when applied to the
convex relaxation. We conclude with empirical evidence on synthetic data which
demonstrates the appeal of our approach.
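The abstract does not spell out the update rule of gradient orthogonal iteration; a minimal sketch of the natural candidate is a gradient step on an $n \times k$ matrix with orthonormal columns followed by QR re-orthonormalization. The function name, the PCA-style toy loss, and all parameters below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def gradient_orthogonal_iteration(grad_f, V0, step, iters):
    """Illustrative sketch: gradient step on an n x k matrix with
    orthonormal columns, followed by a QR factorization so the iterate
    remains a basis of a k-dimensional subspace."""
    V = V0
    for _ in range(iters):
        V = V - step * grad_f(V)   # gradient step in the ambient space
        V, _ = np.linalg.qr(V)     # retract back to orthonormal columns
    return V

# Toy instance: f(V) = -0.5 * tr(V^T A V) over orthonormal V, whose
# minimizers span the top-k eigenspace of the PSD matrix A (i.e. PCA).
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 8))
A = G @ G.T
V0, _ = np.linalg.qr(rng.standard_normal((8, 2)))
V = gradient_orthogonal_iteration(lambda V: -A @ V, V0, step=0.1, iters=500)
```

On this PCA instance each iteration reduces to an orthogonal-iteration step with the matrix $I + \eta A$, which is why linear convergence to the top eigenspace is plausible here.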
Related papers
- Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional settings in which the problem admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
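The extragradient step mentioned above can be sketched on a toy saddle-point problem; the operator, step size, and function names below are illustrative, not the low-rank semidefinite setting of the paper.

```python
import numpy as np

def extragradient(F, x0, step, iters):
    """Sketch of the classical (Korpelevich) extragradient step for a
    monotone operator F: a lookahead step, then an update that uses the
    operator evaluated at the lookahead point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x - step * F(x)   # extrapolation (lookahead) step
        x = x - step * F(y)   # update driven by the lookahead operator value
    return x

# Toy bilinear saddle point min_u max_v (u * v), i.e. F(u, v) = (v, -u).
# Plain gradient descent-ascent spirals outward on this problem, while
# extragradient contracts to the saddle at the origin.
z = extragradient(lambda z: np.array([z[1], -z[0]]), [1.0, 1.0],
                  step=0.5, iters=200)
```

The lookahead evaluation is what distinguishes the method from plain gradient descent-ascent and makes it stable on bilinear couplings.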
arXiv Detail & Related papers (2024-02-14T10:48:00Z)
- A simple uniformly optimal method without line search for convex optimization [9.280355951055865]
We show that line search is superfluous in attaining the optimal rate of convergence for solving a convex optimization problem whose parameters are not given a priori.
We present a novel accelerated gradient descent type algorithm called AC-FGM that can achieve an optimal $\mathcal{O}(1/k^2)$ rate of convergence for smooth convex optimization.
arXiv Detail & Related papers (2023-10-16T05:26:03Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Efficient First-order Methods for Convex Optimization with Strongly Convex Function Constraints [3.667453772837954]
We show how to minimize a convex function subject to strongly convex function constraints.
We show how to identify the sparsity pattern of the optimal solution within a finite number of iterations, a result that appears to have independent significance.
arXiv Detail & Related papers (2022-12-21T16:04:53Z)
- A conditional gradient homotopy method with applications to Semidefinite Programming [1.6369790794838281]
We propose a homotopy-based conditional gradient method for solving convex optimization problems with a large number of simple conic constraints.
Our theoretical complexity is competitive with state-of-the-art SDP solvers, with the decisive advantage of cheap projection-free subroutines.
arXiv Detail & Related papers (2022-07-07T05:48:27Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual regularizer, and Nesterov's accelerated gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the $\mathcal{O}(1/\epsilon)$ lower bound on iteration complexity for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that, with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
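The coordinate importance sampling scheme of this paper is not reproduced here; as background, a generic two-point zeroth-order gradient estimator (the building block such methods refine) can be sketched as follows, with all names and defaults being illustrative.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, samples=2000, seed=0):
    """Generic two-point zeroth-order gradient estimate: average the
    finite difference (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over
    random Gaussian directions u, using only function evaluations."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x, dtype=float)
    for _ in range(samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / samples

# On f(x) = ||x||^2 the true gradient is 2x; the estimate should match
# it closely for small mu and enough sampled directions.
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(lambda v: v @ v, x)
```

The query cost mentioned in the summary refers exactly to these function evaluations: each sampled direction costs two of them.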
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained
Optimization [25.73397307080647]
We present a new model of sparse constrained optimization for general convex or nonconvex machine learning objectives.
We propose a proximal point algorithm that solves a sequence of subproblems with gradually relaxed constraint levels.
We demonstrate the effectiveness of our method on large-scale numerical problems.
arXiv Detail & Related papers (2020-10-23T05:24:05Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
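The variance reduction and smoothing of this paper are not reproduced here; the basic deterministic conditional gradient step they build on can be sketched on the probability simplex, where the linear minimization oracle is just a coordinate argmin. Function names and the toy objective are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, iters):
    """Sketch of a conditional gradient (Frank-Wolfe) loop over the
    probability simplex: each iteration calls a linear minimization
    oracle instead of a projection, so steps stay projection-free."""
    x = np.asarray(x0, dtype=float)
    for t in range(iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0      # vertex minimizing the linearized objective
        gamma = 2.0 / (t + 2.0)    # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy instance: minimize ||x - c||^2 over the simplex with c itself in
# the simplex, so the optimum is x* = c.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c),
                        np.array([1.0, 0.0, 0.0]), iters=2000)
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free, which is the appeal of these methods under many constraints.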
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences.