Local Search Algorithms for Rank-Constrained Convex Optimization
- URL: http://arxiv.org/abs/2101.06262v1
- Date: Fri, 15 Jan 2021 18:52:02 GMT
- Title: Local Search Algorithms for Rank-Constrained Convex Optimization
- Authors: Kyriakos Axiotis and Maxim Sviridenko
- Abstract summary: We propose greedy and local search algorithms for rank-constrained convex optimization.
We show that if the rank-restricted condition number of $R$ is $\kappa$, a solution $A$ with rank $O(r^*\cdot \min\{\kappa \log \frac{R(\mathbf{0})-R(A^*)}{\epsilon}, \kappa^2\})$ and $R(A) \leq R(A^*) + \epsilon$ can be recovered, where $A^*$ is the optimal solution.
- Score: 7.736462653684946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose greedy and local search algorithms for rank-constrained convex
optimization, namely solving $\underset{\mathrm{rank}(A)\leq r^*}{\min}\, R(A)$
given a convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ and a
parameter $r^*$. These algorithms consist of repeating two steps: (a) adding a
new rank-1 matrix to $A$ and (b) enforcing the rank constraint on $A$. We
refine and improve the theoretical analysis of Shalev-Shwartz et al. (2011),
and show that if the rank-restricted condition number of $R$ is $\kappa$, a
solution $A$ with rank $O(r^*\cdot \min\{\kappa \log
\frac{R(\mathbf{0})-R(A^*)}{\epsilon}, \kappa^2\})$ and $R(A) \leq R(A^*) +
\epsilon$ can be recovered, where $A^*$ is the optimal solution. This
significantly generalizes associated results on sparse convex optimization, as
well as rank-constrained convex optimization for smooth functions. We then
introduce new practical variants of these algorithms that have superior runtime
and recover better solutions in practice. We demonstrate the versatility of
these methods on a wide range of applications involving matrix completion and
robust principal component analysis.
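  To make the two-step scheme in the abstract concrete, below is a minimal numerical sketch; it is not the authors' implementation. It assumes $R$ is differentiable, takes the rank-1 update from the top singular pair of the gradient, and enforces the rank constraint by truncating the SVD of the iterate. The function names (`greedy_rank_constrained`, `truncate_rank`), the fixed step size, and the least-squares example are illustrative assumptions.

```python
import numpy as np

def truncate_rank(A, r):
    """Project A onto the set of matrices of rank at most r via SVD truncation."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def greedy_rank_constrained(grad_R, shape, r_star, iters=200, step=1.0):
    """Hedged sketch of the greedy / local search scheme described in the abstract:
    (a) add a rank-1 matrix built from the top singular pair of the gradient,
    (b) enforce the rank constraint by truncating back to rank r_star.
    The step-size rule is an illustrative assumption (1/smoothness is typical)."""
    A = np.zeros(shape)
    for _ in range(iters):
        G = grad_R(A)                              # gradient of R at the current iterate
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        rank1 = s[0] * np.outer(U[:, 0], Vt[0])    # best rank-1 approximation of the gradient
        A = A - step * rank1                       # (a) add a new rank-1 matrix
        A = truncate_rank(A, r_star)               # (b) enforce the rank constraint
    return A

# Toy usage: rank-constrained least squares, R(A) = 0.5 * ||A - M||_F^2.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))   # rank-5 target
A_hat = greedy_rank_constrained(lambda A: A - M, M.shape, r_star=5)
print("final objective:", 0.5 * np.linalg.norm(A_hat - M) ** 2)
```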
Related papers
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Memory-Constrained Algorithms for Convex Optimization via Recursive
Cutting-Planes [23.94542304111204]
We give the first class of algorithms that provides a positive trade-off between gradient descent and cutting-plane methods in any regime with $\epsilon \leq 1/\sqrt{d}$.
In the regime $\epsilon \leq d^{-\Omega(d)}$, our algorithm with $p=d$ achieves the information-theoretic optimal memory usage and improves the oracle complexity of gradient descent.
arXiv Detail & Related papers (2023-06-16T17:00:51Z) - An Optimal Stochastic Algorithm for Decentralized Nonconvex Finite-sum
Optimization [25.21457349137344]
We prove that DEAREST requires at most $\mathcal{O}(mn + \sqrt{mn}\,L\varepsilon^{-2})$ incremental first-order oracle (IFO) calls and $\mathcal{O}(L\varepsilon^{-2}/\sqrt{1-\lambda_W})$ communication rounds.
arXiv Detail & Related papers (2022-10-25T11:37:11Z) - Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient
Oracle Complexity [15.18055488087588]
We give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}(\sum_{i=1}^n d_i \log(1/\epsilon))$ gradient computations.
Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
arXiv Detail & Related papers (2022-08-07T20:53:42Z) - The First Optimal Algorithm for Smooth and
Strongly-Convex-Strongly-Concave Minimax Optimization [88.91190483500932]
In this paper, we revisit the smooth and strongly-convex-strongly-concave minimax optimization problem.
Existing state-of-the-art methods, with complexity $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y}\,\log^3(\kappa_x\kappa_y)\,\log\frac{1}{\epsilon}\right)$, do not match the lower bound $\Omega\left(\sqrt{\kappa_x\kappa_y}\,\log\frac{1}{\epsilon}\right)$.
We fix this fundamental issue by providing the first algorithm with complexity $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y}\,\log\frac{1}{\epsilon}\right)$, matching the lower bound.
arXiv Detail & Related papers (2022-05-11T17:33:07Z) - Iterative Hard Thresholding with Adaptive Regularization: Sparser
Solutions Without Sacrificing Runtime [17.60502131429094]
We propose a simple modification to the iterative hard thresholding (IHT) algorithm, which recovers asymptotically sparser solutions as a function of the condition number (a minimal IHT sketch is given after this list).
Our proposed algorithm, regularized IHT, returns a solution with sparsity $O(s\kappa)$.
Our algorithm significantly improves the runtime over ARHT, which also finds a solution of sparsity $O(s\kappa)$.
arXiv Detail & Related papers (2022-04-11T19:33:15Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Sparse Convex Optimization via Adaptively Regularized Hard Thresholding [17.60502131429094]
We present a new Adaptively Regularized Hard Thresholding (ARHT) algorithm that makes significant progress on this problem.
We also provide a new analysis of OMP with Replacement (OMPR) for general $f$, under the condition $s > s^* \frac{\kappa^2}{4}$.
arXiv Detail & Related papers (2020-06-25T17:16:21Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of a matroid $M$ such that $\det\left(\sum_{i \in S} v_i v_i^\top\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
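For orientation on the hard-thresholding family referenced in the "Iterative Hard Thresholding with Adaptive Regularization" and "Sparse Convex Optimization via Adaptively Regularized Hard Thresholding" entries above, here is a minimal sketch of plain iterative hard thresholding (IHT) for sparse least squares. It is only a baseline illustration under assumed names and an assumed step-size rule; the adaptive regularization those papers introduce is not reproduced here.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(X, y, s, iters=300, step=None):
    """Plain IHT for min ||X w - y||^2 subject to ||w||_0 <= s (baseline sketch only)."""
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / L for the quadratic loss (assumption)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)                   # gradient of the least-squares loss
        w = hard_threshold(w - step * grad, s)     # gradient step, then hard thresholding
    return w
```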
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.