Memory-Constrained Algorithms for Convex Optimization via Recursive
Cutting-Planes
- URL: http://arxiv.org/abs/2306.10096v1
- Date: Fri, 16 Jun 2023 17:00:51 GMT
- Title: Memory-Constrained Algorithms for Convex Optimization via Recursive
Cutting-Planes
- Authors: Moïse Blanchard, Junhui Zhang, Patrick Jaillet
- Abstract summary: First class of algorithms that provides a positive trade-off between gradient descent and cutting-plane methods in any regime with $\epsilon \leq 1/\sqrt{d}$.
In the regime $\epsilon \leq d^{-\Omega(d)}$, our algorithm with $p=d$ achieves the information-theoretic optimal memory usage and improves the oracle-complexity of gradient descent.
- Score: 23.94542304111204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a family of recursive cutting-plane algorithms to solve
feasibility problems with constrained memory, which can also be used for
first-order convex optimization. Precisely, in order to find a point within a
ball of radius $\epsilon$ with a separation oracle in dimension $d$ -- or to
minimize $1$-Lipschitz convex functions to accuracy $\epsilon$ over the unit
ball -- our algorithms use $\mathcal O(\frac{d^2}{p}\ln \frac{1}{\epsilon})$
bits of memory, and make $\mathcal O((C\frac{d}{p}\ln \frac{1}{\epsilon})^p)$
oracle calls, for some universal constant $C \geq 1$. The family is
parametrized by $p\in[d]$ and provides an oracle-complexity/memory trade-off in
the sub-polynomial regime $\ln\frac{1}{\epsilon}\gg\ln d$. While several works
gave lower-bound trade-offs (impossibility results) -- we make their
dependence on $\ln\frac{1}{\epsilon}$ explicit here, showing that these also hold in any
sub-polynomial regime -- to the best of our knowledge this is the first class
of algorithms that provides a positive trade-off between gradient descent and
cutting-plane methods in any regime with $\epsilon\leq 1/\sqrt d$. The
algorithms divide the $d$ variables into $p$ blocks and optimize over blocks
sequentially, with approximate separation vectors constructed using a variant
of Vaidya's method. In the regime $\epsilon \leq d^{-\Omega(d)}$, our algorithm
with $p=d$ achieves the information-theoretic optimal memory usage and improves
the oracle-complexity of gradient descent.
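To make the stated trade-off concrete, here is a direct instantiation of the abstract's bounds at the two extreme choices of the block parameter $p$ (the universal constant $C \geq 1$ is left unspecified, as in the abstract):
\[
p = 1:\quad \mathcal O\!\left(d^2 \ln\tfrac{1}{\epsilon}\right) \text{ bits of memory}, \qquad \mathcal O\!\left(C\, d \ln\tfrac{1}{\epsilon}\right) \text{ oracle calls};
\]
\[
p = d:\quad \mathcal O\!\left(d \ln\tfrac{1}{\epsilon}\right) \text{ bits of memory}, \qquad \mathcal O\!\left(\left(C \ln\tfrac{1}{\epsilon}\right)^{d}\right) \text{ oracle calls}.
\]
Thus $p=1$ matches the usual memory/oracle-call profile of cutting-plane methods, while $p=d$ attains the information-theoretic optimal memory usage mentioned above at the price of an oracle complexity exponential in $d$; intermediate values of $p$ interpolate between these endpoints.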
Related papers
- Gradient Descent is Pareto-Optimal in the Oracle Complexity and Memory Tradeoff for Feasibility Problems [0.0]
We show that to solve feasibility problems with accuracy $\epsilon$, any deterministic algorithm either uses $d^{1+\delta}$ bits of memory or must make at least $1/(d^{0.01\delta}\,\epsilon^{2\frac{1-\delta}{1+1.01\delta}-o(1)})$ oracle queries.
We also show that randomized algorithms either use $d^{1+\delta}$ memory or make at least $1/(d^{2\delta}\,\epsilon^{2(1-4\delta)-o(1)})$ queries for any $\delta\in$
arXiv Detail & Related papers (2024-04-10T04:15:50Z) - A Whole New Ball Game: A Primal Accelerated Method for Matrix Games and
Minimizing the Maximum of Smooth Functions [44.655316553524855]
We design algorithms for minimizing $\max_{i\in[n]} f_i(x)$ over a $d$-dimensional Euclidean or simplex domain.
When each $f_i$ is $1$-Lipschitz and $1$-smooth, our method computes an $\epsilon$-approximate solution.
arXiv Detail & Related papers (2023-11-17T22:07:18Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Memory-Query Tradeoffs for Randomized Convex Optimization [16.225462097812766]
We show that any randomized first-order algorithm which minimizes a $d$-dimensional, $1$-Lipschitz convex function over the unit ball must either use $\Omega(d^{2-\delta})$ bits of memory or make $\Omega(d^{1+\delta/6-o(1)})$ queries.
arXiv Detail & Related papers (2023-06-21T19:48:58Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient
Oracle Complexity [15.18055488087588]
We give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}(\sum_{i=1}^n d_i \log(1/\epsilon))$ gradient computations.
Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
arXiv Detail & Related papers (2022-08-07T20:53:42Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Lifted Primal-Dual Method for Bilinearly Coupled Smooth Minimax
Optimization [47.27237492375659]
We study the bilinearly coupled minimax problem: $\min_x \max_y\, f(x) + y^\top A x - h(y)$, where $f$ and $h$ are both strongly convex smooth functions.
No known first-order algorithms have hitherto achieved the lower complexity bound of $\Omega\big(\sqrt{\tfrac{L_x}{\mu_x}} + \tfrac{\|A\|}{\sqrt{\mu_x \mu_y}}\big)\log(\tfrac{1}{\varepsilon})$.
arXiv Detail & Related papers (2022-01-19T05:56:19Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm sbcalg that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)