Memory-Query Tradeoffs for Randomized Convex Optimization
- URL: http://arxiv.org/abs/2306.12534v1
- Date: Wed, 21 Jun 2023 19:48:58 GMT
- Title: Memory-Query Tradeoffs for Randomized Convex Optimization
- Authors: Xi Chen and Binghui Peng
- Abstract summary: We show that any randomized first-order algorithm which minimizes a $d$-dimensional, $1$-Lipschitz convex function over the unit ball must either use $\Omega(d^{2-\delta})$ bits of memory or make $\Omega(d^{1+\delta/6-o(1)})$ queries.
- Score: 16.225462097812766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that any randomized first-order algorithm which minimizes a
$d$-dimensional, $1$-Lipschitz convex function over the unit ball must either
use $\Omega(d^{2-\delta})$ bits of memory or make $\Omega(d^{1+\delta/6-o(1)})$
queries, for any constant $\delta\in (0,1)$ and when the precision $\epsilon$
is quasipolynomially small in $d$. Our result implies that cutting plane
methods, which use $\tilde{O}(d^2)$ bits of memory and $\tilde{O}(d)$ queries,
are Pareto-optimal among randomized first-order algorithms, and quadratic
memory is required to achieve optimal query complexity for convex optimization.
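Stated compactly, the abstract's dichotomy reads as follows in display form ($M$ and $Q$ are shorthand introduced here for the memory budget and query count):

```latex
% For any constant \delta \in (0,1) and accuracy \epsilon quasipolynomially
% small in d: any randomized first-order algorithm minimizing a 1-Lipschitz
% convex function over the unit ball in R^d with M bits of memory and Q
% queries must satisfy
\[
  M \;\geq\; \Omega\bigl(d^{2-\delta}\bigr)
  \quad\text{or}\quad
  Q \;\geq\; \Omega\bigl(d^{1+\delta/6-o(1)}\bigr),
\]
% while cutting-plane methods attain M = \tilde{O}(d^2) and Q = \tilde{O}(d),
% hence are Pareto-optimal on this tradeoff curve.
```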
Related papers
- Dueling Optimization with a Monotone Adversary [35.850072415395395]
We study the problem of dueling optimization with a monotone adversary, which is a generalization of (noiseless) dueling convex optimization.
The goal is to design an online algorithm to find a minimizer $\mathbf{x}^*$ of a function $f \colon X \to \mathbb{R}$, where $X \subseteq \mathbb{R}^d$.
arXiv Detail & Related papers (2023-11-18T23:55:59Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Memory-Constrained Algorithms for Convex Optimization via Recursive
Cutting-Planes [23.94542304111204]
We give the first class of algorithms that provides a positive trade-off between gradient descent and cutting-plane methods in any regime with $\epsilon \leq 1/\sqrt{d}$.
In the regime $\epsilon \leq d^{-\Omega(d)}$, our algorithm with $p=d$ achieves the information-theoretically optimal memory usage and improves the oracle complexity of gradient descent.
arXiv Detail & Related papers (2023-06-16T17:00:51Z) - Quadratic Memory is Necessary for Optimal Query Complexity in Convex
Optimization: Center-of-Mass is Pareto-Optimal [23.94542304111204]
We show that quadratic memory is necessary to achieve the optimal oracle complexity for first-order convex optimization.
We prove that to minimize $1$-Lipschitz convex functions over the unit ball to $1/d^4$ accuracy, any deterministic first-order algorithm using at most $d^{2-\delta}$ bits of memory must make $\tilde{\Omega}(d^{1+\delta/3})$ queries.
arXiv Detail & Related papers (2023-02-09T22:37:27Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) holds.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Efficient Convex Optimization Requires Superlinear Memory [27.11113888243391]
We show that any memory-constrained, first-order algorithm which minimizes $d$-dimensional, $1$-Lipschitz convex functions over the unit ball to $1/\mathrm{poly}(d)$ accuracy using at most $d^{1.25-\delta}$ bits of memory must make at least $\tilde{\Omega}(d^{1+(4/3)\delta})$ first-order queries.
arXiv Detail & Related papers (2022-03-29T06:15:54Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of a matroid $M$ such that $\det\left(\sum_{i \in S} v_i v_i^\top\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning; a toy sketch of the objective appears after this list.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
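For the determinant-maximization entry above, the following is a minimal brute-force sketch of the objective in the uniform-matroid (cardinality-constrained) special case; the function `best_basis`, the exhaustive search, and the toy data are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from itertools import combinations

def best_basis(V, d):
    """V: (n, d) array of vectors v_i. Returns the size-d subset S of row
    indices maximizing det(sum_{i in S} v_i v_i^T), by exhaustive search
    (a hypothetical toy; feasible only for tiny n)."""
    best_S, best_val = None, -np.inf
    for S in combinations(range(len(V)), d):
        # Sum of rank-one outer products for the candidate basis S.
        A = sum(np.outer(V[i], V[i]) for i in S)
        val = np.linalg.det(A)
        if val > best_val:
            best_S, best_val = S, val
    return best_S, best_val

# Example: choose the best 3 of 6 random vectors in R^3.
rng = np.random.default_rng(0)
V = rng.standard_normal((6, 3))
print(best_basis(V, 3))
```

Exhaustive search is exponential in the number of vectors; the paper's interest is in this objective under general matroid constraints, where efficient approximation algorithms are the point.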