An Improved Cutting Plane Method for Convex Optimization, Convex-Concave
Games and its Applications
- URL: http://arxiv.org/abs/2004.04250v1
- Date: Wed, 8 Apr 2020 20:56:40 GMT
- Title: An Improved Cutting Plane Method for Convex Optimization, Convex-Concave
Games and its Applications
- Authors: Haotian Jiang, Yin Tat Lee, Zhao Song, Sam Chiu-wai Wong
- Abstract summary: We propose a new cutting plane algorithm that uses an optimal $O(n \log (\kappa))$ evaluations of the oracle.
We also provide evidence that the $n^2$ time per evaluation cannot be improved and thus our running time is optimal.
- Score: 28.54236118020831
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Given a separation oracle for a convex set $K \subset \mathbb{R}^n$ that is
contained in a box of radius $R$, the goal is to either compute a point in $K$
or prove that $K$ does not contain a ball of radius $\epsilon$. We propose a
new cutting plane algorithm that uses an optimal $O(n \log (\kappa))$
evaluations of the oracle and an additional $O(n^2)$ time per evaluation, where
$\kappa = nR/\epsilon$.
$\bullet$ This improves upon Vaidya's $O( \text{SO} \cdot n \log (\kappa) +
n^{\omega+1} \log (\kappa))$ time algorithm [Vaidya, FOCS 1989a] in terms of
polynomial dependence on $n$, where $\omega < 2.373$ is the exponent of matrix
multiplication and $\text{SO}$ is the time for oracle evaluation.
$\bullet$ This improves upon Lee-Sidford-Wong's $O( \text{SO} \cdot n \log
(\kappa) + n^3 \log^{O(1)} (\kappa))$ time algorithm [Lee, Sidford and Wong,
FOCS 2015] in terms of dependence on $\kappa$.
For many important applications in economics, $\kappa = \Omega(\exp(n))$ and
this leads to a significant difference between $\log(\kappa)$ and
$\mathrm{poly}(\log (\kappa))$. We also provide evidence that the $n^2$ time
per evaluation cannot be improved and thus our running time is optimal.
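To make the separation-oracle model concrete, below is a minimal sketch of a classical cutting plane method, the ellipsoid method (not the paper's algorithm), that finds a point in $K$ using only oracle calls. The example set and its oracle are illustrative assumptions, not from the paper.

```python
import numpy as np

def ellipsoid_method(oracle, n, R, max_iters=10_000):
    """Cutting plane search via the classical ellipsoid method.

    oracle(x) returns None when x lies in the convex set K, otherwise a
    vector g defining the cut {y : g . y <= g . x} that still contains K.
    K is assumed to lie in the ball of radius R centered at the origin.
    """
    x = np.zeros(n)
    A = (R ** 2) * np.eye(n)  # ellipsoid {y : (y-x)^T A^{-1} (y-x) <= 1}
    for _ in range(max_iters):
        g = oracle(x)
        if g is None:
            return x                    # feasible point found
        b = A @ g / np.sqrt(g @ A @ g)  # step into the kept half-ellipsoid
        x = x - b / (n + 1)
        # minimum-volume ellipsoid containing the half left by the cut
        A = (n ** 2 / (n ** 2 - 1.0)) * (A - (2.0 / (n + 1)) * np.outer(b, b))
    return None                          # give up: K may have empty interior

# Hypothetical example: K is the ball of radius 0.5 around (1, 1).
center = np.array([1.0, 1.0])
def ball_oracle(x):
    return None if np.linalg.norm(x - center) <= 0.5 else x - center

point = ellipsoid_method(ball_oracle, n=2, R=4.0)
```

Each cut shrinks the ellipsoid's volume by a constant factor depending only on $n$, which is what yields the $\log(\kappa)$-type dependence on $R/\epsilon$ in such bounds.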
A bottleneck of previous cutting plane methods is to compute leverage scores,
a measure of the relative importance of past constraints. Our result is
achieved by a novel multi-layered data structure for leverage score
maintenance, which is a sophisticated combination of diverse techniques such as
random projection, batched low-rank update, inverse maintenance, polynomial
interpolation, and fast rectangular matrix multiplication. Interestingly, our
method requires a combination of different fast rectangular matrix
multiplication algorithms.
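For reference, the leverage score of row $a_i$ of a matrix $A$ is $\sigma_i = a_i^\top (A^\top A)^{-1} a_i$. The sketch below computes these scores directly; it only illustrates the quantity being maintained, whereas the paper's multi-layered data structure maintains them under updates far faster. The example matrix is hypothetical.

```python
import numpy as np

def leverage_scores(A):
    """Leverage scores sigma_i = a_i^T (A^T A)^{-1} a_i for the rows of A.

    sigma_i in [0, 1] measures how much row i matters for the row space,
    and for full-column-rank A the scores sum to rank(A). Computed via a
    thin QR factorization: A = QR implies the hat matrix
    A (A^T A)^{-1} A^T equals Q Q^T, whose diagonal is the squared row
    norms of Q.
    """
    Q, _ = np.linalg.qr(A, mode="reduced")
    return np.sum(Q * Q, axis=1)

# Illustrative matrix: 5 constraints in R^2.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0],
              [0.0, 3.0]])
scores = leverage_scores(A)
```

A row whose score is close to 1 is nearly indispensable to the row space, which is why leverage scores serve as the importance measure for past constraints in cutting plane methods.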
Related papers
- A shortcut to an optimal quantum linear system solver [55.2480439325792]
We give a conceptually simple quantum linear system solver (QLSS) that does not use complex or difficult-to-analyze techniques.
If the solution norm $\lVert \boldsymbol{x} \rVert$ is known exactly, our QLSS requires only a single application of the kernel.
Alternatively, by reintroducing a concept from the adiabatic path-following technique, we show that $O(\kappa)$ complexity can be achieved for norm estimation.
arXiv Detail & Related papers (2024-06-17T20:54:11Z) - Faster Stochastic Algorithms for Minimax Optimization under
Polyak--{\L}ojasiewicz Conditions [12.459354707528819]
We propose SPIDER-GDA for solving the finite-sum problem of the form $\min_x \max_y f(x,y) \triangleq \frac{1}{n} \sum_{i=1}^n f_i(x,y)$.
We prove SPIDER-GDA could find an $\epsilon$-optimal solution within $\mathcal{O}\left((n + \sqrt{n}\,\kappa_x\kappa_y^2)\log (1/\epsilon)\right)$.
arXiv Detail & Related papers (2023-07-29T02:26:31Z) - An Optimal Algorithm for Strongly Convex Min-min Optimization [79.11017157526815]
Existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log (1/\epsilon))$ computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$.
We propose a new algorithm that only requires $\mathcal{O}(\sqrt{\kappa_x} \log (1/\epsilon))$ computations of $\nabla_x f(x,y)$.
arXiv Detail & Related papers (2022-12-29T19:26:12Z) - Near-Optimal Regret Bounds for Multi-batch Reinforcement Learning [54.806166861456035]
We study the episodic reinforcement learning (RL) problem modeled by finite-horizon Markov Decision Processes (MDPs) with a constraint on the number of batches.
We design a computationally efficient algorithm that achieves a near-optimal regret of $\tilde{O}(\sqrt{SAH^3K \ln (1/\delta)})$ in $K$ episodes, where $\tilde{O}(\cdot)$ hides logarithmic terms of $(S,A,H,K)$.
Our technical contributions are two-fold: 1) a near-optimal design scheme to explore
arXiv Detail & Related papers (2022-10-15T09:22:22Z) - Nearly Optimal Policy Optimization with Stable at Any Time Guarantee [53.155554415415445]
The policy-based method in [Shani et al., 2020] is only $\tilde{O}(\sqrt{SAH^3K} + \sqrt{AH^4K})$, where $S$ is the number of states, $A$ is the number of actions, $H$ is the horizon, and $K$ is the number of episodes, and there is a $\sqrt{SH}$ gap compared with the information-theoretic lower bound $\tilde{\Omega}(\sqrt{SAH^3K})$.
arXiv Detail & Related papers (2021-12-21T01:54:17Z) - On the Power of Multitask Representation Learning in Linear MDP [61.58929164172968]
This paper presents analyses for the statistical benefit of multitask representation learning in linear Markov Decision Processes (MDPs).
We first discover a Least-Activated-Feature-Abundance (LAFA) criterion, denoted as $\kappa$, with which we prove that a straightforward least-squares algorithm learns a policy which is $\tilde{O}(H^2\sqrt{\frac{\kappa \mathcal{C}(\Phi)^2 \kappa d}{NT}+\frac{\kappa d}{n}})$-suboptimal.
arXiv Detail & Related papers (2021-06-15T11:21:06Z) - Faster quantum-inspired algorithms for solving linear systems [0.05076419064097732]
We show that there is a classical algorithm that outputs a data structure for $x$ allowing sampling from and querying its entries.
This output can be viewed as a classical analogue to the output of quantum linear solvers.
arXiv Detail & Related papers (2021-03-18T15:12:44Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm sbcalg that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Improved quantum algorithm for A-optimal projection [4.248054546466641]
This paper corrects the time complexity of Duan et al.'s algorithm to $O(\frac{\kappa^{4s}\sqrt{k^s}}{\epsilon^s}\,\mathrm{polylog})$.
Our algorithm achieves a speedup compared to Duan et al.'s algorithm.
arXiv Detail & Related papers (2020-06-10T09:31:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.