Solving optimization problems with Blackwell approachability
- URL: http://arxiv.org/abs/2202.12277v1
- Date: Thu, 24 Feb 2022 18:19:21 GMT
- Title: Solving optimization problems with Blackwell approachability
- Authors: Julien Grand-Clément and Christian Kroer
- Abstract summary: We introduce the Conic Blackwell Algorithm$^+$ (CBA$^+$) regret minimizer.
CBA$^+$ is based on Blackwell approachability and attains $O(\sqrt{T})$ regret.
Based on CBA$^+$, we introduce SP-CBA$^+$, a new parameter-free algorithm for solving convex-concave saddle-point problems.
- Score: 31.29341288414507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the Conic Blackwell Algorithm$^+$ (CBA$^+$) regret minimizer, a
new parameter- and scale-free regret minimizer for general convex sets. CBA$^+$
is based on Blackwell approachability and attains $O(\sqrt{T})$ regret. We show
how to efficiently instantiate CBA$^+$ for many decision sets of interest,
including the simplex, $\ell_{p}$ norm balls, and ellipsoidal confidence
regions in the simplex. Based on CBA$^+$, we introduce SP-CBA$^+$, a new
parameter-free algorithm for solving convex-concave saddle-point problems,
which achieves a $O(1/\sqrt{T})$ ergodic rate of convergence. In our
simulations, we demonstrate the wide applicability of SP-CBA$^+$ on several
standard saddle-point problems, including matrix games, extensive-form games,
distributionally robust logistic regression, and Markov decision processes. In
each setting, SP-CBA$^+$ achieves state-of-the-art numerical performance, and
outperforms classical methods, without the need for any choice of step sizes or
other algorithmic parameters.
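To make the self-play construction behind SP-CBA$^+$ concrete, here is a minimal sketch of solving a matrix game $\min_x \max_y x^\top A y$ by running one Blackwell-approachability-based regret minimizer per player and averaging the iterates. For simplicity it uses Regret Matching$^+$ on the simplex, a classical close relative of CBA$^+$ instantiated on the simplex, rather than the authors' exact CBA$^+$/SP-CBA$^+$ implementation; like SP-CBA$^+$, it needs no step sizes, and the averaged strategies approach a saddle point at an $O(1/\sqrt{T})$ ergodic rate.

```python
# Sketch: matrix-game saddle-point solving by self-play of Blackwell-
# approachability-based regret minimizers (Regret Matching+ as a stand-in for CBA+).
import numpy as np

def rm_plus_strategy(q):
    """Read a simplex strategy off the nonnegative regret accumulator."""
    s = q.sum()
    return q / s if s > 0 else np.full(len(q), 1.0 / len(q))

def solve_matrix_game(A, T=20_000):
    m, n = A.shape
    qx, qy = np.zeros(m), np.zeros(n)        # clipped regret accumulators
    avg_x, avg_y = np.zeros(m), np.zeros(n)  # ergodic (uniform) averages
    for _ in range(T):
        x, y = rm_plus_strategy(qx), rm_plus_strategy(qy)
        loss_x = A @ y            # x minimizes: losses of its pure actions
        gain_y = A.T @ x          # y maximizes: payoffs of its pure actions
        value = x @ loss_x
        # Regret Matching+ update: add instantaneous regrets, clip at zero.
        qx = np.maximum(qx + (value - loss_x), 0.0)
        qy = np.maximum(qy + (gain_y - value), 0.0)
        avg_x += x
        avg_y += y
    return avg_x / T, avg_y / T

if __name__ == "__main__":
    # Rock-paper-scissors payoffs, written as the row player's losses.
    A = np.array([[ 0.0,  1.0, -1.0],
                  [-1.0,  0.0,  1.0],
                  [ 1.0, -1.0,  0.0]])
    x_bar, y_bar = solve_matrix_game(A)
    gap = (A.T @ x_bar).max() - (A @ y_bar).min()  # saddle-point (duality) gap
    print(x_bar, y_bar, gap)
```

The clipping step `np.maximum(..., 0.0)` is the Blackwell-approachability forcing step: it keeps the accumulated regrets in the nonnegative orthant, and the next strategy is obtained from them by simple normalization, which is why no step size or scale parameter ever appears.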
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale, and we demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that, for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, this runtime guarantee holds.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum
Minimization [52.25843977506935]
We propose an adaptive variance-reduction method, called AdaSpider, for $L$-smooth, non-convex functions with a finite-sum structure.
In doing so, we are able to compute an $\epsilon$-stationary point with $\tilde{O}\left(n + \sqrt{n}/\epsilon^{2}\right)$ oracle calls.
arXiv Detail & Related papers (2022-11-03T14:41:46Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed-confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to reduce the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on strong convexity.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Towards Painless Policy Optimization for Constrained MDPs [46.12526917024248]
We study policy optimization in an infinite-horizon, $\gamma$-discounted constrained Markov decision process (CMDP).
Our objective is to return a policy that achieves large expected reward with a small constraint violation.
We propose a generic primal-dual framework that allows us to bound the reward sub-optimality and constraint violation for arbitrary algorithms.
arXiv Detail & Related papers (2022-04-11T15:08:09Z) - Conic Blackwell Algorithm: Parameter-Free Convex-Concave Saddle-Point
Solving [21.496093093434357]
We develop a new simple regret minimizer, the Conic Blackwell Algorithm$^+$ (CBA$^+$).
We show how to implement CBA$^+$ for the simplex, $\ell_p$ norm balls, and ellipsoidal confidence regions in the simplex.
We present numerical experiments for solving matrix games and distributionally robust optimization problems.
arXiv Detail & Related papers (2021-05-27T14:50:31Z) - Lazy OCO: Online Convex Optimization on a Switching Budget [34.936641201844054]
We study a variant of online convex optimization where the player is permitted to switch decisions at most $S$ times in expectation throughout $T$ rounds.
Similar problems have been addressed in prior work for the discrete decision set setting, and more recently in the continuous setting but only with an adaptive adversary.
arXiv Detail & Related papers (2021-02-07T14:47:19Z) - Efficient algorithms for multivariate shape-constrained convex
regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(d+1)n$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by the semismooth Newton method. A toy formulation of this QP is sketched after this list.
arXiv Detail & Related papers (2020-02-26T11:18:43Z)
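As a concrete illustration of the convex-regression entry above, the following hypothetical toy script formulates the least squares estimator as a QP in cvxpy with a generic solver: the $(d+1)n$ variables are the fitted values $\theta_i$ and the subgradients $\xi_i$, and convexity is imposed through the $n(n-1)$ linear inequalities. It only sketches the problem structure; it is not the sGS-ADMM or pALM solvers developed in that paper, which are what make the large-scale case tractable.

```python
# Toy convex-regression least-squares QP: fit a convex function to (X, y).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 3                                   # keep n small: n(n-1) constraints
X = rng.standard_normal((n, d))
y = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(n)   # noisy convex target

theta = cp.Variable(n)                         # fitted values at each X_i
xi = cp.Variable((n, d))                       # a subgradient of the fit at each X_i
constraints = [
    theta[j] >= theta[i] + xi[i, :] @ (X[j] - X[i])
    for i in range(n) for j in range(n) if i != j
]                                              # convexity: n(n-1) inequalities
problem = cp.Problem(cp.Minimize(cp.sum_squares(theta - y)), constraints)
problem.solve()
print("optimal least-squares objective:", problem.value)
```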
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.