Gradient-Variation Bound for Online Convex Optimization with Constraints
- URL: http://arxiv.org/abs/2006.12455v2
- Date: Tue, 23 Aug 2022 13:12:53 GMT
- Title: Gradient-Variation Bound for Online Convex Optimization with Constraints
- Authors: Shuang Qiu, Xiaohan Wei, Mladen Kolar
- Abstract summary: We study online convex optimization with constraints consisting of multiple functional constraints and a relatively simple constraint set, such as a Euclidean ball.
First-order methods achieve an $\mathcal{O}(\sqrt{T})$ regret and an $\mathcal{O}(1)$ constraint violation, but do not take into account the structural information of the problem.
In this paper, we provide an \emph{instance-dependent} bound for online convex optimization with complex constraints obtained by a novel online primal-dual mirror-prox algorithm.
- Score: 25.002868073267464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study online convex optimization with constraints consisting of multiple
functional constraints and a relatively simple constraint set, such as a
Euclidean ball. As enforcing the constraints at each time step through
projections is computationally challenging in general, we allow decisions to
violate the functional constraints but aim to achieve a low regret and
cumulative violation of the constraints over a horizon of $T$ time steps.
First-order methods achieve an $\mathcal{O}(\sqrt{T})$ regret and an
$\mathcal{O}(1)$ constraint violation, which is the best-known bound, but do
not take into account the structural information of the problem. Furthermore,
the existing algorithms and analysis are limited to Euclidean space. In this
paper, we provide an \emph{instance-dependent} bound for online convex
optimization with complex constraints obtained by a novel online primal-dual
mirror-prox algorithm. Our instance-dependent regret is quantified by the total
gradient variation $V_*(T)$ in the sequence of loss functions. The proposed
algorithm works in \emph{general} non-Euclidean spaces and simultaneously
achieves an $\mathcal{O}(\sqrt{V_*(T)})$ regret and an $\mathcal{O}(1)$
constraint violation, which is never worse than the best-known $(
\mathcal{O}(\sqrt{T}), \mathcal{O}(1) )$ result and improves over previous
works that applied mirror-prox-type algorithms to this problem and achieved
$\mathcal{O}(T^{2/3})$ regret and constraint violation. Finally, our algorithm
is computationally efficient, as it only performs mirror descent steps in each
iteration instead of solving a general Lagrangian minimization problem.
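To make the template behind these guarantees concrete: in the gradient-variation OCO literature, the quantity $V_*(T)$ is commonly defined as $V_*(T) = \sum_{t=2}^{T} \sup_{x} \|\nabla f_t(x) - \nabla f_{t-1}(x)\|_*^2$ (a standard definition; the paper's exact normalization may differ). Below is a minimal, hypothetical Python sketch of the primal-dual idea the abstract describes, namely cheap per-round mirror steps on an instantaneous Lagrangian instead of a full Lagrangian minimization. The Euclidean mirror map, the fixed step sizes, and all names here are assumptions for illustration; the paper's actual method is a mirror-prox (extra-gradient) scheme that also handles non-Euclidean geometries.

```python
import numpy as np

def project_ball(y, radius=1.0):
    """Euclidean projection onto the ball {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)

def online_primal_dual_sketch(loss_grads, cons_fns, cons_grads, T, d,
                              eta=0.1, sigma=0.1, radius=1.0):
    """Hypothetical sketch of online primal-dual updates for OCO with
    functional constraints g_i(x) <= 0 over a Euclidean ball.

    loss_grads[t](x) -> gradient of the round-t loss f_t at x
    cons_fns[i](x)   -> value of constraint g_i at x
    cons_grads[i](x) -> gradient of g_i at x

    The paper's algorithm is a mirror-prox (extra-gradient) scheme with
    gradient-variation guarantees; this plain one-step variant only
    illustrates 'mirror steps instead of Lagrangian minimization'.
    """
    x = np.zeros(d)                  # primal decision in the simple set
    lam = np.zeros(len(cons_fns))    # dual variables, one per constraint
    decisions = []
    for t in range(T):
        decisions.append(x.copy())
        # Gradient of the instantaneous Lagrangian f_t(x) + sum_i lam_i g_i(x)
        g = loss_grads[t](x)
        for l_i, grad_i in zip(lam, cons_grads):
            g = g + l_i * grad_i(x)
        # Primal: one projected (Euclidean mirror) descent step
        x = project_ball(x - eta * g, radius)
        # Dual: gradient ascent on the constraint values, kept nonnegative
        lam = np.maximum(0.0, lam + sigma * np.array([g_i(x) for g_i in cons_fns]))
    return decisions
```

For instance, with linear losses $f_t(x) = \langle c_t, x \rangle$ and a single affine constraint $g(x) = \langle a, x \rangle - b$, `loss_grads[t]` returns $c_t$ and `cons_grads[0]` returns $a$.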
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) estimators for complex gradient problems.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{d n^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - A Block Coordinate Descent Method for Nonsmooth Composite Optimization
under Orthogonality Constraints [7.716156977428555]
Nonsmooth composite optimization with orthogonality constraints has a broad spectrum of applications in statistical learning and data science.
\textit{\textbf{OBCD}} is a new Block Coordinate Descent method for solving such nonsmooth composite problems.
arXiv Detail & Related papers (2023-04-07T13:44:59Z) - Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, i.e., minimizing a twice differentiable function subject to general conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Private Isotonic Regression [54.32252900997422]
We consider the problem of isotonic regression over a partially ordered set (poset) $\mathcal{X}$ and for any Lipschitz loss function.
We obtain a pure-DP algorithm that has an expected excess empirical risk of roughly $\mathrm{width}(\mathcal{X}) \cdot \log|\mathcal{X}| / n$, where $\mathrm{width}(\mathcal{X})$ is the width of the poset.
We show that the bounds above are essentially the best that can be obtained.
arXiv Detail & Related papers (2022-10-27T05:08:07Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Regret and Cumulative Constraint Violation Analysis for Online Convex
Optimization with Long Term Constraints [24.97580261894342]
This paper considers online convex optimization with long term constraints, where constraints can be violated in intermediate rounds, but need to be satisfied in the long run.
A novel algorithm is first proposed and it achieves an $\mathcal{O}(T^{\max\{c,1-c\}})$ bound for static regret and an $\mathcal{O}(T^{(1-c)/2})$ bound for cumulative constraint violation, where $c \in (0,1)$ is a user-chosen trade-off parameter; for example, $c = 1/2$ yields $\mathcal{O}(\sqrt{T})$ regret and $\mathcal{O}(T^{1/4})$ violation.
arXiv Detail & Related papers (2021-06-09T15:18:06Z) - An Efficient Pessimistic-Optimistic Algorithm for Constrained Linear
Bandits [16.103685478563072]
This paper considers linear bandits with general constraints.
The objective is to maximize the expected cumulative reward over horizon $T$ subject to a set of constraints in each round.
We propose a pessimistic-optimistic algorithm for this problem, which is efficient in two aspects.
arXiv Detail & Related papers (2021-02-10T07:30:37Z) - Second-order Conditional Gradient Sliding [79.66739383117232]
We present the \emph{Second-Order Conditional Gradient Sliding} (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
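As a note on the oracle model in the SOCGS entry above: a linear optimization oracle (LMO) returns a minimizer of a linear objective over the feasible region, which is often much cheaper than projecting onto it. Here is a minimal sketch for the $\ell_1$ ball (the choice of set and all names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def lmo_l1_ball(c, radius=1.0):
    """Linear optimization oracle for the l1 ball:
    argmin_{||v||_1 <= radius} <c, v> is the signed vertex
    along the coordinate where |c| is largest."""
    i = int(np.argmax(np.abs(c)))
    v = np.zeros_like(c, dtype=float)
    v[i] = -radius * np.sign(c[i])
    return v

# A vanilla conditional-gradient (Frank-Wolfe) step using the oracle:
#   x_next = x + gamma * (lmo_l1_ball(grad_f(x)) - x)
```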
This list is automatically generated from the titles and abstracts of the papers on this site.