Order-Optimal Projection-Free Algorithm for Adversarially Constrained Online Convex Optimization
- URL: http://arxiv.org/abs/2502.16744v1
- Date: Sun, 23 Feb 2025 23:18:40 GMT
- Title: Order-Optimal Projection-Free Algorithm for Adversarially Constrained Online Convex Optimization
- Authors: Yiyang Lu, Mohammad Pedramfar, Vaneet Aggarwal
- Abstract summary: Projection-based algorithms for constrained Online Convex Optimization (COCO) face scalability challenges in high-dimensional settings. This paper introduces a projection-free algorithm for COCO that achieves state-of-the-art performance guarantees while eliminating the need for projections.
- Score: 29.705337940879705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Projection-based algorithms for constrained Online Convex Optimization (COCO) face scalability challenges in high-dimensional settings due to the computational complexity of projecting iterates onto constraint sets. This paper introduces a projection-free algorithm for COCO that achieves state-of-the-art performance guarantees while eliminating the need for projections. By integrating a separation oracle with adaptive Online Gradient Descent (OGD), employing a Lyapunov-driven surrogate function, and dynamically adjusting step sizes using gradient norms, our method jointly optimizes the regret and the cumulative constraint violation (CCV). We also use a blocked version of OGD that trades off the regret and CCV against the number of calls to the separation oracle. For convex cost functions, our algorithm attains an optimal regret of $\mathcal{O}(\sqrt{T})$ and a CCV of $\mathcal{O}(\sqrt{T} \log T)$, matching the best-known projection-based results, while using only $\tilde{\mathcal{O}}(T)$ calls to the separation oracle. The results also demonstrate a tradeoff in which fewer calls to the separation oracle increase the regret and the CCV. In the strongly convex setting, we further achieve a regret of $\mathcal{O}(\log T)$ and a CCV of $\mathcal{O}(\sqrt{T \log T})$, while requiring $\mathcal{O}(T^2)$ calls to the separation oracle; the tradeoff under fewer oracle calls is studied here as well. These results close the gap between projection-free and projection-based approaches, demonstrating that projection-free methods can achieve performance comparable to their projection-based counterparts.
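To make the mechanism described in the abstract concrete, the sketch below shows what one round of a separation-oracle-driven, projection-free OGD update could look like in Python. It is a minimal illustration under our own assumptions: the helper names (`coco_round`, `ball_oracle`), the crude CCV-based Lyapunov weight, and the step-size rule are illustrative stand-ins, not the paper's exact surrogate function, potential, or blocked-OGD schedule.

```python
import numpy as np

def ball_oracle(y, radius=1.0):
    """Separation oracle for a Euclidean ball (illustrative stand-in for X).

    Returns None if y is feasible, otherwise a unit normal a and offset b
    such that a . y > b >= max_{x in X} a . x (a separating hyperplane).
    """
    norm = np.linalg.norm(y)
    if norm <= radius:
        return None
    return y / norm, radius

def coco_round(x, cost_grad, violation, violation_grad, state,
               separation_oracle, diameter=1.0):
    """One round of a hypothetical projection-free COCO update (a sketch).

    Mixes the cost gradient with the constraint-violation gradient using a
    crude CCV-based Lyapunov weight, takes an OGD step whose size adapts to
    the accumulated gradient norms, then restores (approximate) feasibility
    with one separation-oracle call instead of a projection.
    """
    state["ccv"] += max(violation, 0.0)                  # running CCV
    surrogate = cost_grad + state["ccv"] * violation_grad
    state["grad_sq"] += float(np.dot(surrogate, surrogate))
    eta = diameter / np.sqrt(state["grad_sq"] + 1e-12)   # adaptive step size
    y = x - eta * surrogate                              # OGD step
    cut = separation_oracle(y)
    if cut is not None:                                  # infeasible: move back
        a, b = cut                                       # across the hyperplane
        y = y - (np.dot(a, y) - b) * a
    return y, state

# Toy usage: one round on a 3-dimensional problem.
state = {"ccv": 0.0, "grad_sq": 0.0}
x = np.zeros(3)
x, state = coco_round(x,
                      cost_grad=np.array([1.0, -2.0, 0.5]),
                      violation=0.3,
                      violation_grad=np.array([0.1, 0.1, 0.1]),
                      state=state,
                      separation_oracle=ball_oracle)
```

For the ball, a single cut happens to recover the exact projection; for a general convex set, one cut only moves the iterate back across a violated hyperplane, which is why controlling how often the oracle is invoked (as the paper's blocked OGD variant does) trades oracle calls against regret and CCV.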
Related papers
- Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency [52.60557300927007]
We present an MA-OSMA algorithm to transfer the discrete submodular problem into a continuous optimization problem. We also introduce a projection-free MA-OSEA algorithm, which effectively utilizes the KL divergence by mixing in a uniform distribution. Our algorithms significantly improve the $(\frac{1}{1+c})$-approximation provided by the state-of-the-art OSG algorithm.
arXiv Detail & Related papers (2025-02-07T15:57:56Z) - Projection-free Algorithms for Online Convex Optimization with Adversarial Constraints [10.047668792033033]
We study a generalization of the Online Convex Optimization (OCO) framework with time-varying adversarial constraints. In this problem, after selecting a feasible action from the convex decision set $X$, a convex constraint function is revealed alongside the cost function in each round. We propose a *projection-free* online policy which makes a single call to a Linear Program (LP) solver per round.
arXiv Detail & Related papers (2025-01-28T13:04:32Z) - Revisiting Projection-Free Online Learning with Time-Varying Constraints [35.573654458435854]
We investigate constrained online convex optimization, in which decisions must belong to a fixed and typically complicated domain.
Several projection-free methods have been proposed with an $\mathcal{O}(T^{3/4} \sqrt{\log T})$ regret bound and an $\mathcal{O}(T^{3/4} \sqrt{\log T})$ cumulative constraint violation (CCV) bound for general convex losses.
In this paper, we improve this result and further establish novel regret and CCV bounds when loss functions are strongly convex.
arXiv Detail & Related papers (2025-01-27T13:38:51Z) - Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) gradient estimators for composite optimization problems.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{d n^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we consider the robust phase retrieval problem, where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral initialization, using a simple gradient descent step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Riemannian Projection-free Online Learning [5.918057694291832]
The projection operation is a critical component in a range of optimization algorithms, such as online gradient descent (OGD).
It suffers from computational limitations in high-dimensional settings or when dealing with ill-conditioned constraint sets.
This paper presents methods for obtaining sub-linear regret guarantees in online geodesically convex optimization on curved spaces.
arXiv Detail & Related papers (2023-05-30T18:22:09Z) - Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation
Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method that held the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z) - Functional Constrained Optimization for Risk Aversion and Sparsity
Control [7.561780884831967]
Risk and sparsity requirements need to be enforced simultaneously in many applications, e.g., in portfolio optimization, assortment planning, and radiation planning.
We propose a Level Conditional Gradient (LCG) method, which generates a sparse solution trajectory for these challenges.
The method replaces projections onto the feasible set with calls to an inner conditional gradient oracle (CGO) that solves a mini-max subproblem via subgradient steps.
arXiv Detail & Related papers (2022-10-11T02:51:51Z) - Exploiting the Curvature of Feasible Sets for Faster Projection-Free
Online Learning [8.461907111368628]
We develop new efficient projection-free algorithms for Online Convex Optimization (OCO).
In particular, we develop an OCO algorithm that makes two calls to a Linear Optimization (LO) oracle per round and achieves the near-optimal $\widetilde{O}(\sqrt{T})$ regret.
We also present an algorithm for general convex sets that makes an expected $\widetilde{O}(d)$ number of calls to an LO oracle per round.
arXiv Detail & Related papers (2022-05-23T17:13:46Z) - Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free
Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z)