Multi-Objective $\textit{min-max}$ Online Convex Optimization
- URL: http://arxiv.org/abs/2510.13560v1
- Date: Wed, 15 Oct 2025 13:59:44 GMT
- Title: Multi-Objective $\textit{min-max}$ Online Convex Optimization
- Authors: Rahul Vaze, Sumiran Mishra,
- Abstract summary: In online convex optimization (OCO), a single loss function sequence is revealed over a time horizon of $T$. In this paper, we consider multi-objective OCO, where there are $K$ distinct loss function sequences. We show that the expected $\textit{min-max}$ regret is $O(\sqrt{T \log K})$.
- Score: 4.6838063911731025
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In online convex optimization (OCO), a single loss function sequence is revealed over a time horizon of $T$, and an online algorithm has to choose its action at time $t$, before the loss function at time $t$ is revealed. The goal of the online algorithm is to incur minimal penalty (called $\textit{regret}$) compared to a static optimal action chosen by an optimal offline algorithm that knows all functions of the sequence in advance. In this paper, we broaden the horizon of OCO and consider multi-objective OCO, where there are $K$ distinct loss function sequences, and an algorithm has to choose its action at time $t$, before the $K$ loss functions at time $t$ are revealed. To capture the tradeoff between tracking the $K$ different sequences, we consider the $\textit{min-max}$ regret, where the benchmark (optimal offline algorithm) takes a static action across all time slots that minimizes the maximum of the total loss (summed across time slots) incurred by each of the $K$ sequences. An online algorithm is allowed to change its action across time slots, and its $\textit{min-max}$ regret is defined as the difference between its $\textit{min-max}$ cost and that of the benchmark. The $\textit{min-max}$ regret is a stringent performance measure, and an algorithm with small regret needs to `track' all loss function sequences closely at all times. We consider this $\textit{min-max}$ regret in the i.i.d. input setting, where all loss functions are generated i.i.d. from an unknown distribution. For the i.i.d. model, we propose a simple algorithm that combines the well-known $\textit{Hedge}$ and online gradient descent (OGD) algorithms, and show via a remarkably simple proof that its expected $\textit{min-max}$ regret is $O(\sqrt{T \log K})$.
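The Hedge-plus-OGD combination described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the weighting rule, the step sizes `eta_x` and `eta_w`, and the projection onto a Euclidean ball are all assumptions.

```python
import numpy as np

def hedge_ogd(loss_fns, grad_fns, T, d, eta_x, eta_w, radius=1.0):
    """Sketch of one Hedge + OGD combination for min-max OCO.

    Hedge keeps a weight per loss sequence, exponentially tilted toward
    the sequence with the largest cumulative loss (the one the max
    benchmark penalizes), and OGD descends the Hedge-weighted gradient.
    """
    K = len(loss_fns)
    x = np.zeros(d)               # current action
    cum_loss = np.zeros(K)        # cumulative loss of each sequence
    for t in range(T):
        # play x, then observe this round's K losses
        for k in range(K):
            cum_loss[k] += loss_fns[k](x, t)
        # Hedge weights over the K sequences (shifted for numerical stability)
        w = np.exp(eta_w * (cum_loss - cum_loss.max()))
        w /= w.sum()
        # one OGD step on the Hedge-weighted gradient
        g = sum(w[k] * grad_fns[k](x, t) for k in range(K))
        x = x - eta_x * g
        # project back onto the Euclidean ball of the given radius
        nrm = np.linalg.norm(x)
        if nrm > radius:
            x *= radius / nrm
    return x, cum_loss
```

With two quadratic sequences pulling the action in opposite directions, the weighted update settles near the point where the two cumulative losses balance, which is exactly what the min-max benchmark rewards.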
Related papers
- Optimal Anytime Algorithms for Online Convex Optimization with Adversarial Constraints [7.798233121583888]
We propose an anytime online algorithm for the problem of learning a sequence of adversarial convex cost functions. Our proposed algorithm achieves optimal performance bounds without resorting to the standard doubling trick. We show that our algorithm achieves $O(\sqrt{t})$ regret and $\tilde{O}(\sqrt{t})$ cumulative constraint violation bounds for any $t \geq 1$.
arXiv Detail & Related papers (2025-10-26T08:35:37Z) - Exploiting Curvature in Online Convex Optimization with Delayed Feedback [6.390468088226495]
We study the online convex optimization problem with curved losses and delayed feedback. We propose a variant of follow-the-regularized-leader that obtains regret of order $\min\{\sigma_{\max} \ln T, \sqrt{d_{\mathrm{tot}}}\}$. We then consider exp-concave losses and extend the Online Newton Step algorithm to handle delays with adaptive learning rate tuning.
arXiv Detail & Related papers (2025-06-09T09:49:54Z) - Near-optimal Regret Using Policy Optimization in Online MDPs with Aggregate Bandit Feedback [49.84060509296641]
We study online finite-horizon Markov Decision Processes with adversarially changing loss and aggregate bandit feedback (a.k.a. full-bandit). Under this type of feedback, the agent observes only the total loss incurred over the entire trajectory, rather than the individual losses at each intermediate step within the trajectory. We introduce the first Policy Optimization algorithms for this setting.
arXiv Detail & Related papers (2025-02-06T12:03:24Z) - Near-Optimal Algorithms for Omniprediction [6.874077229518565]
We give near-optimal learning algorithms for omniprediction, in both the online and offline settings. The online learning algorithm achieves near-optimal complexity across a number of measures. The offline learning algorithm returns an efficient $(\mathcal{L}_{\mathrm{BV}}, \mathcal{H}, \varepsilon(m))$-omnipredictor.
arXiv Detail & Related papers (2025-01-28T02:58:37Z) - An Optimistic Algorithm for Online Convex Optimization with Adversarial Constraints [55.2480439325792]
We study Online Convex Optimization (OCO) with adversarial constraints. We focus on a setting where the algorithm has access to predictions of the loss and constraint functions. Our results show that we can improve the current best bounds of $O(\sqrt{T})$ regret and $\tilde{O}(\sqrt{T})$ cumulative constraint violations.
arXiv Detail & Related papers (2024-12-11T03:06:42Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - An Efficient Interior-Point Method for Online Convex Optimization [31.771131314017385]
The algorithm is adaptive, in the sense that the regret bounds hold not only for the time periods $1, \ldots, T$ but also for every sub-interval $s, s+1, \ldots, t$.
The running time of the algorithm matches that of newly introduced interior point algorithms for regret minimization.
arXiv Detail & Related papers (2023-07-21T16:12:46Z) - Non-stationary Online Convex Optimization with Arbitrary Delays [50.46856739179311]
This paper investigates the delayed online convex optimization (OCO) in non-stationary environments.
We first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to their arrival order.
We develop an improved algorithm, which reduces those dynamic regret bounds achieved by DOGD to $O(\sqrt{\bar{d}T(P_T+1)})$.
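The DOGD idea described above, taking a gradient descent step for each delayed gradient in its arrival order, can be sketched as follows. This is an illustrative reconstruction under assumed interfaces (the `delays` list, step size, and ball projection are not from the paper):

```python
import numpy as np

def dogd(grads_by_round, delays, T, d, eta, radius=1.0):
    """Sketch of delayed OGD: the gradient computed at round t arrives
    at round t + delays[t]; one descent step is taken per arriving
    gradient, in arrival order, evaluated at the action played at t."""
    x = np.zeros(d)
    played = []                          # actions actually played
    # bucket each round's gradient by its arrival round
    arrivals = {t: [] for t in range(T)}
    for t, delay in enumerate(delays):
        arrive = t + delay
        if arrive < T:
            arrivals[arrive].append(t)
    for t in range(T):
        played.append(x.copy())
        # one descent step per gradient arriving now, in order
        for s in arrivals[t]:
            g = grads_by_round[s](played[s])
            x = x - eta * g
            # project back onto the Euclidean ball
            nrm = np.linalg.norm(x)
            if nrm > radius:
                x *= radius / nrm
    return played
```

Even with a constant feedback delay, the stale-gradient steps still drive the iterate toward the minimizer, just with a slower effective contraction.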
arXiv Detail & Related papers (2023-05-20T07:54:07Z) - Eluder-based Regret for Stochastic Contextual MDPs [43.19667415823089]
We present the E-UC$^3$RL algorithm for regret minimization in Contextual Markov Decision Processes (CMDPs).
Our algorithm is efficient (assuming efficient offline regression oracles) and enjoys a regret guarantee of $\widetilde{O}(H^3 \sqrt{T\,|S|\,|A|\,d_{\mathrm{E}}(\mathcal{P})})$.
arXiv Detail & Related papers (2022-11-27T20:38:47Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.