Regret and Cumulative Constraint Violation Analysis for Distributed
Online Constrained Convex Optimization
- URL: http://arxiv.org/abs/2105.00321v1
- Date: Sat, 1 May 2021 18:28:53 GMT
- Title: Regret and Cumulative Constraint Violation Analysis for Distributed
Online Constrained Convex Optimization
- Authors: Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl H.
Johansson
- Abstract summary: This paper considers the distributed online convex optimization problem with time-varying constraints over a network of agents.
Two algorithms with full-information and bandit feedback are proposed.
- Score: 24.97580261894342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the distributed online convex optimization problem with
time-varying constraints over a network of agents. This is a sequential
decision making problem with two sequences of arbitrarily varying convex loss
and constraint functions. At each round, each agent selects a decision from the
decision set, and then only a portion of the loss function and a coordinate
block of the constraint function at this round are privately revealed to this
agent. The goal of the network is to minimize network regret and constraint
violation. Two distributed online algorithms with full-information and bandit
feedback are proposed. Both dynamic and static network regret bounds are
analyzed for the proposed algorithms, and network cumulative constraint
violation is used to measure constraint violation, which excludes the
situation where strictly feasible constraints compensate for the effects of
violated constraints. In particular, we show that the proposed algorithms achieve
$\mathcal{O}(T^{\max\{\kappa,1-\kappa\}})$ static network regret and
$\mathcal{O}(T^{1-\kappa/2})$ network cumulative constraint violation, where
$T$ is the total number of rounds and $\kappa\in(0,1)$ is a user-defined
trade-off parameter. Moreover, if the loss functions are strongly convex, then
the static network regret bound can be reduced to $\mathcal{O}(T^{\kappa})$.
Finally, numerical simulations are provided to illustrate the effectiveness of
the theoretical results.
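The cumulative constraint violation metric used in the abstract, $\sum_{t=1}^{T}[g_t(x_t)]_+$, can be contrasted with the weaker metric $[\sum_{t=1}^{T} g_t(x_t)]_+$ in a few lines of Python. This is an illustrative sketch of the metric only, not the paper's algorithm; the per-round constraint values below are hypothetical.

```python
def weak_violation(gs):
    # [sum_t g_t(x_t)]_+ : strictly feasible rounds (g_t < 0) can offset
    # violated rounds, hiding violations
    return max(sum(gs), 0.0)

def cumulative_violation(gs):
    # sum_t [g_t(x_t)]_+ : each violated round counts; no cancellation
    return sum(max(g, 0.0) for g in gs)

gs = [1.0, -2.0, 1.0, -0.5]      # hypothetical per-round constraint values
print(weak_violation(gs))        # 0.0 -- violations masked by feasible rounds
print(cumulative_violation(gs))  # 2.0 -- violations remain visible
```

The second metric is the one analyzed in the paper: feasible rounds cannot cancel the contribution of violated rounds.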
Related papers
- Fixed-Budget Differentially Private Best Arm Identification [62.36929749450298]
We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints.
We derive a minimax lower bound on the error probability, and demonstrate that the lower and the upper bounds decay exponentially in $T$.
arXiv Detail & Related papers (2024-01-17T09:23:25Z) - Multi-point Feedback of Bandit Convex Optimization with Hard Constraints [1.8130068086063336]
We study bandit convex optimization with constraints, where the learner aims to generate a sequence of decisions under partial information of loss functions.
We adopt the cumulative hard constraint violation as the metric of constraint violation.
Our algorithm attains $\mathcal{O}(d^2 T^{\max\{c,1-c\}})$ regret bounds and $\mathcal{O}(d^2 T^{1-\frac{c}{2}})$ cumulative hard constraint violation bounds for convex loss functions and time-varying constraints.
arXiv Detail & Related papers (2023-10-17T02:43:22Z) - Distributed Online Convex Optimization with Adversarial Constraints:
Reduced Cumulative Constraint Violation Bounds under Slater's Condition [29.809415829907525]
This paper considers distributed online convex optimization with adversarial constraints.
Agents collaborate to minimize network regret and cumulative constraint violation.
To the best of our knowledge, this paper is the first to achieve reduced (network) cumulative constraint violation bounds for (distributed) online convex optimization with adversarial constraints.
arXiv Detail & Related papers (2023-05-31T19:39:15Z) - Projection-Free Online Convex Optimization with Stochastic Constraints [0.0]
We develop projection-free algorithms for online convex optimization with constraints.
We deduce sublinear regret and constraint violation bounds for various settings.
We prove that the constraint violation can be reduced to have the same growth as the regret.
arXiv Detail & Related papers (2023-05-02T11:27:34Z) - On Dynamic Regret and Constraint Violations in Constrained Online Convex
Optimization [17.412117389855222]
We propose an algorithm that follows projected gradient descent over a suitably chosen set around the current action.
We show that both the dynamic regret and the constraint violation are order-wise bounded by the path-length, i.e., the sum of the distances between consecutive optimal actions.
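The projected-gradient update described in this entry can be sketched generically. This is a minimal sketch over a fixed box constraint, not the cited paper's suitably chosen set around the current action; the step size and loss are illustrative.

```python
def project_box(x, lo, hi):
    # Euclidean projection onto the interval [lo, hi]
    return max(lo, min(hi, x))

def online_pgd(grads, x0=0.0, lo=-1.0, hi=1.0, eta=0.1):
    # grads: per-round gradient oracles g_t; returns the action sequence
    xs, x = [], x0
    for grad in grads:
        xs.append(x)                                 # play current action
        x = project_box(x - eta * grad(x), lo, hi)   # step, then project
    return xs

# With a fixed loss f(x) = (x - 0.5)^2, the iterates drift toward 0.5
actions = online_pgd([lambda x: 2 * (x - 0.5)] * 50)
```

When the optimal actions move slowly (small path-length), successive iterates track them closely, which is the intuition behind path-length-dependent bounds.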
arXiv Detail & Related papers (2023-01-24T04:22:13Z) - A Unifying Framework for Online Optimization with Long-Term Constraints [62.35194099438855]
We study online learning problems in which a decision maker has to take a sequence of decisions subject to $m$ long-term constraints.
The goal is to maximize their total reward, while at the same time achieving small cumulative violation across the $T$ rounds.
We present the first best-of-both-worlds algorithm for this general class of problems, with no-regret guarantees both when rewards and constraints are selected according to an unknown model and when they are selected at each round by an adversary.
arXiv Detail & Related papers (2022-09-15T16:59:19Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - Regret and Cumulative Constraint Violation Analysis for Online Convex
Optimization with Long Term Constraints [24.97580261894342]
This paper considers online convex optimization with long term constraints, where constraints can be violated in intermediate rounds, but need to be satisfied in the long run.
A novel algorithm is first proposed and it achieves an $\mathcal{O}(T^{\max\{c,1-c\}})$ bound for static regret and an $\mathcal{O}(T^{(1-c)/2})$ bound for cumulative constraint violation.
arXiv Detail & Related papers (2021-06-09T15:18:06Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z) - Upper Confidence Primal-Dual Reinforcement Learning for CMDP with
Adversarial Loss [145.54544979467872]
We consider online learning for episodically constrained Markov decision processes (CMDPs).
We propose a new upper confidence primal-dual algorithm, which only requires the trajectories sampled from the transition model.
Our analysis incorporates a new high-probability drift analysis of Lagrange multiplier processes into the celebrated regret analysis of upper confidence reinforcement learning.
arXiv Detail & Related papers (2020-03-02T05:02:23Z) - Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDPs with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.