Rectified Pessimistic-Optimistic Learning for Stochastic Continuum-armed
Bandit with Constraints
- URL: http://arxiv.org/abs/2211.14720v2
- Date: Tue, 29 Nov 2022 04:15:03 GMT
- Title: Rectified Pessimistic-Optimistic Learning for Stochastic Continuum-armed
Bandit with Constraints
- Authors: Hengquan Guo, Qi Zhu, and Xin Liu
- Abstract summary: We optimize a black-box reward function $f(x)$ subject to a black-box constraint function $g(x)\leq 0$ over a continuous space.
We propose a Rectified Pessimistic-Optimistic Learning framework (RPOL), a penalty-based method incorporating optimistic and pessimistic GP bandit learning for reward and constraint functions.
- Score: 4.879346089164413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the problem of stochastic continuum-armed bandit with
constraints (SCBwC), where we optimize a black-box reward function $f(x)$
subject to a black-box constraint function $g(x)\leq 0$ over a continuous space
$\mathcal X$. We model reward and constraint functions via Gaussian processes
(GPs) and propose a Rectified Pessimistic-Optimistic Learning framework (RPOL),
a penalty-based method incorporating optimistic and pessimistic GP bandit
learning for reward and constraint functions, respectively. We consider the
metric of cumulative constraint violation $\sum_{t=1}^T(g(x_t))^{+},$ which is
strictly stronger than the traditional long-term constraint violation
$\sum_{t=1}^Tg(x_t).$ The rectified design for the penalty update and the
pessimistic learning for the constraint function in RPOL guarantee the
cumulative constraint violation is minimal. RPOL can achieve sublinear regret
and cumulative constraint violation for SCBwC and its variants (e.g., under
delayed feedback and in non-stationary environments). These theoretical results
match their unconstrained counterparts. Our experiments show that RPOL
outperforms several existing baseline algorithms.
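To make the framework concrete, here is a minimal toy sketch of the idea described in the abstract, assuming a grid-discretized action space and scikit-learn GP surrogates; the toy $f$ and $g$, the confidence width beta, the acquisition rule, and the penalty update are illustrative assumptions, not the paper's exact construction.

```python
# Toy sketch of the RPOL idea (not the paper's exact algorithm): optimistic GP
# learning for the reward, pessimistic GP learning for the constraint, and a
# rectified penalty so feasible slack never cancels out violations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x)        # toy black-box reward
g = lambda x: x - 0.6                # toy black-box constraint, want g(x) <= 0

grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
X, yf, yg = [], [], []
Q, beta = 0.0, 2.0                   # rectified penalty and confidence width (assumed)

for t in range(50):
    if t < 3:                        # seed the surrogates with a few random arms
        idx = int(rng.integers(len(grid)))
    else:
        XA = np.array(X).reshape(-1, 1)
        gpf = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(XA, yf)
        gpg = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(XA, yg)
        mu_f, sd_f = gpf.predict(grid, return_std=True)
        mu_g, sd_g = gpg.predict(grid, return_std=True)
        ucb_f = mu_f + beta * sd_f                    # optimistic reward estimate
        pess_g = np.maximum(mu_g + beta * sd_g, 0.0)  # rectified pessimistic violation estimate
        idx = int(np.argmax(ucb_f - Q * pess_g))      # penalty-based acquisition
        Q += pess_g[idx]                              # rectified penalty update (illustrative)
    x = float(grid[idx, 0])
    X.append(x)
    yf.append(f(x) + 0.05 * rng.standard_normal())    # noisy bandit feedback
    yg.append(g(x) + 0.05 * rng.standard_normal())

print("cumulative violation:", sum(max(g(x), 0.0) for x in X))
```

The point the sketch tries to convey is that the constraint enters only through its rectified pessimistic estimate $(\cdot)^+$, so feasible slack never offsets past violations, which is what ties the method to the cumulative (rather than long-term) violation metric.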
Related papers
- Online Convex Optimization with Stochastic Constraints: Zero Constraint
Violation and Bandit Feedback [0.0]
We propose a variant of the drift-plus-penalty algorithm that guarantees $O(\sqrt{T})$ expected regret and zero constraint violation.
Our algorithm is oblivious to the length of the time horizon $T$, in contrast to the vanilla drift-plus-penalty method.
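For orientation, a vanilla drift-plus-penalty step (the method this paper builds a variant of) can be sketched as below; the weights V and alpha and the box domain are placeholder assumptions.

```python
# One vanilla drift-plus-penalty step for online convex optimization with a
# constraint g_t(x) <= 0 on the box [0, 1]^d; V and alpha are assumed weights.
import numpy as np

def dpp_step(x, grad_f, grad_g, g_val, Q, V=10.0, alpha=10.0):
    # Penalized proximal step: minimize V*<grad_f, y> + Q*<grad_g, y> + alpha*||y - x||^2.
    x_new = np.clip(x - (V * grad_f + Q * grad_g) / (2.0 * alpha), 0.0, 1.0)
    # Virtual queue ("drift") accumulates the observed constraint violation.
    Q_new = max(Q + g_val, 0.0)
    return x_new, Q_new
```

The paper's variant differs in how these weights and the queue are tuned so that the horizon $T$ never enters the update.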
arXiv Detail & Related papers (2023-01-26T18:04:26Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms the existing method that held the previously best-known result.
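As a generic reference point (not the paper's exact method), a stochastic inexact augmented Lagrangian loop for min f(x) s.t. g(x) <= 0 looks like the following; the toy f and g, penalty parameter rho, and step sizes are assumptions.

```python
import numpy as np

def aug_lag_grad(x, lam, rho):
    grad_f = 2.0 * (x - 2.0)                    # toy objective f(x) = (x - 2)^2
    g, grad_g = x - 1.0, 1.0                    # toy constraint g(x) = x - 1 <= 0
    # Gradient of the (PHR) augmented Lagrangian for an inequality constraint.
    return grad_f + max(lam + rho * g, 0.0) * grad_g

x, lam, rho = 0.0, 0.0, 1.0
for _ in range(20):                             # outer multiplier updates
    for _ in range(50):                         # inexact inner solve with noisy gradients
        x -= 0.05 * (aug_lag_grad(x, lam, rho) + 0.1 * np.random.randn())
    lam = max(lam + rho * (x - 1.0), 0.0)       # dual ascent on the multiplier
print(x, lam)                                   # approaches the constrained optimum x = 1
```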
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Penalized Proximal Policy Optimization for Safe Reinforcement Learning [68.86485583981866]
We propose Penalized Proximal Policy Optimization (P3O), which solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem.
P3O utilizes a simple-yet-effective penalty function to eliminate cost constraints and removes the trust-region constraint by the clipped surrogate objective.
We show that P3O outperforms state-of-the-art algorithms with respect to both reward improvement and constraint satisfaction on a set of constrained locomotive tasks.
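A schematic of the penalty-function idea (numpy pseudocode under assumed inputs, not the authors' implementation):

```python
import numpy as np

def p3o_loss(ratio, adv_r, adv_c, cost_return, cost_limit, eps=0.2, kappa=20.0):
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    reward_surr = np.minimum(ratio * adv_r, clipped * adv_r).mean()  # PPO clipped surrogate
    cost_surr = np.maximum(ratio * adv_c, clipped * adv_c).mean()    # pessimistic cost surrogate
    # ReLU-style exact penalty: active only when the estimated cost exceeds the
    # limit, which removes the explicit cost constraint from the problem.
    penalty = kappa * max(cost_return + cost_surr - cost_limit, 0.0)
    return -reward_surr + penalty                                    # to be minimized
```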
arXiv Detail & Related papers (2022-05-24T06:15:51Z)
- Towards Painless Policy Optimization for Constrained MDPs [46.12526917024248]
We study policy optimization in an infinite-horizon, $\gamma$-discounted constrained Markov decision process (CMDP).
Our objective is to return a policy that achieves large expected reward with a small constraint violation.
We propose a generic primal-dual framework that allows us to bound the reward sub-optimality and constraint violation for arbitrary algorithms.
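The generic primal-dual recipe can be sketched as follows, with policy_opt and eval_cost as hypothetical oracles for the primal policy update and the expected-cost evaluation:

```python
def primal_dual_cmdp(policy_opt, eval_cost, budget, lam=0.0, lr=0.1, iters=100):
    # Alternate a primal policy update against the Lagrangian reward r - lam * c
    # with projected dual ascent on the (estimated) constraint violation.
    policy = None
    for _ in range(iters):
        policy = policy_opt(lam)                # primal: best response to r - lam * c
        violation = eval_cost(policy) - budget  # expected cost minus allowed budget
        lam = max(lam + lr * violation, 0.0)    # dual: projected gradient ascent
    return policy, lam
```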
arXiv Detail & Related papers (2022-04-11T15:08:09Z)
- Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for black-box function optimization.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
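Spelled out, the misspecification condition in the abstract reads (with $B$ denoting the assumed RKHS norm bound):

$$\exists\, \tilde f \in \mathcal{H}_k \ \text{with}\ \|\tilde f\|_{\mathcal{H}_k} \le B \quad \text{such that} \quad \sup_{x \in \mathcal{X}} \big|f(x) - \tilde f(x)\big| \le \epsilon.$$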
arXiv Detail & Related papers (2021-11-09T09:00:02Z)
- Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Primal-Dual Approach [37.80609997145897]
Reinforcement learning is widely used in applications where one needs to perform sequential decisions while interacting with the environment.
The problem becomes more challenging when the decision requirement includes satisfying some safety constraints.
Various algorithms are available to solve constrained MDP (CMDP) problems in a model-free manner, achieving $\epsilon$-optimal cumulative reward with $\epsilon$-feasible policies.
An important question here is whether we can achieve $epsilon$-optimal cumulative reward with zero constraint violations or not.
arXiv Detail & Related papers (2021-09-13T21:27:03Z)
- Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks [59.419152768018506]
We show that any optimal policy necessarily satisfies the k-SP constraint.
We propose a novel cost function that penalizes a policy for violating the SP constraint, instead of excluding it completely.
Our experiments on MiniGrid, DeepMind Lab, Atari, and Fetch show that the proposed method significantly improves proximal policy optimization (PPO).
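As a rough illustration of penalizing rather than excluding (the exact cost function is the paper's and is not reproduced here), the shaping has the flavor of:

```python
def shaped_reward(env_reward, violates_ksp, cost=0.1):
    # Hypothetical shaping: subtract an assumed cost when the k-SP constraint is
    # violated, so violating policies are discouraged but remain explorable.
    return env_reward - cost * float(violates_ksp)
```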
arXiv Detail & Related papers (2021-07-13T21:39:21Z)
- Regret and Cumulative Constraint Violation Analysis for Online Convex Optimization with Long Term Constraints [24.97580261894342]
This paper considers online convex optimization with long term constraints, where constraints can be violated in intermediate rounds, but need to be satisfied in the long run.
A novel algorithm is first proposed, and it achieves an $\mathcal{O}(T^{\max\{c,1-c\}})$ bound for static regret and an $\mathcal{O}(T^{(1-c)/2})$ bound for cumulative constraint violation, where $c \in (0,1)$ is a user-defined trade-off parameter.
arXiv Detail & Related papers (2021-06-09T15:18:06Z)
- Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic-pessimistic linear bandit (OPLB).
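A rough arm-level sketch of the optimistic-pessimistic pairing (the actual OPLB operates on policies, i.e., distributions over arms; the ridge statistics A, b and the width beta are assumptions):

```python
import numpy as np

def oplb_pick(arms, A_r, b_r, A_c, b_c, tau, beta=1.0):
    th_r, th_c = np.linalg.solve(A_r, b_r), np.linalg.solve(A_c, b_c)
    best, best_val = None, -np.inf
    for x in arms:
        w_r = beta * np.sqrt(x @ np.linalg.solve(A_r, x))  # reward confidence width
        w_c = beta * np.sqrt(x @ np.linalg.solve(A_c, x))  # cost confidence width
        if x @ th_c + w_c <= tau:                          # pessimistic feasibility check
            val = x @ th_r + w_r                           # optimistic reward (UCB)
            if val > best_val:
                best, best_val = x, val
    return best                                            # None if nothing looks safely feasible
```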
arXiv Detail & Related papers (2020-06-17T22:32:19Z)
- Online DR-Submodular Maximization with Stochastic Cumulative Constraints [17.660958043781154]
We consider online continuous DR-submodular maximization with stochastic long-term linear constraints.
We propose the Online Lagrangian Frank-Wolfe (OLFW) algorithm to solve this class of online problems.
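One step of a Lagrangian Frank-Wolfe update in this setting can be sketched as below, assuming the domain $[0,1]^d$, a linear constraint $\langle a, x\rangle \le b$, and a virtual queue Q; the step size and queue tuning are placeholders, not the paper's exact choices.

```python
import numpy as np

def olfw_step(grad, a, b, x, Q, eta=0.1):
    score = grad - Q * a                    # Lagrangian-adjusted (DR-submodular) gradient
    v = (score > 0).astype(float)           # linear maximization oracle over [0, 1]^d
    x_new = x + eta * (v - x)               # Frank-Wolfe convex-combination step
    Q_new = max(Q + a @ x_new - b, 0.0)     # virtual queue for the long-term constraint
    return x_new, Q_new
```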
arXiv Detail & Related papers (2020-05-29T17:55:42Z)