Resilient Constrained Reinforcement Learning
- URL: http://arxiv.org/abs/2312.17194v2
- Date: Fri, 29 Dec 2023 21:29:32 GMT
- Title: Resilient Constrained Reinforcement Learning
- Authors: Dongsheng Ding and Zhengyan Huan and Alejandro Ribeiro
- Abstract summary: We study a class of constrained reinforcement learning (RL) problems in which multiple constraint specifications are not identified before training.
It is challenging to identify appropriate constraint specifications due to the undefined trade-off between the reward maximization objective and the constraint satisfaction.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
- Score: 87.4374430686956
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study a class of constrained reinforcement learning (RL) problems in which
multiple constraint specifications are not identified before training. It is
challenging to identify appropriate constraint specifications due to the
undefined trade-off between the reward maximization objective and the
constraint satisfaction, which is ubiquitous in constrained decision-making. To
tackle this issue, we propose a new constrained RL approach that searches for
policy and constraint specifications together. This method features the
adaptation of relaxing the constraint according to a relaxation cost introduced
in the learning objective. Since this feature mimics how ecological systems
adapt to disruptions by altering their operation, our approach is termed resilient
constrained RL. Specifically, we provide a set of sufficient conditions that
balance constraint satisfaction and reward maximization via a notion of
resilient equilibrium, propose a tractable formulation of resilient constrained
policy optimization that takes this equilibrium as an optimal solution, and
advocate two resilient constrained policy search algorithms with non-asymptotic
convergence guarantees on the optimality gap and constraint satisfaction.
Furthermore, we demonstrate the merits and the effectiveness of our approach in
computational experiments.
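To make the relaxation mechanism concrete, the toy sketch below runs a primal-dual loop in which the constraint bound is relaxed by a variable u chosen so that the marginal relaxation cost matches the dual variable, mirroring the resilient-equilibrium balance described above. It is a hedged illustration, not the paper's algorithm: the scalar "policy" parameter, reward, cost, bound b, and quadratic relaxation cost h(u) = (alpha/2) * u^2 are all hypothetical stand-ins.

```python
# Hedged toy sketch of a resilient primal-dual update; the scalar functions
# and hyperparameters below are illustrative stand-ins, not the paper's method.

def reward(theta):
    """Toy concave reward surrogate standing in for the expected return."""
    return -(theta - 2.0) ** 2

def cost(theta):
    """Toy convex cost; the relaxed constraint is cost(theta) <= b + u."""
    return theta ** 2

b, alpha, eta = 1.0, 5.0, 0.05         # nominal bound, relaxation-cost weight, step size
theta, lam, u = 0.0, 0.5, 0.0          # policy parameter, dual variable, relaxation

for _ in range(2000):
    # Lagrangian of the relaxed problem:
    #   reward(theta) - lam * (cost(theta) - b - u) - (alpha / 2) * u**2
    grad_theta = -2.0 * (theta - 2.0) - lam * 2.0 * theta
    theta += eta * grad_theta                            # primal ascent on the Lagrangian
    lam = max(0.0, lam + eta * (cost(theta) - b - u))    # dual ascent on the constraint violation
    u = lam / alpha                                      # balance marginal relaxation cost h'(u) with lam

print(f"theta={theta:.3f}  reward={reward(theta):.3f}  cost={cost(theta):.3f}  "
      f"relaxed bound={b + u:.3f}  lambda={lam:.3f}")
```

In this sketch the relaxation weight alpha encodes the trade-off: a small alpha makes the constraint cheap to relax, while a large alpha drives u toward zero and recovers the standard constrained problem.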
Related papers
- Probabilistic Satisfaction of Temporal Logic Constraints in Reinforcement Learning via Adaptive Policy-Switching [0.0]
Constrained Reinforcement Learning (CRL) is a subset of machine learning that introduces constraints into the traditional reinforcement learning (RL) framework.
We propose a novel framework that relies on switching between pure learning (reward) and constraint satisfaction.
arXiv Detail & Related papers (2024-10-10T15:19:45Z)
- Exterior Penalty Policy Optimization with Penalty Metric Network under Constraints [52.37099916582462]
In Constrained Reinforcement Learning (CRL), agents explore the environment to learn the optimal policy while satisfying constraints.
We propose a theoretically guaranteed penalty function method, Exterior Penalty Policy Optimization (EPO), with adaptive penalties generated by a Penalty Metric Network (PMN).
PMN responds appropriately to varying degrees of constraint violations, enabling efficient constraint satisfaction and safe exploration.
arXiv Detail & Related papers (2024-07-22T10:57:32Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Evolving Constrained Reinforcement Learning Policy [5.4444944707433525]
We propose a novel evolutionary constrained reinforcement learning (ECRL) algorithm, which adaptively balances the reward and constraint violation with ranking.
Experiments on robotic control benchmarks show that ECRL achieves outstanding performance compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2023-04-19T03:54:31Z)
- Penalized Proximal Policy Optimization for Safe Reinforcement Learning [68.86485583981866]
We propose Penalized Proximal Policy Optimization (P3O), which solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem.
P3O utilizes a simple-yet-effective penalty function to eliminate cost constraints and removes the trust-region constraint by the clipped surrogate objective.
We show that P3O outperforms state-of-the-art algorithms with respect to both reward improvement and constraint satisfaction on a set of constrained locomotion tasks; a minimal sketch of this exact-penalty idea appears after this list.
arXiv Detail & Related papers (2022-05-24T06:15:51Z)
- COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation [73.17078343706909]
We consider the offline constrained reinforcement learning (RL) problem, in which the agent aims to compute a policy that maximizes expected return while satisfying given cost constraints, learning only from a pre-collected dataset.
We present an offline constrained RL algorithm that optimizes the policy in the space of stationary distributions.
Our algorithm, COptiDICE, directly estimates the stationary distribution corrections of the optimal policy with respect to returns, while constraining the cost upper bound, with the goal of yielding a cost-conservative policy for actual constraint satisfaction.
arXiv Detail & Related papers (2022-04-19T15:55:47Z)
- A Globally Convergent Evolutionary Strategy for Stochastic Constrained Optimization with Applications to Reinforcement Learning [0.6445605125467573]
Evolutionary strategies have been shown to achieve competitive levels of performance for complex optimization problems in reinforcement learning.
However, convergence guarantees for evolutionary strategies on constrained problems are lacking in the literature.
arXiv Detail & Related papers (2022-02-21T17:04:51Z)
- Constrained Reinforcement Learning for Dynamic Optimization under Uncertainty [1.5797349391370117]
Dynamic real-time optimization (DRTO) is challenging because optimal operating conditions must be computed in real time.
The main bottleneck in the industrial application of DRTO is the presence of uncertainty.
We present a constrained reinforcement learning (RL) based approach to accommodate these difficulties.
arXiv Detail & Related papers (2020-06-04T10:17:35Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum, resulting in optimal yet physically feasible robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
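As a companion to the Penalized Proximal Policy Optimization (P3O) entry above, here is a minimal, hedged sketch of the general exact-penalty idea it describes (our reading of the summary, not the authors' code): the cost constraint is folded into the objective through a ReLU penalty on the estimated constraint violation, alongside a PPO-style clipped reward surrogate. The tensor shapes, the cost estimate, the budget b, and the penalty weight kappa are illustrative assumptions.

```python
# Illustrative exact-penalty surrogate in the spirit of P3O; random tensors
# stand in for rollout statistics, and all hyperparameters are hypothetical.
import torch

logits = torch.zeros(64, requires_grad=True)        # stand-in for policy parameters
ratio = torch.exp(logits)                            # pi_new / pi_old probability ratios (all 1 at init)
adv_r, adv_c = torch.randn(64), torch.randn(64)      # reward and cost advantages from rollouts
jc_est, b, kappa, clip = 1.2, 1.0, 20.0, 0.2         # estimated expected cost, budget, penalty weight, clip range

clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip)
reward_surrogate = torch.min(ratio * adv_r, clipped * adv_r).mean()   # PPO-style clipped reward term
cost_surrogate = (ratio * adv_c).mean() + jc_est - b                  # first-order estimate of cost minus budget
loss = -(reward_surrogate - kappa * torch.relu(cost_surrogate))       # ReLU penalty is exact for large enough kappa
loss.backward()                                                       # gradients flow back to the policy parameters
print(loss.item(), logits.grad.norm().item())
```

Because the penalty term is zero whenever the estimated cost stays within budget, the gradient reduces to the usual clipped reward surrogate in the feasible region and only activates the cost term once the budget is exceeded.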