Adversarially Trained Actor Critic for offline CMDPs
- URL: http://arxiv.org/abs/2401.00629v1
- Date: Mon, 1 Jan 2024 01:44:58 GMT
- Title: Adversarially Trained Actor Critic for offline CMDPs
- Authors: Honghao Wei, Xiyue Peng, Xin Liu, Arnob Ghosh
- Abstract summary: We propose a Safe Adversarial Trained Actor Critic (SATAC) algorithm for offline reinforcement learning (RL).
Our framework provides both theoretical guarantees and a robust deep-RL implementation.
We demonstrate that SATAC can produce a policy that outperforms the behavior policy while maintaining the same level of safety.
- Score: 10.861449694255137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a Safe Adversarial Trained Actor Critic (SATAC) algorithm for
offline reinforcement learning (RL) with general function approximation in the
presence of limited data coverage. SATAC operates as a two-player Stackelberg
game featuring a refined objective function. The actor (leader player)
optimizes the policy against two adversarially trained value critics (follower
players), who focus on scenarios where the actor's performance is inferior to
the behavior policy. Our framework provides both theoretical guarantees and a
robust deep-RL implementation. Theoretically, we demonstrate that when the
actor employs a no-regret optimization oracle, SATAC achieves two guarantees:
(i) For the first time in the offline RL setting, we establish that SATAC can
produce a policy that outperforms the behavior policy while maintaining the
same level of safety, which is critical to designing an algorithm for offline
RL. (ii) We demonstrate that the algorithm guarantees policy improvement across
a broad range of hyperparameters, indicating its practical robustness.
Additionally, we offer a practical version of SATAC and compare it with
existing state-of-the-art offline safe-RL algorithms in continuous control
environments. SATAC outperforms all baselines across a range of tasks, thus
validating the theoretical performance.
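The abstract describes a leader-follower training loop: the actor proposes a policy, and two value critics (one for reward, one for safety cost) are trained adversarially to emphasize situations where that policy looks worse, or less safe, than the behavior policy. The sketch below is a minimal illustration of that alternation on a synthetic offline batch; the network sizes, loss weights, and fixed Lagrange-style penalty are assumptions for illustration, not the authors' SATAC implementation or objective.

```python
# Illustrative sketch only: a generic adversarial actor-critic loop for offline
# constrained RL, loosely following the Stackelberg structure described in the
# abstract (actor as leader, two pessimistic critics as followers). All names,
# network sizes, and loss weights are hypothetical, not SATAC's actual code.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

state_dim, action_dim, gamma, cost_budget = 4, 2, 0.99, 0.1

actor = mlp(state_dim, action_dim)              # deterministic policy, for brevity
reward_critic = mlp(state_dim + action_dim, 1)  # follower 1: Q_r
cost_critic = mlp(state_dim + action_dim, 1)    # follower 2: Q_c
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(
    list(reward_critic.parameters()) + list(cost_critic.parameters()), lr=3e-4)

# Synthetic offline batch (s, a_behavior, r, c, s'); a real run would use logged data.
N = 256
s, a_b = torch.randn(N, state_dim), torch.randn(N, action_dim)
r, c, s2 = torch.randn(N, 1), torch.rand(N, 1), torch.randn(N, state_dim)

def q(net, states, actions):
    return net(torch.cat([states, actions], dim=-1))

lam = 1.0  # fixed Lagrange-style weight trading reward against the cost constraint

for step in range(1000):
    # Followers: fit Bellman targets on the logged actions, plus an adversarial
    # term that favours regions where the actor looks worse than the behavior data.
    a_pi = actor(s).detach()
    with torch.no_grad():
        target_r = r + gamma * q(reward_critic, s2, actor(s2))
        target_c = c + gamma * q(cost_critic, s2, actor(s2))
    bellman = ((q(reward_critic, s, a_b) - target_r) ** 2).mean() + \
              ((q(cost_critic, s, a_b) - target_c) ** 2).mean()
    # Pessimism: push Q_r(actor) down and Q_c(actor) up relative to behavior actions.
    pessimism = (q(reward_critic, s, a_pi) - q(reward_critic, s, a_b)).mean() \
              - (q(cost_critic, s, a_pi) - q(cost_critic, s, a_b)).mean()
    critic_opt.zero_grad()
    (bellman + 0.5 * pessimism).backward()
    critic_opt.step()

    # Leader: improve the policy against the pessimistic critics, penalising
    # estimated constraint violations.
    a_pi = actor(s)
    actor_loss = -q(reward_critic, s, a_pi).mean() \
                 + lam * (q(cost_critic, s, a_pi).mean() - cost_budget)
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```

A faithful implementation would sample minibatches from a logged dataset, use a stochastic policy with a no-regret actor update, and set the pessimism and constraint weights as in the paper's practical version; this sketch only conveys the leader/follower alternation.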
Related papers
- Convergence and Sample Complexity of First-Order Methods for Agnostic Reinforcement Learning [66.4260157478436]
We study reinforcement learning in the policy learning setting. The goal is to find a policy whose performance is competitive with the best policy in a given class of interest.
arXiv Detail & Related papers (2025-07-06T14:40:05Z)
- Stepwise Alignment for Constrained Language Model Policy Optimization [12.986006070964772]
Safety and trustworthiness are indispensable requirements for real-world applications of AI systems using large language models (LLMs).
This paper formulates human value alignment as an optimization problem of the language model policy to maximize reward under a safety constraint.
One key idea behind SACPO, supported by theory, is that the optimal policy incorporating reward and safety can be directly obtained from a reward-aligned policy.
arXiv Detail & Related papers (2024-04-17T03:44:58Z)
- Safety Optimized Reinforcement Learning via Multi-Objective Policy Optimization [3.425378723819911]
Safe reinforcement learning (Safe RL) refers to a class of techniques that aim to prevent RL algorithms from violating constraints.
This paper introduces a novel model-free Safe RL algorithm formulated within the multi-objective policy optimization framework.
arXiv Detail & Related papers (2024-02-23T08:58:38Z)
- Probabilistic Reach-Avoid for Bayesian Neural Networks [71.67052234622781]
We show that an optimal synthesis algorithm can provide more than a four-fold increase in the number of certifiable states.
The algorithm is able to provide more than a three-fold increase in the average guaranteed reach-avoid probability.
arXiv Detail & Related papers (2023-10-03T10:52:21Z)
- Iterative Reachability Estimation for Safe Reinforcement Learning [23.942701020636882]
We propose a new framework, Reachability Estimation for Safe Policy Optimization (RESPO), for safety-constrained reinforcement learning (RL) environments.
In the feasible set where there exist violation-free policies, we optimize for rewards while maintaining persistent safety.
We evaluate the proposed methods on a diverse suite of safe RL environments from Safety Gym, PyBullet, and MuJoCo.
arXiv Detail & Related papers (2023-09-24T02:36:42Z)
- Safe Reinforcement Learning with Dual Robustness [10.455148541147796]
Reinforcement learning (RL) agents are vulnerable to adversarial disturbances.
We propose a systematic framework to unify safe RL and robust RL.
We also design a deep RL algorithm for practical implementation, called dually robust actor-critic (DRAC).
arXiv Detail & Related papers (2023-09-13T09:34:21Z)
- A Multiplicative Value Function for Safe and Efficient Reinforcement Learning [131.96501469927733]
We propose a safe model-free RL algorithm with a novel multiplicative value function consisting of a safety critic and a reward critic.
The safety critic predicts the probability of constraint violation and discounts the reward critic that only estimates constraint-free returns.
We evaluate our method in four safety-focused environments, including classical RL benchmarks augmented with safety constraints and robot navigation tasks with images and raw Lidar scans as observations.
arXiv Detail & Related papers (2023-03-07T18:29:15Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Soft Actor-Critic with Cross-Entropy Policy Optimization [0.45687771576879593]
We propose Soft Actor-Critic with Cross-Entropy Policy Optimization (SAC-CEPO).
SAC-CEPO uses the Cross-Entropy Method (CEM) to optimize the policy network of SAC.
We show that SAC-CEPO achieves competitive performance against the original SAC.
arXiv Detail & Related papers (2021-12-21T11:38:12Z)
- Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs [71.47895794305883]
We study the problem of Safe Policy Improvement (SPI) under constraints in the offline Reinforcement Learning setting.
We present an SPI method for this RL setting that takes into account the user's preferences for handling the trade-offs between different reward signals.
arXiv Detail & Related papers (2021-05-31T21:04:21Z)
- CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee [61.176159046544946]
In safe reinforcement learning (SRL) problems, an agent explores the environment to maximize an expected total reward and avoids violation of certain constraints.
This is the first analysis of SRL algorithms with guaranteed convergence to globally optimal policies.
arXiv Detail & Related papers (2020-11-11T16:05:14Z)
- Kalman meets Bellman: Improving Policy Evaluation through Value Tracking [59.691919635037216]
Policy evaluation is a key process in Reinforcement Learning (RL).
We devise an optimization method called Kalman Optimization for Value Approximation (KOVA).
KOVA minimizes a regularized objective function that concerns both parameter and noisy return uncertainties.
arXiv Detail & Related papers (2020-02-17T13:30:43Z)