Bring Your Own (Non-Robust) Algorithm to Solve Robust MDPs by Estimating the Worst Kernel
- URL: http://arxiv.org/abs/2306.05859v2
- Date: Mon, 12 Feb 2024 11:19:09 GMT
- Title: Bring Your Own (Non-Robust) Algorithm to Solve Robust MDPs by Estimating the Worst Kernel
- Authors: Kaixin Wang, Uri Gadot, Navdeep Kumar, Kfir Levy, Shie Mannor
- Abstract summary: We present EWoK, a novel online approach to solve RMDPs that Estimates the Worst transition Kernel to learn robust policies.
EWoK achieves robustness by simulating the worst scenarios for the agent while retaining complete flexibility in the learning process.
Our experiments, spanning from simple Cartpole to high-dimensional DeepMind Control Suite environments, demonstrate the effectiveness and applicability of EWoK.
- Score: 46.373217780462944
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust Markov Decision Processes (RMDPs) provide a framework for sequential
decision-making that is robust to perturbations on the transition kernel.
However, current RMDP methods are often limited to small-scale problems,
hindering their use in high-dimensional domains. To bridge this gap, we present
EWoK, a novel online approach to solve RMDPs that Estimates the Worst transition
Kernel to learn robust policies. Unlike previous works that regularize the
policy or value updates, EWoK achieves robustness by simulating the worst
scenarios for the agent while retaining complete flexibility in the learning
process. Notably, EWoK can be applied on top of any off-the-shelf non-robust RL
algorithm, enabling easy scaling to high-dimensional domains.
Our experiments, spanning from simple Cartpole to high-dimensional DeepMind
Control Suite environments, demonstrate the effectiveness and applicability of
the EWoK paradigm as a practical method for learning robust policies.
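As a rough illustration of this paradigm, the sketch below estimates a worst-case kernel around a nominal tabular kernel and samples the transitions that an otherwise unmodified, non-robust learner would train on. It is a minimal sketch under stated assumptions, not the authors' EWoK implementation: the tabular setting, the KL-style exponential tilting toward low-value next states, the temperature parameter, and all names are illustrative.

```python
# Conceptual sketch: "estimate a worst-case transition kernel, then let any
# non-robust learner train against it." NOT the authors' EWoK code; the tabular
# setting, exponential tilting, and temperature are illustrative assumptions.
import numpy as np

def worst_case_kernel(P_nominal, V, temperature=1.0):
    """Tilt a nominal kernel P_nominal[s, a, s'] toward low-value next states.

    Exponential tilting by -V(s')/temperature is the closed-form worst case for a
    KL-regularized perturbation of the nominal kernel (assumed here purely for
    illustration); a smaller temperature yields a more adversarial kernel.
    """
    weights = P_nominal * np.exp(-V[None, None, :] / temperature)
    return weights / weights.sum(axis=-1, keepdims=True)

def sample_next_state(P_worst, s, a, rng):
    """Sample the transition the agent actually experiences during training."""
    return rng.choice(P_worst.shape[-1], p=P_worst[s, a])

# Usage sketch: only experience generation changes; any off-the-shelf non-robust
# RL algorithm can then be trained on the perturbed transitions.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
P_nominal = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
V_estimate = rng.normal(size=n_states)  # current value estimate from the learner
P_worst = worst_case_kernel(P_nominal, V_estimate, temperature=0.5)
next_s = sample_next_state(P_worst, s=0, a=1, rng=rng)
```

In the paper's online setting the worst-case scenarios are simulated for the agent rather than computed from an explicit tabular kernel; the sketch only conveys the separation between worst-kernel estimation and the untouched, non-robust learning step.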
Related papers
- Online MDP with Transition Prototypes: A Robust Adaptive Approach [8.556972018137147]
We consider an online robust Markov Decision Process (MDP) where we have access to finitely many prototypes of the underlying transition kernel.
We propose an algorithm that efficiently identifies the true underlying transition kernel while guaranteeing the performance of the corresponding robust policy (a minimal tabular sketch of robust planning over finitely many candidate kernels appears after this list).
arXiv Detail & Related papers (2024-12-18T17:19:55Z) - Robust Offline Reinforcement Learning with Linearly Structured $f$-Divergence Regularization [10.465789490644031]
We propose a novel framework, the robust regularized Markov decision process ($d$-RRMDP).
For the offline RL setting, we develop a family of algorithms, Robust Regularized Pessimistic Value Iteration (R2PVI).
arXiv Detail & Related papers (2024-11-27T18:57:03Z) - Efficiently Training Deep-Learning Parametric Policies using Lagrangian Duality [55.06411438416805]
Constrained Markov Decision Processes (CMDPs) are critical in many high-stakes applications.
This paper introduces a novel approach, Two-Stage Deep Decision Rules (TS-DDR), to efficiently train parametric actor policies.
It is shown to enhance solution quality and to reduce computation times by several orders of magnitude when compared to current state-of-the-art methods.
arXiv Detail & Related papers (2024-05-23T18:19:47Z) - Policy Gradient for Rectangular Robust Markov Decision Processes [62.397882389472564]
We introduce robust policy gradient (RPG), a policy-based method that efficiently solves rectangular robust Markov decision processes (MDPs).
Our resulting RPG can be estimated from data with the same time complexity as its non-robust equivalent.
arXiv Detail & Related papers (2023-01-31T12:40:50Z) - Policy Gradient in Robust MDPs with Global Convergence Guarantee [13.40471012593073]
Robust Markov decision processes (RMDPs) provide a promising framework for computing reliable policies in the face of model errors.
This paper proposes a new Double-Loop Robust Policy Gradient (DRPG), the first generic policy gradient method for RMDPs.
In contrast with prior robust policy gradient algorithms, DRPG monotonically reduces approximation errors to guarantee convergence to a globally optimal policy.
arXiv Detail & Related papers (2022-12-20T17:14:14Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for TMDPs, obtained as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z) - Safe Exploration by Solving Early Terminated MDP [77.10563395197045]
We introduce a new approach to address safe RL problems under the framework of the Early Terminated MDP (ET-MDP).
We first define the ET-MDP as an unconstrained MDP with the same optimal value function as its corresponding CMDP.
An off-policy algorithm based on context models is then proposed to solve the ET-MDP, which thereby solves the corresponding CMDP with better performance and improved learning efficiency.
arXiv Detail & Related papers (2021-07-09T04:24:40Z) - Robust Reinforcement Learning using Least Squares Policy Iteration with
Provable Performance Guarantees [3.8073142980733]
This paper addresses the problem of model-free reinforcement learning for Robust Markov Decision Processes (RMDPs) with large state spaces.
We first propose the Robust Least Squares Policy Evaluation algorithm, which is a multi-step online model-free learning algorithm for policy evaluation.
We then propose Robust Least Squares Policy Iteration (RLSPI) algorithm for learning the optimal robust policy.
arXiv Detail & Related papers (2020-06-20T16:26:50Z)
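Several entries above (the transition-prototypes and rectangular robust MDP papers) revolve around rectangular uncertainty sets, so a minimal tabular sketch of robust value iteration over finitely many candidate kernels may help fix ideas. It is not taken from any of the listed papers; the finite candidate set, the discount factor gamma, and all names are assumptions.

```python
# Minimal sketch of robust value iteration under an (s, a)-rectangular uncertainty
# set given by finitely many candidate transition kernels. Illustrative only; not
# taken from any of the papers listed above.
import numpy as np

def robust_value_iteration(kernels, rewards, gamma=0.95, n_iters=500):
    """kernels: array of shape [K, S, A, S] holding K candidate transition kernels.
    rewards: array of shape [S, A]. Returns the robust optimal value V[s], where at
    every (s, a) nature independently picks the worst candidate (rectangularity)."""
    V = np.zeros(kernels.shape[1])
    for _ in range(n_iters):
        next_values = kernels @ V                      # expected next value, [K, S, A]
        Q = rewards + gamma * next_values.min(axis=0)  # worst case over candidates
        V = Q.max(axis=1)                              # greedy over actions
    return V

# Toy usage
rng = np.random.default_rng(0)
K, S, A = 3, 4, 2
kernels = rng.dirichlet(np.ones(S), size=(K, S, A))  # K candidate kernels
rewards = rng.uniform(size=(S, A))
V_robust = robust_value_iteration(kernels, rewards)
```

For richer (s, a)-rectangular sets such as L1-, KL-, or $f$-divergence balls, the inner minimum becomes a small convex problem per state-action pair rather than a minimum over a finite list.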
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.