Policy Learning for Robust Markov Decision Process with a Mismatched
Generative Model
- URL: http://arxiv.org/abs/2203.06587v2
- Date: Tue, 15 Mar 2022 03:32:13 GMT
- Title: Policy Learning for Robust Markov Decision Process with a Mismatched
Generative Model
- Authors: Jialian Li, Tongzheng Ren, Dong Yan, Hang Su, Jun Zhu
- Abstract summary: In high-stakes scenarios like medical treatment and auto-piloting, it is risky or even infeasible to collect online experimental data to train the agent.
We consider policy learning for Robust Markov Decision Processes (RMDP), where the agent seeks a policy that is robust to unexpected perturbations of the environment.
Our goal is to identify a near-optimal robust policy for the perturbed testing environment, which introduces additional technical difficulties.
- Score: 42.28001762749647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In high-stakes scenarios like medical treatment and auto-piloting, it is risky
or even infeasible to collect online experimental data to train the agent.
Simulation-based training can alleviate this issue, but it may suffer from the
inherent mismatch between the simulator and the real environment. It is therefore
imperative to use the simulator to learn a policy that remains robust under
real-world deployment. In this work, we consider policy learning for Robust Markov
Decision Processes (RMDP), where the agent seeks a policy that is robust to
unexpected perturbations of the environment. Specifically, we focus
on the setting where the training environment can be characterized as a
generative model and a constrained perturbation can be added to the model
during testing. Our goal is to identify a near-optimal robust policy for the
perturbed testing environment, which introduces additional technical
difficulties: we need to simultaneously estimate the training-environment
uncertainty from samples and find the worst-case perturbation for testing. To
address this, we propose a generic method that formalizes the perturbation
as an opponent, yielding a two-player zero-sum game, and we further show that the
Nash equilibrium of this game corresponds to the robust policy. We prove that, with a
polynomial number of samples from the generative model, our algorithm finds
a near-optimal robust policy with high probability. Thanks to the game-theoretic
formulation, our method handles general perturbations under mild assumptions
and extends to more complex problems such as robust partially observable Markov
decision processes.
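To make the game-theoretic formulation concrete, below is a minimal Python sketch of robust value iteration on a tabular MDP whose transition kernel is estimated from generative-model samples: the perturbation acts as an opponent that, at every Bellman backup, picks the worst kernel inside an L1 ball around the empirical kernel. This is not the paper's algorithm; the choice of uncertainty set, the function names, and all parameters are illustrative assumptions.
```python
import numpy as np

def worst_case_return(p_hat, v, delta):
    """Adversarial backup: within an L1 ball of radius `delta` around the
    empirical distribution `p_hat`, move probability mass from the
    highest-value successor states to the lowest-value one."""
    p = p_hat.copy()
    order = np.argsort(v)                      # successor states, worst value first
    worst = order[0]
    budget = min(delta / 2.0, 1.0 - p[worst])  # movable mass inside the L1 ball
    for s_next in order[::-1]:                 # drain the best states first
        if s_next == worst or budget <= 0:
            break
        moved = min(p[s_next], budget)
        p[s_next] -= moved
        p[worst] += moved
        budget -= moved
    return p @ v

def robust_value_iteration(r, p_hat, delta, gamma=0.95, iters=500):
    """Max-min Bellman iteration: the agent maximizes over actions while the
    perturbation opponent minimizes over kernels in the uncertainty set.

    r:     reward table, shape [S, A]
    p_hat: empirical transition kernel from generative-model samples, [S, A, S]
    """
    n_states, n_actions = r.shape
    v = np.zeros(n_states)
    for _ in range(iters):
        q = np.array([[r[s, a] + gamma * worst_case_return(p_hat[s, a], v, delta)
                       for a in range(n_actions)]
                      for s in range(n_states)])
        v = q.max(axis=1)                      # agent's best response to the worst kernel
    return q.argmax(axis=1), v                 # greedy robust policy and its value
```
In this sketch, swapping the L1 ball for another constrained perturbation set only changes `worst_case_return`; the outer max-min iteration, which plays the role of the two-player zero-sum game in the abstract, stays the same.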
Related papers
- Efficient Imitation Learning with Conservative World Models [54.52140201148341]
We tackle the problem of policy learning from expert demonstrations without a reward function.
We re-frame imitation learning as a fine-tuning problem, rather than a pure reinforcement learning one.
arXiv Detail & Related papers (2024-05-21T20:53:18Z)
- Robust Deep Reinforcement Learning with Adaptive Adversarial Perturbations in Action Space [3.639580365066386]
We propose an adaptive adversarial coefficient framework to adjust the effect of the adversarial perturbation during training.
The appealing feature of our method is that it is simple to deploy in real-world applications and does not require accessing the simulator in advance.
The experiments in MuJoCo show that our method can improve the training stability and learn a robust policy when migrated to different test environments.
arXiv Detail & Related papers (2024-05-20T12:31:11Z)
- Learning Optimal Deterministic Policies with Stochastic Policy Gradients [62.81324245896716]
Policy gradient (PG) methods are successful approaches to deal with continuous reinforcement learning (RL) problems.
In common practice, stochastic (hyper)policies are learned only to deploy their deterministic version.
We show how to tune the exploration level used for learning to optimize the trade-off between the sample complexity and the performance of the deployed deterministic policy.
arXiv Detail & Related papers (2024-05-03T16:45:15Z)
- Sample-Efficient Robust Multi-Agent Reinforcement Learning in the Face of Environmental Uncertainty [40.55653383218379]
This work focuses on learning in distributionally robust Markov games (RMGs).
We propose a sample-efficient model-based algorithm (DRNVI) with finite-sample complexity guarantees for learning robust variants of various notions of game-theoretic equilibria.
arXiv Detail & Related papers (2024-04-29T17:51:47Z)
- Bayesian Risk-Averse Q-Learning with Streaming Observations [7.330349128557128]
We consider a robust reinforcement learning problem, where a learning agent learns from a simulated training environment.
Observations from the real environment, which is out of the agent's control, arrive periodically.
We develop a multi-stage Bayesian risk-averse Q-learning algorithm to solve the BRMDP with streaming observations from the real environment.
arXiv Detail & Related papers (2023-05-18T20:48:50Z)
- Max-Min Off-Policy Actor-Critic Method Focusing on Worst-Case Robustness to Model Misspecification [22.241676350331968]
This study focuses on scenarios involving a simulation environment with uncertainty parameters and the set of their possible values.
The aim is to optimize the worst-case performance on the uncertainty parameter set to guarantee the performance in the corresponding real-world environment.
Experiments in multi-joint dynamics with contact (MuJoCo) environments show that the proposed method exhibits worst-case performance superior to several baseline approaches.
arXiv Detail & Related papers (2022-11-07T10:18:31Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Sample Complexity of Robust Reinforcement Learning with a Generative Model [0.0]
We propose a model-based reinforcement learning (RL) algorithm for learning an $\epsilon$-optimal robust policy.
We consider three different forms of uncertainty sets, characterized by the total variation distance, chi-square divergence, and KL divergence.
In addition to the sample complexity results, we also present a formal analytical argument on the benefit of using robust policies.
arXiv Detail & Related papers (2021-12-02T18:55:51Z)
- Deep Reinforcement Learning amidst Lifelong Non-Stationarity [67.24635298387624]
We show that an off-policy RL algorithm can reason about and tackle lifelong non-stationarity.
Our method leverages latent variable models to learn a representation of the environment from current and past experiences.
We also introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.
arXiv Detail & Related papers (2020-06-18T17:34:50Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
In contrast, reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but they are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.