Pessimistic Model Selection for Offline Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2111.14346v1
- Date: Mon, 29 Nov 2021 06:29:49 GMT
- Title: Pessimistic Model Selection for Offline Deep Reinforcement Learning
- Authors: Chao-Han Huck Yang, Zhengling Qi, Yifan Cui, Pin-Yu Chen
- Abstract summary: Deep Reinforcement Learning (DRL) has demonstrated great potential in solving sequential decision-making problems in many applications.
One main barrier is the over-fitting issue that leads to poor generalizability of the policy learned by DRL.
We propose a pessimistic model selection (PMS) approach for offline DRL with a theoretical guarantee.
- Score: 56.282483586473816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Reinforcement Learning (DRL) has demonstrated great potential in solving sequential decision-making problems in many applications. Despite its promising performance, practical gaps exist when deploying DRL in real-world scenarios. One main barrier is the over-fitting issue that leads to poor generalizability of the policy learned by DRL. In particular, for offline DRL with observational data, model selection is a challenging task, as there is no ground truth available for assessing performance, in contrast with the online setting with simulated environments. In this work, we propose a pessimistic model selection (PMS) approach for offline DRL with a theoretical guarantee, which features a provably effective framework for finding the best policy among a set of candidate models. Two refined approaches are also proposed to address the potential bias of DRL models in identifying the optimal policy. Numerical studies demonstrate the superior performance of our approach over existing methods.
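The selection rule described in the abstract can be illustrated with a minimal sketch: among candidate policies, pick the one whose estimated value remains highest after subtracting an uncertainty term, i.e., the best lower confidence bound. This is only an illustrative sketch of the pessimism principle, not the paper's PMS algorithm; the `Candidate` container, the `alpha` penalty weight, and the example numbers are assumptions made here for exposition.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    """A candidate DRL model/policy with an off-policy value estimate."""
    name: str
    value_estimate: float   # estimated return of the policy on the offline data
    uncertainty: float      # e.g., width of a confidence interval for that estimate


def pessimistic_select(candidates: List[Candidate], alpha: float = 1.0) -> Candidate:
    """Pick the candidate maximizing a lower confidence bound (pessimistic value).

    alpha scales how strongly uncertainty is penalized; larger alpha is more
    conservative toward candidates whose values are estimated from poorly covered data.
    """
    return max(candidates, key=lambda c: c.value_estimate - alpha * c.uncertainty)


if __name__ == "__main__":
    pool = [
        Candidate("dqn_seed0", value_estimate=105.0, uncertainty=40.0),
        Candidate("cql_seed1", value_estimate=95.0, uncertainty=5.0),
        Candidate("bcq_seed2", value_estimate=90.0, uncertainty=2.0),
    ]
    best = pessimistic_select(pool, alpha=1.0)
    print(best.name)  # cql_seed1: lower raw estimate, but a better lower bound
```

In the example, the candidate with the highest raw estimate is not chosen because its estimate is also the most uncertain, which is exactly the failure mode a pessimistic criterion is designed to avoid.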
Related papers
- SeMOPO: Learning High-quality Model and Policy from Low-quality Offline Visual Datasets [32.496818080222646]
We propose a new approach to model-based offline reinforcement learning.
We provide a theoretical guarantee on the model uncertainty and a performance bound for SeMOPO.
Experimental results show that our method substantially outperforms all baseline methods.
arXiv Detail & Related papers (2024-06-13T15:16:38Z)
- Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage [32.578787778183546]
Offline reinforcement learning (RL) algorithms learn optimal policies using historical (offline) data.
One of the main challenges in offline RL is the distribution shift.
We propose two offline RL algorithms using the distributionally robust learning (DRL) framework.
arXiv Detail & Related papers (2023-10-27T19:19:30Z)
- Reparameterized Policy Learning for Multimodal Trajectory Optimization [61.13228961771765]
We investigate the challenge of parametrizing policies for reinforcement learning in high-dimensional continuous action spaces.
We propose a principled framework that models the continuous RL policy as a generative model of optimal trajectories.
We present a practical model-based RL method, which leverages the multimodal policy parameterization and learned world model.
arXiv Detail & Related papers (2023-07-20T09:05:46Z)
- Deploying Offline Reinforcement Learning with Human Feedback [34.11507483049087]
Reinforcement learning has shown promise for decision-making tasks in real-world applications.
One practical framework involves training parameterized policy models from an offline dataset and deploying them in an online environment.
This approach can be risky, since imperfect offline training can yield poorly performing RL models that take dangerous actions.
We propose an alternative framework that involves a human supervising the RL models and providing additional feedback in the online deployment phase.
arXiv Detail & Related papers (2023-03-13T12:13:16Z)
- Online Policy Optimization for Robust MDP [17.995448897675068]
Reinforcement learning (RL) has exceeded human performance in many synthetic settings such as video games and Go.
In this work, we consider the online robust Markov decision process (MDP) setting, in which the agent learns by interacting with an unknown nominal system.
We propose a robust optimistic policy optimization algorithm that is provably efficient.
arXiv Detail & Related papers (2022-09-28T05:18:20Z)
- Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL [82.93243616342275]
We introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE).
MABE is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary.
In experiments that require cross-domain generalization, we find that MABE outperforms prior methods.
arXiv Detail & Related papers (2021-06-16T20:48:49Z)
- Offline Reinforcement Learning from Images with Latent Space Models [60.69745540036375]
Offline reinforcement learning (RL) refers to the problem of learning policies from a static dataset of environment interactions.
We build on recent advances in model-based algorithms for offline RL, and extend them to high-dimensional visual observation spaces.
Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO in the unknown POMDP.
arXiv Detail & Related papers (2020-12-21T18:28:17Z)
- MOPO: Model-based Offline Policy Optimization [183.6449600580806]
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data.
We show that an existing model-based RL algorithm already produces significant gains in the offline setting.
We propose to modify existing model-based RL methods by training on rewards artificially penalized by the uncertainty of the dynamics (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-05-27T08:46:41Z)
- MOReL: Model-Based Offline Reinforcement Learning [49.30091375141527]
In offline reinforcement learning (RL), the goal is to learn a highly rewarding policy based solely on a dataset of historical interactions with the environment.
We present MOReL, an algorithmic framework for model-based offline RL.
We show that MOReL matches or exceeds state-of-the-art results in widely studied offline RL benchmarks.
arXiv Detail & Related papers (2020-05-12T17:52:43Z)
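Several of the model-based entries above, MOPO in particular, penalize the reward by the uncertainty of the learned dynamics before optimizing the policy. The sketch below illustrates that general idea using ensemble disagreement as a crude uncertainty proxy; the ensemble predictions, the penalty weight `lam`, and the disagreement measure are assumptions for illustration, not the exact formulation of any paper listed here.

```python
import numpy as np


def penalized_reward(reward: float, next_state_preds: np.ndarray, lam: float = 1.0) -> float:
    """Uncertainty-penalized reward in the spirit of MOPO.

    next_state_preds: array of shape (ensemble_size, state_dim) holding next-state
    predictions from an ensemble of learned dynamics models; their disagreement
    serves as a rough uncertainty estimate u(s, a). The policy is then trained on
    the pessimistic reward r - lam * u.
    """
    # Disagreement across ensemble members, measured as the largest per-member
    # deviation from the ensemble mean (one of several reasonable choices).
    mean_pred = next_state_preds.mean(axis=0)
    disagreement = np.linalg.norm(next_state_preds - mean_pred, axis=1).max()
    return reward - lam * disagreement


if __name__ == "__main__":
    preds = np.array([[0.10, 0.20], [0.12, 0.18], [0.50, 0.90]])  # one member disagrees
    print(penalized_reward(reward=1.0, next_state_preds=preds, lam=0.5))
```

The larger the disagreement among the dynamics models at a state-action pair, the more the reward is discounted there, discouraging the learned policy from exploiting regions the offline data covers poorly.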