A Regularized Implicit Policy for Offline Reinforcement Learning
- URL: http://arxiv.org/abs/2202.09673v1
- Date: Sat, 19 Feb 2022 20:22:04 GMT
- Title: A Regularized Implicit Policy for Offline Reinforcement Learning
- Authors: Shentao Yang, Zhendong Wang, Huangjie Zheng, Yihao Feng, Mingyuan Zhou
- Abstract summary: offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and an ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
- Score: 54.7427227775581
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Offline reinforcement learning enables learning from a fixed dataset, without
further interactions with the environment. The lack of environmental
interactions makes the policy training vulnerable to state-action pairs far
from the training dataset and prone to missing rewarding actions. For training
more effective agents, we propose a framework that supports learning a flexible
yet well-regularized fully-implicit policy. We further propose a simple
modification to the classical policy-matching methods for regularizing with
respect to the dual form of the Jensen-Shannon divergence and the integral
probability metrics. We theoretically show the correctness of the
policy-matching approach, and the correctness and a good finite-sample property
of our modification. An effective instantiation of our framework through the
GAN structure is provided, together with techniques to explicitly smooth the
state-action mapping for robust generalization beyond the static dataset.
Extensive experiments and an ablation study on the D4RL dataset validate our
framework and the effectiveness of our algorithmic designs.
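The GAN-structure instantiation mentioned in the abstract can be illustrated with a minimal PyTorch sketch: a discriminator on (state, action) pairs supplies the policy-matching regularizer, whose inner maximization corresponds (up to constants) to the dual form of the Jensen-Shannon divergence between the dataset and policy distributions. The module names, network sizes, noise dimension, and the pi(s, z) interface of the implicit policy are illustrative assumptions of this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores (state, action) pairs: dataset pairs ("real") vs. policy pairs ("fake")."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit of D(s, a)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def policy_matching_losses(disc, policy, states, dataset_actions, noise_dim=8):
    """GAN-style losses: the discriminator's inner maximization approximates the
    dual form of the Jensen-Shannon divergence between the dataset and policy
    (s, a) distributions; the returned policy_reg term regularizes the policy."""
    bce = nn.BCEWithLogitsLoss()

    # Sample actions from the fully-implicit policy a = pi(s, z), z ~ N(0, I)
    # (the (s, z) -> a interface is an assumption of this sketch).
    z = torch.randn(states.shape[0], noise_dim, device=states.device)
    policy_actions = policy(states, z)

    # Discriminator loss: dataset pairs toward label 1, policy pairs toward label 0.
    real_logits = disc(states, dataset_actions)
    fake_logits = disc(states, policy_actions.detach())
    disc_loss = bce(real_logits, torch.ones_like(real_logits)) \
              + bce(fake_logits, torch.zeros_like(fake_logits))

    # Policy-matching regularizer (generator loss): decreases as the policy's
    # (s, a) distribution moves toward the dataset's distribution.
    gen_logits = disc(states, policy_actions)
    policy_reg = bce(gen_logits, torch.ones_like(gen_logits))
    return disc_loss, policy_reg
```

In a full algorithm this regularizer would be combined with a value-based policy-improvement term and the state-action smoothing techniques described in the abstract; the sketch covers only the matching term.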
Related papers
- Adaptive Event-triggered Reinforcement Learning Control for Complex Nonlinear Systems [2.08099858257632]
We propose an adaptive event-triggered reinforcement learning control for continuous-time nonlinear systems.
We show that accurate and efficient determination of triggering conditions is possible without the need to explicitly learn the triggering conditions.
arXiv Detail & Related papers (2024-09-29T20:42:19Z) - SAMBO-RL: Shifts-aware Model-based Offline Reinforcement Learning [9.88109749688605]
Model-based Offline Reinforcement Learning trains policies based on offline datasets and model dynamics.
This paper disentangles the problem into two key components: model bias and policy shift.
We introduce Shifts-aware Model-based Offline Reinforcement Learning (SAMBO-RL).
arXiv Detail & Related papers (2024-08-23T04:25:09Z) - Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks.
arXiv Detail & Related papers (2023-08-28T20:46:07Z) - Towards Theoretical Understanding of Data-Driven Policy Refinement [0.0]
This paper presents an approach for data-driven policy refinement in reinforcement learning, specifically designed for safety-critical applications.
Our principal contribution lies in the mathematical formulation of this data-driven policy refinement concept.
We present a series of theorems elucidating key theoretical properties of our approach, including convergence, robustness bounds, generalization error, and resilience to model mismatch.
arXiv Detail & Related papers (2023-05-11T13:36:21Z) - Offline Reinforcement Learning with Closed-Form Policy Improvement Operators [88.54210578912554]
Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning.
In this paper, we propose our closed-form policy improvement operators.
We empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
arXiv Detail & Related papers (2022-11-29T06:29:26Z) - Offline Reinforcement Learning with Adaptive Behavior Regularization [1.491109220586182]
Offline reinforcement learning (RL) defines a sample-efficient learning paradigm, where a policy is learned from static, previously collected datasets.
We propose a novel approach, which we refer to as adaptive behavior regularization (ABR).
ABR enables the policy to adaptively adjust its optimization objective between cloning and improving over the policy used to generate the dataset.
arXiv Detail & Related papers (2022-11-15T15:59:11Z) - Latent-Variable Advantage-Weighted Policy Optimization for Offline RL [70.01851346635637]
Offline reinforcement learning methods hold the promise of learning policies from pre-collected datasets without the need to query the environment for new transitions.
In practice, offline datasets are often heterogeneous, i.e., collected in a variety of scenarios.
We propose to leverage latent-variable policies that can represent a broader class of policy distributions.
Our method improves average performance over the next best-performing offline reinforcement learning methods by 49% on heterogeneous datasets.
arXiv Detail & Related papers (2022-03-16T21:17:03Z) - Non-Stationary Off-Policy Optimization [50.41335279896062]
We study the novel problem of off-policy optimization in piecewise-stationary contextual bandits.
In the offline learning phase, we partition logged data into categorical latent states and learn a near-optimal sub-policy for each state.
In the online deployment phase, we adaptively switch between the learned sub-policies based on their performance.
arXiv Detail & Related papers (2020-06-15T09:16:09Z) - Deep Reinforcement Learning with Robust and Smooth Policy [90.78795857181727]
We propose to learn a smooth policy that behaves smoothly with respect to states.
We develop a new framework, Smooth Regularized Reinforcement Learning (SR^2L), where the policy is trained with smoothness-inducing regularization.
Such regularization effectively constrains the search space and enforces smoothness in the learned policy (a simplified sketch of such a regularizer appears after this list).
arXiv Detail & Related papers (2020-03-21T00:10:29Z)
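The smoothness-inducing regularization in the SR^2L entry above can be sketched as follows. This simplified version uses a random Gaussian state perturbation in place of the adversarial perturbation of the original method, and it assumes a deterministic policy interface policy(states) -> actions; both are assumptions of this sketch.

```python
import torch

def smoothness_regularizer(policy, states, eps=1e-2):
    """Penalize the change in the policy's action under small state perturbations.
    Simplified stand-in: random (not adversarial) perturbations of scale eps."""
    perturbed_states = states + eps * torch.randn_like(states)
    actions = policy(states)
    perturbed_actions = policy(perturbed_states)
    # Mean squared discrepancy between actions taken at nearby states.
    return ((actions - perturbed_actions) ** 2).mean()
```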