BRAC+: Improved Behavior Regularized Actor Critic for Offline
Reinforcement Learning
- URL: http://arxiv.org/abs/2110.00894v1
- Date: Sat, 2 Oct 2021 23:55:49 GMT
- Title: BRAC+: Improved Behavior Regularized Actor Critic for Offline
Reinforcement Learning
- Authors: Chi Zhang, Sanmukh Rao Kuppannagari, Viktor K Prasanna
- Abstract summary: Offline Reinforcement Learning aims to train effective policies using previously collected datasets.
Standard off-policy RL algorithms are prone to overestimations of the values of out-of-distribution (less explored) actions.
We improve behavior regularized offline reinforcement learning and propose BRAC+.
- Score: 14.432131909590824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online interaction with the environment to collect data samples for training
a Reinforcement Learning (RL) agent is not always feasible due to economic and
safety concerns. The goal of Offline Reinforcement Learning is to address this
problem by learning effective policies using previously collected datasets.
Standard off-policy RL algorithms are prone to overestimations of the values of
out-of-distribution (less explored) actions and are hence unsuitable for
Offline RL. Behavior regularization, which constrains the learned policy
within the support set of the dataset, has been proposed to tackle the
limitations of standard off-policy algorithms. In this paper, we improve
behavior regularized offline reinforcement learning and propose BRAC+. First,
we propose a quantification of out-of-distribution actions and compare
Kullback-Leibler (KL) divergence with Maximum Mean Discrepancy as the
regularization protocol. We propose an analytical upper bound on the KL
divergence as the behavior regularizer to reduce the variance associated with
sample-based estimation. Second, we mathematically show that, under mild
assumptions, the learned Q values can diverge even with behavior regularized
policy updates. This leads to large overestimations of the Q values and
performance deterioration of the learned policy. To mitigate this issue, we add
a gradient penalty term to the policy evaluation objective. By doing so, the Q
values are guaranteed to converge. On challenging offline RL benchmarks, BRAC+
outperforms the baseline behavior regularized approaches by 40% to 87% and the
state-of-the-art approach by 6%.
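The abstract names two concrete mechanisms: an analytical (closed-form) KL term used as the behavior regularizer, and a gradient penalty added to the policy evaluation (critic) objective. The sketch below illustrates both, assuming diagonal Gaussian learned and behavior policies; the closed-form Gaussian KL stands in for the paper's analytical upper bound, and taking the penalty with respect to actions, as well as the coefficients alpha and lam, are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of the two BRAC+ ingredients named in the abstract, assuming
# diagonal Gaussian learned and behavior policies. The closed-form Gaussian KL
# stands in for the paper's analytical upper bound, and the gradient-penalty
# form and coefficients (alpha, lam) are illustrative assumptions.
import torch

def gaussian_kl(mu_p, log_std_p, mu_q, log_std_q):
    """Closed-form KL(p || q) for diagonal Gaussians (no sampling noise)."""
    var_p, var_q = (2 * log_std_p).exp(), (2 * log_std_q).exp()
    return 0.5 * ((var_p + (mu_p - mu_q) ** 2) / var_q
                  + 2 * (log_std_q - log_std_p) - 1).sum(dim=-1)

def actor_loss(q_net, policy, behavior, states, alpha=1.0):
    """Behavior-regularized improvement: maximize Q, penalize KL to the
    (estimated) behavior policy."""
    mu_p, log_std_p = policy(states)
    actions = mu_p + log_std_p.exp() * torch.randn_like(mu_p)  # reparameterized
    mu_b, log_std_b = behavior(states)  # fitted behavior model
    kl = gaussian_kl(mu_p, log_std_p, mu_b, log_std_b)
    return (-q_net(states, actions).squeeze(-1) + alpha * kl).mean()

def critic_loss(q_net, target_q_net, policy, batch, gamma=0.99, lam=10.0):
    """Policy evaluation (TD) plus a gradient penalty on Q at policy actions,
    intended to keep Q values from diverging on out-of-distribution actions."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        mu2, log_std2 = policy(s2)
        a2 = mu2 + log_std2.exp() * torch.randn_like(mu2)
        target = r + gamma * (1.0 - done) * target_q_net(s2, a2).squeeze(-1)
    td = (q_net(s, a).squeeze(-1) - target).pow(2).mean()
    mu, log_std = policy(s)
    a_pi = (mu + log_std.exp() * torch.randn_like(mu)).detach().requires_grad_(True)
    grad = torch.autograd.grad(q_net(s, a_pi).sum(), a_pi, create_graph=True)[0]
    return td + lam * grad.pow(2).sum(dim=-1).mean()
```

Read this way, the analytical KL avoids the Monte Carlo noise of sample-based divergence estimates, while the gradient penalty limits how sharply Q can grow along action directions the dataset does not cover.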
Related papers
- Statistically Efficient Variance Reduction with Double Policy Estimation
for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks.
arXiv Detail & Related papers (2023-08-28T20:46:07Z)
- Offline Reinforcement Learning with Adaptive Behavior Regularization [1.491109220586182]
Offline reinforcement learning (RL) defines a sample-efficient learning paradigm, where a policy is learned from static and previously collected datasets.
We propose a novel approach, which we refer to as adaptive behavior regularization (ABR).
ABR enables the policy to adaptively adjust its optimization objective between cloning and improving over the policy used to generate the dataset.
arXiv Detail & Related papers (2022-11-15T15:59:11Z)
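The ABR summary only says that the objective adapts between cloning and improving on the data-generating policy; the rule that sets the trade-off is not given here. The sketch below is therefore a hypothetical reading: a single weight w interpolates between a behavior-cloning term and a Q-maximization term, with an advantage-based heuristic standing in for whatever adaptation rule ABR actually uses.

```python
# Hypothetical sketch of an "adaptive behavior regularization" style actor
# loss: w in (0, 1) trades off behavior cloning against policy improvement.
# The advantage-based rule for w is a placeholder, not ABR's actual mechanism.
import torch

def adaptive_actor_loss(policy, q_net, v_net, states, dataset_actions):
    mu, log_std = policy(states)
    # Cloning term: Gaussian negative log-likelihood of the dataset actions.
    nll = (0.5 * ((dataset_actions - mu) / log_std.exp()) ** 2 + log_std).sum(-1)
    # Improvement term: maximize Q at reparameterized policy actions.
    a_pi = mu + log_std.exp() * torch.randn_like(mu)
    improve = -q_net(states, a_pi).squeeze(-1)
    with torch.no_grad():
        # Placeholder adaptation: clone more where the dataset action already
        # beats the state value estimate, otherwise lean on improvement.
        adv = q_net(states, dataset_actions).squeeze(-1) - v_net(states).squeeze(-1)
        w = torch.sigmoid(adv)
    return (w * nll + (1.0 - w) * improve).mean()
```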
- Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning [125.8224674893018]
Offline Reinforcement Learning (RL) aims to learn policies from previously collected datasets without exploring the environment.
Applying off-policy algorithms to offline RL usually fails due to the extrapolation error caused by out-of-distribution (OOD) actions.
We propose Pessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven offline algorithm without explicit policy constraints.
arXiv Detail & Related papers (2022-02-23T15:27:16Z)
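PBRL is described as purely uncertainty-driven, with no explicit policy constraint. A natural reading of "pessimistic bootstrapping" is an ensemble of Q-networks whose disagreement is subtracted from the bootstrap target; the ensemble size, the use of the standard deviation, and beta below are assumptions rather than the paper's exact construction.

```python
# Sketch of an uncertainty-penalized bootstrap target in the spirit of
# "pessimistic bootstrapping": an ensemble of Q-networks is trained and the
# target is the ensemble mean minus beta times the ensemble's standard
# deviation. Ensemble size, beta, and the penalty form are assumptions.
import torch
import torch.nn as nn

class QEnsemble(nn.Module):
    def __init__(self, state_dim, action_dim, n_members=5, hidden=256):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_members)])

    def forward(self, states, actions):
        x = torch.cat([states, actions], dim=-1)
        return torch.stack([m(x).squeeze(-1) for m in self.members])  # (n, batch)

def pessimistic_target(ensemble, rewards, dones, next_states, next_actions,
                       gamma=0.99, beta=1.0):
    """Bootstrap target penalized by ensemble disagreement (uncertainty proxy)."""
    with torch.no_grad():
        q_all = ensemble(next_states, next_actions)      # (n_members, batch)
        pessimistic_q = q_all.mean(0) - beta * q_all.std(0)
        return rewards + gamma * (1.0 - dones) * pessimistic_q
```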
- Curriculum Offline Imitation Learning [72.1015201041391]
Offline reinforcement learning tasks require the agent to learn from a pre-collected dataset with no further interactions with the environment.
We propose Curriculum Offline Imitation Learning (COIL), which utilizes an experience picking strategy to imitate adaptive neighboring policies with higher returns.
On continuous control benchmarks, we compare COIL against both imitation-based and RL-based methods, showing that it not only avoids merely learning mediocre behavior on mixed datasets but is even competitive with state-of-the-art offline RL methods.
arXiv Detail & Related papers (2021-11-03T08:02:48Z)
- Offline Reinforcement Learning with Implicit Q-Learning [85.62618088890787]
Current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy.
We propose an offline RL method that never needs to evaluate actions outside of the dataset.
This method enables the learned policy to improve substantially over the best behavior in the data through generalization.
arXiv Detail & Related papers (2021-10-12T17:05:05Z)
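In its published form, Implicit Q-Learning avoids evaluating actions outside the dataset by fitting a state-value function with expectile regression on Q-values of dataset actions, bootstrapping Q from that value function, and extracting the policy with advantage-weighted regression. The sketch below follows that recipe; the expectile tau, temperature beta, clipping, and network interfaces are illustrative.

```python
# Sketch of the IQL losses: V is fit to Q(s, a) on dataset actions with an
# asymmetric expectile loss, Q bootstraps from V, and the policy is extracted
# by advantage-weighted regression. tau, beta, and clipping are illustrative.
import torch

def expectile_loss(diff, tau=0.7):
    """Asymmetric squared loss: weight tau on positive residuals, 1 - tau otherwise."""
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

def value_loss(v_net, target_q_net, states, actions, tau=0.7):
    with torch.no_grad():
        q = target_q_net(states, actions).squeeze(-1)     # dataset actions only
    return expectile_loss(q - v_net(states).squeeze(-1), tau)

def q_loss(q_net, v_net, states, actions, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        target = rewards + gamma * (1.0 - dones) * v_net(next_states).squeeze(-1)
    return (q_net(states, actions).squeeze(-1) - target).pow(2).mean()

def policy_loss(policy, q_net, v_net, states, actions, beta=3.0):
    """Advantage-weighted regression toward dataset actions."""
    with torch.no_grad():
        adv = q_net(states, actions).squeeze(-1) - v_net(states).squeeze(-1)
        weights = torch.clamp((beta * adv).exp(), max=100.0)
    log_prob = policy.log_prob(states, actions)  # assumed policy interface
    return -(weights * log_prob).mean()
```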
- Offline Reinforcement Learning with Fisher Divergence Critic Regularization [41.085156836450466]
We propose an alternative approach to encouraging the learned policy to stay close to the data, namely parameterizing the critic as the log-behavior-policy plus an offset term.
Behavior regularization then corresponds to an appropriate regularizer on the offset term.
Our algorithm Fisher-BRC achieves both improved performance and faster convergence over existing state-of-the-art methods.
arXiv Detail & Related papers (2021-03-14T22:11:40Z)
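The summary above describes the critic as the log-behavior-policy plus an offset, with behavior regularization acting on the offset. In the published Fisher-BRC that regularizer is a gradient penalty on the offset with respect to actions (a Fisher-divergence term); the behavior-model interface and the coefficient below are assumptions.

```python
# Sketch of a Fisher-BRC style critic: Q(s, a) = offset(s, a) + log pi_b(a | s),
# with the offset regularized by a gradient penalty w.r.t. actions. The fitted
# behavior model's interface and the coefficient lam are assumptions.
import torch

def critic_value(offset_net, behavior_log_prob, states, actions):
    """Critic = learnable offset plus log-density of a fitted behavior model."""
    return offset_net(states, actions).squeeze(-1) + behavior_log_prob(states, actions)

def offset_gradient_penalty(offset_net, states, policy_actions, lam=0.1):
    """Penalize ||d offset / d a||^2 at actions drawn from the learned policy."""
    a = policy_actions.detach().requires_grad_(True)
    grad = torch.autograd.grad(offset_net(states, a).sum(), a, create_graph=True)[0]
    return lam * grad.pow(2).sum(dim=-1).mean()
```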
- Continuous Doubly Constrained Batch Reinforcement Learning [93.23842221189658]
We propose an algorithm for batch RL, where effective policies are learned using only a fixed offline dataset instead of online interactions with the environment.
The limited data in batch RL produces inherent uncertainty in value estimates of states/actions that were insufficiently represented in the training data.
We propose to mitigate this issue via two straightforward penalties: a policy-constraint that reduces the divergence between the learned policy and the behavior policy, and a value-constraint that discourages overly optimistic estimates.
arXiv Detail & Related papers (2021-02-18T08:54:14Z)
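The summary above names two penalties but not their form. Purely as a placeholder, the snippet below shows one way to write the value-constraint, penalizing Q at policy actions only where it exceeds Q at the dataset action for the same state; the policy-constraint would be a divergence term like the KL sketched earlier.

```python
# Placeholder for the "value-constraint" described above: penalize Q at policy
# actions only where it exceeds Q at the dataset action for the same state.
# The actual penalty forms are not specified in the summary; this is illustrative.
import torch

def value_constraint_penalty(q_net, states, dataset_actions, policy_actions, kappa=1.0):
    q_pi = q_net(states, policy_actions).squeeze(-1)
    q_data = q_net(states, dataset_actions).squeeze(-1).detach()
    return kappa * torch.relu(q_pi - q_data).mean()  # penalize only the optimistic gap
```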
- Conservative Q-Learning for Offline Reinforcement Learning [106.05582605650932]
We show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return.
We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees.
arXiv Detail & Related papers (2020-06-08T17:53:42Z)
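CQL's published regularizer (the CQL(H) variant) pushes Q-values down on broadly sampled and policy actions through a log-sum-exp and pushes them up on dataset actions, which underlies the lower-bound property mentioned above. The sketch below omits CQL's importance-sampling correction; the number of sampled actions and alpha are illustrative choices.

```python
# Sketch of the conservative regularizer in CQL (the CQL(H) variant): push Q
# down on random and policy actions via a log-sum-exp, push it up on dataset
# actions, and add the result to a standard TD loss. Importance-sampling
# corrections are omitted; n_samples and alpha are illustrative choices.
import torch

def cql_regularizer(q_net, policy_sample, states, dataset_actions, action_dim,
                    n_samples=10, alpha=5.0):
    batch = states.shape[0]
    # Uniform actions in [-1, 1] plus actions sampled from the learned policy.
    rand_a = torch.rand(n_samples, batch, action_dim, device=states.device) * 2 - 1
    pi_a = torch.stack([policy_sample(states) for _ in range(n_samples)])

    def q_stack(action_sets):
        return torch.stack([q_net(states, a).squeeze(-1) for a in action_sets])

    cat_q = torch.cat([q_stack(rand_a), q_stack(pi_a)], dim=0)     # (2n, batch)
    push_down = torch.logsumexp(cat_q, dim=0)                      # soft max over actions
    push_up = q_net(states, dataset_actions).squeeze(-1)           # Q on data
    return alpha * (push_down - push_up).mean()
```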