Efficient Diffusion Policies for Offline Reinforcement Learning
- URL: http://arxiv.org/abs/2305.20081v2
- Date: Thu, 26 Oct 2023 12:25:02 GMT
- Title: Efficient Diffusion Policies for Offline Reinforcement Learning
- Authors: Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, Shuicheng Yan
- Abstract summary: Diffusion-QL significantly boosts the performance of offline RL by representing a policy with a diffusion model.
We propose efficient diffusion policy (EDP) to overcome these two challenges.
EDP constructs actions from corrupted ones at training to avoid running the sampling chain.
- Score: 85.73757789282212
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Offline reinforcement learning (RL) aims to learn optimal policies from
offline datasets, where the parameterization of policies is crucial but often
overlooked. Recently, Diffusion-QL has significantly boosted the performance of
offline RL by representing a policy with a diffusion model, whose success
relies on a parametrized Markov chain with hundreds of steps for sampling.
However, Diffusion-QL suffers from two critical limitations. 1) It is
computationally inefficient to forward and backward through the whole Markov
chain during training. 2) It is incompatible with maximum likelihood-based RL
algorithms (e.g., policy gradient methods) as the likelihood of diffusion
models is intractable. Therefore, we propose efficient diffusion policy (EDP)
to overcome these two challenges. EDP approximately constructs actions from
corrupted ones at training to avoid running the sampling chain. We conduct
extensive experiments on the D4RL benchmark. The results show that EDP can
reduce the diffusion policy training time from 5 days to 5 hours on
gym-locomotion tasks. Moreover, we show that EDP is compatible with various
offline RL algorithms (TD3, CRR, and IQL) and achieves new state-of-the-art on
D4RL by large margins over previous methods. Our code is available at
https://github.com/sail-sg/edp.
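As a rough illustration of how actions can be constructed from corrupted ones in a single step (the mechanism the abstract describes), the sketch below uses standard DDPM algebra; the noise-prediction network eps_model and all variable names are assumptions for illustration, not the authors' implementation.

import torch

def approx_action(eps_model, state, action, alpha_bar, k):
    # One-step reconstruction of a clean action from a corrupted dataset action
    # (standard DDPM algebra). eps_model(a_k, k, state) is an assumed noise-prediction
    # network; alpha_bar is the cumulative noise schedule (shape [K]); k holds
    # per-sample diffusion timesteps (shape [batch]).
    noise = torch.randn_like(action)
    ab = alpha_bar[k].unsqueeze(-1)                        # \bar{alpha}_k, broadcast over action dims
    a_k = ab.sqrt() * action + (1.0 - ab).sqrt() * noise   # corrupt the dataset action
    eps_hat = eps_model(a_k, k, state)                     # predict the injected noise
    a0_hat = (a_k - (1.0 - ab).sqrt() * eps_hat) / ab.sqrt()  # single-step estimate of the clean action
    return a0_hat  # stands in for running the full reverse sampling chain during training

Because the reconstruction needs only one network call, gradients from a critic-based objective can flow through a0_hat without backpropagating through hundreds of denoising steps.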
Related papers
- Diffusion Policies creating a Trust Region for Offline Reinforcement Learning [66.17291150498276]
We introduce a dual policy approach, Diffusion Trusted Q-Learning (DTQL), which comprises a diffusion policy for pure behavior cloning and a practical one-step policy.
DTQL eliminates the need for iterative denoising sampling during both training and inference, making it remarkably computationally efficient.
We show that DTQL not only outperforms other methods on the majority of the D4RL benchmark tasks but is also more efficient in training and inference speed.
arXiv Detail & Related papers (2024-05-30T05:04:33Z)
- DiffCPS: Diffusion Model based Constrained Policy Search for Offline Reinforcement Learning [11.678012836760967]
Constrained policy search is a fundamental problem in offline reinforcement learning.
We propose a novel approach, Diffusion-based Constrained Policy Search (dubbed DiffCPS).
arXiv Detail & Related papers (2023-10-09T01:29:17Z)
- Behavior Proximal Policy Optimization [14.701955559885615]
Offline reinforcement learning (RL) is a challenging setting where existing off-policy actor-critic methods perform poorly.
Online on-policy algorithms are naturally able to solve offline RL.
We propose Behavior Proximal Policy Optimization (BPPO), which solves offline RL without any extra constraint or regularization.
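As a rough, non-authoritative illustration of the on-policy machinery involved, a PPO-style clipped surrogate evaluated on offline transitions can be written as below; the advantage estimate and all names are assumptions, and the exact BPPO objective is given in the paper.

import torch

def clipped_surrogate(logp_new, logp_old, advantage, clip_eps=0.2):
    # Generic PPO-style clipped objective on offline transitions (schematic).
    # logp_old comes from a frozen behavior policy, logp_new from the policy being
    # improved, and advantage from a learned critic; all names are illustrative.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -torch.min(unclipped, clipped).mean()  # minimize the negative surrogate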
arXiv Detail & Related papers (2023-02-22T11:49:12Z)
- Dual RL: Unification and New Methods for Reinforcement and Imitation Learning [26.59374102005998]
We first cast several state-of-the-art offline RL and offline imitation learning (IL) algorithms as instances of dual RL approaches with shared structures.
We propose a new discriminator-free method ReCOIL that learns to imitate from arbitrary off-policy data to obtain near-expert performance.
For offline RL, our analysis frames a recent offline RL method XQL in the dual framework, and we further propose a new method f-DVL that provides alternative choices to the Gumbel regression loss.
arXiv Detail & Related papers (2023-02-16T20:10:06Z)
- Offline Policy Optimization in RL with Variance Regularization [142.87345258222942]
We propose variance regularization for offline RL algorithms, using stationary distribution corrections.
We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer.
The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithms.
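A standard duality identity illustrates how a dual variable removes the double-sampling issue (the generic trick, not necessarily the paper's exact formulation): Var(X) = E[X^2] - (E[X])^2 = min_nu E[(X - nu)^2], with the minimum attained at nu = E[X]. The right-hand side can be estimated and differentiated from single samples of X, whereas the (E[X])^2 term alone would require two independent samples.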
arXiv Detail & Related papers (2022-12-29T18:25:01Z)
- Boosting Offline Reinforcement Learning via Data Rebalancing [104.3767045977716]
Offline reinforcement learning (RL) is challenged by the distributional shift between learning policies and datasets.
We propose a simple yet effective method to boost offline RL algorithms based on the observation that resampling a dataset keeps the distribution support unchanged.
We dub our method ReD (Return-based Data Rebalance), which can be implemented with less than 10 lines of code change and adds negligible running time.
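A minimal sketch of return-based rebalancing in this spirit (the exact weighting used by ReD may differ): episodes are resampled with probability increasing in their return, which reweights the data density while leaving its support unchanged.

import numpy as np

def return_weighted_indices(episode_returns, num_samples, temperature=1.0, rng=None):
    # Sample episode indices with probability increasing in episode return (schematic).
    # The softmax weighting and the temperature parameter are illustrative choices.
    rng = rng or np.random.default_rng()
    r = np.asarray(episode_returns, dtype=np.float64)
    w = np.exp((r - r.max()) / max(temperature, 1e-8))  # numerically stable softmax weights
    p = w / w.sum()
    return rng.choice(len(r), size=num_samples, p=p)    # support unchanged, density reweighted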
arXiv Detail & Related papers (2022-10-17T16:34:01Z)
- Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning [70.20191211010847]
Offline reinforcement learning (RL) aims to learn an optimal policy using a previously collected static dataset.
We introduce Diffusion Q-learning (Diffusion-QL) that utilizes a conditional diffusion model to represent the policy.
We show that our method can achieve state-of-the-art performance on the majority of the D4RL benchmark tasks.
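A hedged sketch of the training signal described above: a denoising behavior-cloning loss on dataset actions plus a term that pushes policy-sampled actions toward high Q-values. The method names bc_loss and sample and the coefficient eta are assumptions for illustration.

import torch

def diffusion_ql_loss(diffusion_policy, q_net, state, action, eta=1.0):
    # Schematic Diffusion-QL-style objective: stay close to the data distribution
    # (diffusion behavior cloning) while preferring actions the critic rates highly.
    bc = diffusion_policy.bc_loss(state, action)   # denoising loss on dataset actions (assumed method)
    a_pi = diffusion_policy.sample(state)          # action drawn from the reverse chain (assumed method)
    q_term = -q_net(state, a_pi).mean()            # encourage high-value actions
    return bc + eta * q_term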
arXiv Detail & Related papers (2022-08-12T09:54:11Z)
- MOPO: Model-based Offline Policy Optimization [183.6449600580806]
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data.
We show that an existing model-based RL algorithm already produces significant gains in the offline setting.
We propose to modify existing model-based RL methods by penalizing the reward with the uncertainty of the learned dynamics.
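A minimal sketch of the uncertainty-penalized reward described above, assuming the uncertainty u(s, a) is estimated by the disagreement of an ensemble of learned dynamics/reward models and lam is a penalty coefficient (both assumptions for illustration):

import numpy as np

def penalized_reward(reward_preds, lam=1.0):
    # MOPO-style penalty (schematic): r_tilde(s, a) = r_hat(s, a) - lam * u(s, a).
    # reward_preds has shape [ensemble_size, batch]; ensemble std is one common
    # uncertainty proxy, not necessarily the paper's exact estimator.
    r_hat = reward_preds.mean(axis=0)   # ensemble mean reward
    u = reward_preds.std(axis=0)        # uncertainty proxy u(s, a)
    return r_hat - lam * u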
arXiv Detail & Related papers (2020-05-27T08:46:41Z)