Efficient Diffusion Policies for Offline Reinforcement Learning
- URL: http://arxiv.org/abs/2305.20081v2
- Date: Thu, 26 Oct 2023 12:25:02 GMT
- Title: Efficient Diffusion Policies for Offline Reinforcement Learning
- Authors: Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, Shuicheng Yan
- Abstract summary: Diffusion-QL significantly boosts the performance of offline RL by representing a policy with a diffusion model.
We propose efficient diffusion policy (EDP) to overcome these two challenges.
EDP approximately constructs actions from corrupted ones during training to avoid running the sampling chain.
- Score: 85.73757789282212
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Offline reinforcement learning (RL) aims to learn optimal policies from
offline datasets, where the parameterization of policies is crucial but often
overlooked. Recently, Diffusion-QL has significantly boosted the performance of
offline RL by representing a policy with a diffusion model, whose success
relies on a parameterized Markov chain with hundreds of steps for sampling.
However, Diffusion-QL suffers from two critical limitations. 1) It is
computationally inefficient to forward and backward through the whole Markov
chain during training. 2) It is incompatible with maximum likelihood-based RL
algorithms (e.g., policy gradient methods) as the likelihood of diffusion
models is intractable. Therefore, we propose the efficient diffusion policy (EDP)
to overcome these two challenges. EDP approximately constructs actions from
corrupted ones during training, avoiding the need to run the full sampling chain
(see the sketch after this abstract). We conduct
extensive experiments on the D4RL benchmark. The results show that EDP can
reduce the diffusion policy training time from 5 days to 5 hours on
gym-locomotion tasks. Moreover, we show that EDP is compatible with various
offline RL algorithms (TD3, CRR, and IQL) and achieves new state-of-the-art on
D4RL by large margins over previous methods. Our code is available at
https://github.com/sail-sg/edp.
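The abstract's key idea is to construct actions approximately from corrupted ones rather than running the full reverse diffusion chain. The following is a minimal sketch of that one-step action approximation, assuming a DDPM-style noise-prediction parameterization; the names (eps_model, alpha_bar, approximate_action) are illustrative placeholders, not EDP's actual API.

```python
import torch

def approximate_action(eps_model, state, action, alpha_bar, k):
    """One-step action approximation (illustrative sketch).

    Instead of running the full reverse diffusion chain, corrupt the
    dataset action and recover a clean-action estimate in a single step
    from the predicted noise. Assumes a DDPM-style noise predictor
    eps_model(noisy_action, k, state) and a cumulative schedule alpha_bar.
    """
    noise = torch.randn_like(action)
    a_bar = alpha_bar[k].view(-1, 1)                      # cumulative noise schedule term
    noisy_action = a_bar.sqrt() * action + (1 - a_bar).sqrt() * noise
    eps_hat = eps_model(noisy_action, k, state)           # predicted corruption noise
    # Single-step reconstruction of the clean action (no sampling chain needed)
    a0_hat = (noisy_action - (1 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()
    return a0_hat.clamp(-1.0, 1.0)
```

In this sketch, the one-step estimate would be the action fed to a Q-function during policy improvement; the abstract reports that EDP trained in this manner is compatible with TD3-, CRR-, and IQL-style objectives.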
Related papers
- Flow Q-Learning [61.60383927357656]
We present flow Q-learning (FQL), a simple and performant offline reinforcement learning (RL) method.
FQL trains an expressive one-step policy with RL, rather than directly guiding an iterative flow policy to maximize values.
We experimentally show that FQL leads to strong performance across 73 challenging state- and pixel-based OGBench and D4RL tasks.
arXiv Detail & Related papers (2025-02-04T18:04:05Z)
- Policy Agnostic RL: Offline RL and Online RL Fine-Tuning of Any Class and Backbone [72.17534881026995]
We develop an offline and online fine-tuning approach called policy-agnostic RL (PA-RL).
We show the first result that successfully fine-tunes OpenVLA, a 7B generalist robot policy, autonomously with Cal-QL, an online RL fine-tuning algorithm.
arXiv Detail & Related papers (2024-12-09T17:28:03Z)
- Diffusion Policies creating a Trust Region for Offline Reinforcement Learning [66.17291150498276]
We introduce a dual policy approach, Diffusion Trusted Q-Learning (DTQL), which comprises a diffusion policy for pure behavior cloning and a practical one-step policy.
DTQL eliminates the need for iterative denoising sampling during both training and inference, making it remarkably computationally efficient.
We show that DTQL not only outperforms other methods on the majority of the D4RL benchmark tasks but is also efficient in both training and inference speed.
arXiv Detail & Related papers (2024-05-30T05:04:33Z)
- DiffCPS: Diffusion Model based Constrained Policy Search for Offline Reinforcement Learning [11.678012836760967]
Constrained policy search is a fundamental problem in offline reinforcement learning.
We propose a novel approach, Diffusion-based Constrained Policy Search (dubbed DiffCPS).
arXiv Detail & Related papers (2023-10-09T01:29:17Z)
- Dual RL: Unification and New Methods for Reinforcement and Imitation Learning [26.59374102005998]
We first cast several state-of-the-art offline RL and offline imitation learning (IL) algorithms as instances of dual RL approaches with shared structures.
We propose a new discriminator-free method ReCOIL that learns to imitate from arbitrary off-policy data to obtain near-expert performance.
For offline RL, our analysis frames a recent offline RL method XQL in the dual framework, and we further propose a new method f-DVL that provides alternative choices to the Gumbel regression loss.
arXiv Detail & Related papers (2023-02-16T20:10:06Z)
- Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning [70.20191211010847]
Offline reinforcement learning (RL) aims to learn an optimal policy using a previously collected static dataset.
We introduce Diffusion Q-learning (Diffusion-QL) that utilizes a conditional diffusion model to represent the policy.
We show that our method can achieve state-of-the-art performance on the majority of the D4RL benchmark tasks.
arXiv Detail & Related papers (2022-08-12T09:54:11Z)
- MOPO: Model-based Offline Policy Optimization [183.6449600580806]
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data.
We show that an existing model-based RL algorithm already produces significant gains in the offline setting.
We propose to modify existing model-based RL methods by penalizing rewards with an estimate of the learned dynamics' uncertainty (a minimal sketch of this penalty follows the list).
arXiv Detail & Related papers (2020-05-27T08:46:41Z)
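As a rough illustration of the uncertainty-penalized reward mentioned in the MOPO entry above, here is a minimal sketch. It assumes an ensemble of learned dynamics models whose prediction disagreement serves as the uncertainty estimate; the names (penalized_reward, lam) are hypothetical and this is not MOPO's actual implementation.

```python
import numpy as np

def penalized_reward(reward_hat, next_state_preds, lam=1.0):
    """Uncertainty-penalized reward (illustrative sketch).

    reward_hat: reward predicted by the learned model for one (s, a) pair.
    next_state_preds: array of shape (ensemble_size, state_dim) holding each
        ensemble member's next-state prediction for the same (s, a).
    lam: penalty coefficient trading off return against model uncertainty.
    """
    # Use ensemble disagreement as a proxy for dynamics uncertainty.
    uncertainty = np.std(next_state_preds, axis=0).max()
    return reward_hat - lam * uncertainty
```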