Boosting Continuous Control with Consistency Policy
- URL: http://arxiv.org/abs/2310.06343v2
- Date: Wed, 24 Jan 2024 04:44:58 GMT
- Title: Boosting Continuous Control with Consistency Policy
- Authors: Yuhui Chen, Haoran Li, Dongbin Zhao
- Abstract summary: We propose a novel time-efficient method named Consistency Policy with Q-Learning (CPQL).
By establishing a mapping from the reverse diffusion trajectories to the desired policy, we simultaneously address the issues of time efficiency and inaccurate guidance.
CPQL achieves new state-of-the-art performance on 11 offline and 21 online tasks, significantly improving inference speed by nearly 45 times compared to Diffusion-QL.
- Score: 14.78980095597872
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to its training stability and strong expressive power, the diffusion model has
attracted considerable attention in offline reinforcement learning. However,
several challenges come with it: 1) the demand for a large number of
diffusion steps makes diffusion-model-based methods time-inefficient and
limits their application in real-time control; 2) how to achieve policy
improvement with accurate guidance for a diffusion-model-based policy is still an
open problem. Inspired by the consistency model, we propose a novel
time-efficient method named Consistency Policy with Q-Learning (CPQL), which
derives actions from noise in a single step. By establishing a mapping from the
reverse diffusion trajectories to the desired policy, we simultaneously address
the issues of time efficiency and inaccurate guidance when updating the
diffusion-model-based policy with the learned Q-function. We demonstrate that CPQL can
achieve policy improvement with accurate guidance for offline reinforcement
learning, and can be seamlessly extended for online RL tasks. Experimental
results indicate that CPQL achieves new state-of-the-art performance on 11
offline and 21 online tasks, significantly improving inference speed by nearly
45 times compared to Diffusion-QL. We will release our code later.
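As a rough illustration of the single-step idea described in the abstract, the sketch below shows a consistency policy that maps a state and a noise sample to an action in one forward pass, trained with a denoising/consistency behavior term plus guidance from a learned Q-function. This is a hedged sketch under assumed interfaces, network sizes, and noise schedules; it is not the authors' implementation.

```python
# Minimal sketch of the CPQL idea: a consistency policy produces an action from
# noise in one step, and is trained with a denoising/consistency behavior term
# plus Q-function guidance. All names and hyperparameters are illustrative
# assumptions, not the authors' released code.
import torch
import torch.nn as nn

class ConsistencyPolicy(nn.Module):
    """f_theta(state, noisy_action, sigma) -> clean action (single step)."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.Mish(),
            nn.Linear(hidden, hidden), nn.Mish(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state, noisy_action, sigma):
        return self.net(torch.cat([state, noisy_action, sigma], dim=-1))

def cpql_loss(policy, critic, state, action, eta=1.0, sigma_max=80.0):
    """Behavior (consistency/denoising) term + Q-guidance term."""
    # (1) Reconstruct the dataset action from a noised copy of it -- a simplified
    #     stand-in for the full consistency objective over adjacent noise levels.
    sigma = torch.rand(action.shape[0], 1, device=action.device) * sigma_max
    noisy = action + sigma * torch.randn_like(action)
    bc_term = ((policy(state, noisy, sigma) - action) ** 2).mean()

    # (2) Actions generated in a single step from pure noise should score highly
    #     under the learned critic (the "accurate guidance" part of the paper).
    t_max = torch.full_like(sigma, sigma_max)
    generated = policy(state, sigma_max * torch.randn_like(action), t_max)
    q_term = -critic(state, generated).mean()

    return bc_term + eta * q_term
```

At inference time a single forward pass from pure noise replaces the tens of reverse-diffusion steps, which is where the reported speedup over Diffusion-QL comes from.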
Related papers
- Diffusion Policies creating a Trust Region for Offline Reinforcement Learning [66.17291150498276]
We introduce a dual policy approach, Diffusion Trusted Q-Learning (DTQL), which comprises a diffusion policy for pure behavior cloning and a practical one-step policy.
DTQL eliminates the need for iterative denoising sampling during both training and inference, making it remarkably computationally efficient.
We show that DTQL not only outperforms other methods on the majority of the D4RL benchmark tasks but also delivers strong training and inference efficiency.
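A hedged sketch of the dual-policy idea (assumed interfaces, not the DTQL code): the diffusion behavior-cloning model's denoising loss, evaluated on actions proposed by the cheap one-step policy, acts as a trust-region penalty while that policy maximizes the critic.

```python
# Illustrative sketch only: `diffusion_bc.denoising_loss` is an assumed method
# returning the epsilon-prediction MSE for (state, action); a low value means
# the proposed action stays close to the behavior distribution.
import torch

def one_step_policy_loss(one_step_pi, diffusion_bc, critic, state, alpha=1.0):
    a = one_step_pi(state)                    # single forward pass, no denoising chain
    q_term = -critic(state, a).mean()         # policy improvement
    trust_region = diffusion_bc.denoising_loss(state, a).mean()
    return q_term + alpha * trust_region
```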
arXiv Detail & Related papers (2024-05-30T05:04:33Z)
- Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization [55.97310586039358]
Diffusion models have garnered widespread attention in Reinforcement Learning (RL) for their powerful expressiveness and multimodality.
We propose a novel model-free diffusion-based online RL algorithm, Q-weighted Variational Policy Optimization (QVPO)
Specifically, we introduce the Q-weighted variational loss, which is provably a tight lower bound of the policy objective in online RL under certain conditions.
We also develop an efficient behavior policy to enhance sample efficiency by reducing the variance of the diffusion policy during online interactions.
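A minimal sketch of how a Q-weighted variational loss could look (assumed interfaces, not the QVPO implementation): the diffusion policy's denoising objective on self-generated actions is weighted by a nonnegative transform of their Q-values, so high-value actions contribute more to the update.

```python
# Sketch under assumptions: `diffusion_pi.sample` draws actions by reverse
# diffusion and `diffusion_pi.denoising_loss` returns a per-sample ELBO/denoising
# surrogate; neither name comes from the paper.
import torch

def q_weighted_variational_loss(diffusion_pi, critic, state, num_samples=8):
    s = state.repeat_interleave(num_samples, dim=0)
    with torch.no_grad():
        a = diffusion_pi.sample(s)
        w = critic(s, a).clamp(min=0.0)        # nonnegative Q weights
    return (w * diffusion_pi.denoising_loss(s, a)).mean()
```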
arXiv Detail & Related papers (2024-05-25T10:45:46Z)
- Projected Off-Policy Q-Learning (POP-QL) for Stabilizing Offline Reinforcement Learning [57.83919813698673]
Projected Off-Policy Q-Learning (POP-QL) is a novel actor-critic algorithm that simultaneously reweights off-policy samples and constrains the policy to prevent divergence and reduce value-approximation error.
In our experiments, POP-QL not only shows competitive performance on standard benchmarks, but also outperforms competing methods in tasks where the data-collection policy is significantly sub-optimal.
arXiv Detail & Related papers (2023-11-25T00:30:58Z)
- Score Regularized Policy Optimization through Diffusion Behavior [25.926641622408752]
Recent developments in offline reinforcement learning have uncovered the immense potential of diffusion modeling.
We propose to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models.
Our method boosts action sampling speed by more than 25 times compared with various leading diffusion-based methods in locomotion tasks.
arXiv Detail & Related papers (2023-10-11T08:31:26Z)
- Efficient Diffusion Policies for Offline Reinforcement Learning [85.73757789282212]
Diffusion-QL significantly boosts the performance of offline RL by representing the policy with a diffusion model, but sampling from it is computationally expensive.
We propose efficient diffusion policy (EDP) to overcome this challenge.
EDP constructs actions from corrupted ones during training to avoid running the sampling chain, as sketched below.
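The "construct actions from corrupted ones" step can be read as a one-step reconstruction via the standard DDPM identity a_t = sqrt(alpha_bar_t) * a_0 + sqrt(1 - alpha_bar_t) * eps; the function names below are illustrative, not the EDP code.

```python
# One-step action approximation: estimate the clean action from a corrupted
# action and the predicted noise, instead of running the full reverse sampling
# chain during training. `alpha_bar_t` is the cumulative product of the noise
# schedule at step t, as a tensor broadcastable with the action.
import torch

def approx_clean_action(noisy_action, pred_noise, alpha_bar_t):
    return (noisy_action - (1.0 - alpha_bar_t).sqrt() * pred_noise) / alpha_bar_t.sqrt()
```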
arXiv Detail & Related papers (2023-05-31T17:55:21Z)
- Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning [70.20191211010847]
Offline reinforcement learning (RL) aims to learn an optimal policy using a previously collected static dataset.
We introduce Diffusion Q-learning (Diffusion-QL) that utilizes a conditional diffusion model to represent the policy.
We show that our method can achieve state-of-the-art performance on the majority of the D4RL benchmark tasks.
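A compact sketch of the combined objective described above (assumed interfaces; the released Diffusion-QL code differs in detail): a conditional diffusion behavior-cloning loss plus a Q-maximization term on actions sampled from the policy.

```python
# Sketch only: `denoising_loss` and `sample` are assumed methods of a
# conditional diffusion policy; `sample` must be differentiable through the
# reverse chain for the Q-term to provide gradients.
import torch

def diffusion_ql_loss(diffusion_pi, critic, state, action, eta=1.0):
    bc = diffusion_pi.denoising_loss(state, action).mean()   # imitate the dataset
    new_a = diffusion_pi.sample(state)                        # iterative denoising
    q = critic(state, new_a)
    # Normalizing by the mean |Q| keeps the guidance scale comparable across tasks.
    return bc - eta * (q / q.abs().mean().detach()).mean()
```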
arXiv Detail & Related papers (2022-08-12T09:54:11Z)
- Conservative Q-Learning for Offline Reinforcement Learning [106.05582605650932]
We show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return.
We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees.
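The lower-bound property comes from a conservative penalty added to the usual Bellman error; the sketch below is simplified (the paper importance-weights samples from several proposal distributions, including the current policy).

```python
# Push down Q-values on broadly sampled actions and push up Q-values on dataset
# actions; adding this penalty to the TD loss yields CQL's conservative
# (lower-bounded) value estimates.
import torch

def cql_penalty(critic, state, data_action, num_rand=10, alpha=5.0):
    batch, act_dim = data_action.shape
    rand_a = torch.empty(batch * num_rand, act_dim,
                         device=state.device).uniform_(-1.0, 1.0)
    s_rep = state.repeat_interleave(num_rand, dim=0)
    q_rand = critic(s_rep, rand_a).view(batch, num_rand)
    push_down = torch.logsumexp(q_rand, dim=1).mean()   # soft maximum over actions
    push_up = critic(state, data_action).mean()         # dataset actions
    return alpha * (push_down - push_up)
```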
arXiv Detail & Related papers (2020-06-08T17:53:42Z)