MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
- URL: http://arxiv.org/abs/2305.19923v1
- Date: Wed, 31 May 2023 15:01:38 GMT
- Title: MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
- Authors: Fei Ni, Jianye Hao, Yao Mu, Yifu Yuan, Yan Zheng, Bin Wang, Zhixuan
Liang
- Abstract summary: We propose a task-oriented conditioned diffusion planner for offline meta-RL (MetaDiffuser).
The proposed framework is robust to the quality of warm-start data collected from the test task.
Experimental results on MuJoCo benchmarks show that MetaDiffuser outperforms other strong offline meta-RL baselines.
- Score: 25.76141096396645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the diffusion model has emerged as a promising backbone for the sequence modeling paradigm in offline reinforcement learning (RL). However, these works mostly lack the ability to generalize across tasks with reward or dynamics changes. To tackle this challenge, in this paper we propose a task-oriented conditioned diffusion planner for offline meta-RL (MetaDiffuser), which treats the generalization problem as a conditional trajectory generation task with contextual representation. The key is to learn a context-conditioned diffusion model that can generate task-oriented trajectories for planning across diverse tasks. To enhance the dynamics consistency of the generated trajectories while encouraging them to achieve high returns, we further design a dual-guided module in the sampling process of the diffusion model. The proposed framework is robust to the quality of warm-start data collected from the test task and flexible enough to incorporate different task representation methods. Experimental results on MuJoCo benchmarks show that MetaDiffuser outperforms other strong offline meta-RL baselines, demonstrating the outstanding conditional generation ability of the diffusion architecture.
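The dual-guided sampling step can be illustrated with a short sketch. This is not the authors' implementation: `denoiser`, `return_model`, and `dynamics_error` below are hypothetical stand-ins for a context-conditioned denoising network, a return predictor, and a learned dynamics-consistency critic, and the two guidance models are assumed to return scalars.

```python
import torch

def dual_guided_sample(denoiser, return_model, dynamics_error, context,
                       horizon, dim, n_steps=50, w_return=1.0, w_dyn=1.0):
    """Reverse diffusion over a trajectory of shape (horizon, dim).
    Each denoising step is nudged toward high predicted return and toward
    low dynamics inconsistency, both conditioned on the task context."""
    traj = torch.randn(horizon, dim)                   # start from pure noise
    for t in reversed(range(n_steps)):
        traj = traj.detach().requires_grad_(True)
        # Dual guidance: maximize predicted return, minimize dynamics error.
        guidance = (w_return * return_model(traj, context)
                    - w_dyn * dynamics_error(traj, context))
        grad = torch.autograd.grad(guidance, traj)[0]
        with torch.no_grad():
            mean = denoiser(traj, t, context)          # context-conditioned denoising step
            noise = torch.randn_like(traj) if t > 0 else torch.zeros_like(traj)
            traj = mean + grad + 0.1 * noise           # guided perturbation of the mean
    return traj.detach()
```

In practice the guidance weights and the noise schedule would follow the diffusion model's training configuration; the sketch only shows how the two guidance gradients are combined at each reverse step.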
Related papers
- Off-dynamics Conditional Diffusion Planners [15.321049697197447]
This work explores the use of more readily available, albeit off-dynamics, datasets to address the challenge of data scarcity in offline RL.
We propose a novel approach using conditional Diffusion Probabilistic Models (DPMs) to learn the joint distribution of the large-scale off-dynamics dataset and the limited target dataset.
arXiv Detail & Related papers (2024-10-16T04:56:43Z)
- Meta-DT: Offline Meta-RL as Conditional Sequence Modeling with World Model Disentanglement [41.7426496795769]
We propose Meta Decision Transformer (Meta-DT) to achieve efficient generalization in offline meta-RL.
We pretrain a context-aware world model to learn a compact task representation, and inject it as a contextual condition to guide task-oriented sequence generation.
We show that Meta-DT exhibits superior few- and zero-shot generalization capacity compared to strong baselines (a rough sketch of the context-conditioning idea follows below).
arXiv Detail & Related papers (2024-10-15T09:51:30Z)
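As referenced above, here is a rough sketch of the context-conditioning idea behind such methods. It is my own illustration rather than Meta-DT's code, and all class and argument names are hypothetical: a short window of recent transitions is encoded into a compact task embedding, which then conditions next-state and reward prediction.

```python
import torch
import torch.nn as nn

class ContextWorldModel(nn.Module):
    """Encodes a window of (s, a, r, s') transitions into a task embedding,
    then predicts the next state and reward conditioned on that embedding."""
    def __init__(self, obs_dim, act_dim, ctx_dim=16, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(2 * obs_dim + act_dim + 1, ctx_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(obs_dim + act_dim + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim + 1))

    def forward(self, window, s, a):
        # window: (batch, T, 2*obs_dim + act_dim + 1) of recent transitions
        _, h = self.encoder(window)
        ctx = h.squeeze(0)                          # compact task representation
        pred = self.head(torch.cat([s, a, ctx], dim=-1))
        return ctx, pred[..., :-1], pred[..., -1:]  # context, next state, reward
```

The resulting embedding is what gets injected as the contextual condition for the downstream sequence generator (or, in MetaDiffuser's case, the conditional diffusion planner).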
- DIAR: Diffusion-model-guided Implicit Q-learning with Adaptive Revaluation [10.645244994430483]
We propose a novel offline reinforcement learning (offline RL) approach, introducing the Diffusion-model-guided Implicit Q-learning with Adaptive Revaluation (DIAR) framework.
We leverage diffusion models to learn state-action sequence distributions and incorporate value functions for more balanced and adaptive decision-making.
As demonstrated in tasks like Maze2D, AntMaze, and Kitchen, DIAR consistently outperforms state-of-the-art algorithms in long-horizon, sparse-reward environments.
arXiv Detail & Related papers (2024-10-15T07:09:56Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level (loosely illustrated by the sketch below).
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
arXiv Detail & Related papers (2024-02-09T07:45:26Z)
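The token-level idea can be loosely pictured with the generic entropy-regularized, per-token policy-gradient loss below. This is a simplified stand-in rather than ETPO's actual update (which builds on soft Bellman backups), and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def token_level_entropy_loss(logits, chosen_tokens, advantages, beta=0.01):
    """logits: (T, vocab) per-token policy logits; chosen_tokens: (T,) generated
    token ids; advantages: (T,) per-token credit. Minimizing the returned scalar
    raises the log-probability of advantageous tokens while an entropy bonus
    keeps the token-level policy from collapsing."""
    logp = F.log_softmax(logits, dim=-1)
    chosen_logp = logp.gather(-1, chosen_tokens.unsqueeze(-1)).squeeze(-1)
    entropy = -(logp.exp() * logp).sum(dim=-1)    # per-token policy entropy
    return -(advantages * chosen_logp + beta * entropy).mean()
```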
- Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption [73.98706049140098]
We propose a novel phasic content fusing few-shot diffusion model with directional distribution consistency loss.
Specifically, we design a phasic training strategy with phasic content fusion to help our model learn content and style information when the diffusion timestep t is large.
Finally, we propose a cross-domain structure guidance strategy that enhances structure consistency during domain adaptation.
arXiv Detail & Related papers (2023-09-07T14:14:11Z)
- Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Dynamic Channel Access via Meta-Reinforcement Learning [0.8223798883838329]
We propose a meta-DRL framework that incorporates the method of Model-Agnostic Meta-Learning (MAML).
We show that only a few gradient descent steps are required to adapt to different tasks drawn from the same distribution, as sketched below.
arXiv Detail & Related papers (2021-12-24T15:04:43Z)
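The adaptation step mentioned above can be sketched generically. This is plain MAML-style inner-loop adaptation rather than the paper's channel-access setup, and `task_loss_fn` is a hypothetical callable.

```python
import torch

def adapt(meta_params, task_loss_fn, steps=3, lr=0.1):
    """Specialize meta-learned parameters to a new task with a few gradient steps.
    meta_params: list of tensors (the meta-learned initialization).
    task_loss_fn: callable mapping a parameter list to a scalar loss on the task."""
    params = [p.clone().detach().requires_grad_(True) for p in meta_params]
    for _ in range(steps):
        loss = task_loss_fn(params)
        grads = torch.autograd.grad(loss, params)
        # Plain SGD inner update; the meta-training outer loop is omitted here.
        params = [(p - lr * g).detach().requires_grad_(True)
                  for p, g in zip(params, grads)]
    return params
```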