Diffusion Models for Reinforcement Learning: A Survey
- URL: http://arxiv.org/abs/2311.01223v4
- Date: Fri, 23 Feb 2024 14:42:57 GMT
- Title: Diffusion Models for Reinforcement Learning: A Survey
- Authors: Zhengbang Zhu, Hanye Zhao, Haoran He, Yichao Zhong, Shenyu Zhang,
Haoquan Guo, Tingting Chen, Weinan Zhang
- Abstract summary: Diffusion models surpass previous generative models in sample quality and training stability.
Recent works have shown the advantages of diffusion models in improving reinforcement learning (RL) solutions.
This survey aims to provide an overview of this emerging field and hopes to inspire new avenues of research.
- Score: 22.670096541841325
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Diffusion models surpass previous generative models in sample quality and
training stability. Recent works have shown the advantages of diffusion models
in improving reinforcement learning (RL) solutions. This survey aims to provide
an overview of this emerging field and hopes to inspire new avenues of
research. First, we examine several challenges encountered by RL algorithms.
Then, we present a taxonomy of existing methods based on the roles of diffusion
models in RL and explore how the preceding challenges are addressed. We further
outline successful applications of diffusion models in various RL-related
tasks. Finally, we conclude the survey and offer insights into future research
directions. We are actively maintaining a GitHub repository for papers and
other related resources in utilizing diffusion models in RL:
https://github.com/apexrl/Diff4RLSurvey.
Related papers
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
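The reward-weighted MLE approach mentioned above can be illustrated with a minimal sketch: samples with higher reward receive exponentially larger weight in the retraining loss. The softmax-style weighting, temperature parameter, and function names below are illustrative assumptions, not taken from the tutorial's own code.

```python
import math

def reward_weights(rewards, temperature=1.0):
    """Softmax-style weights exp(r / T), normalized to sum to 1."""
    m = max(rewards)  # subtract the max for numerical stability
    exps = [math.exp((r - m) / temperature) for r in rewards]
    z = sum(exps)
    return [e / z for e in exps]

def weighted_nll(nlls, weights):
    """Reward-weighted training objective: sum_i w_i * NLL(x_i)."""
    return sum(w * l for w, l in zip(weights, nlls))

# Hypothetical rewards for three generated samples: the second sample,
# with the highest reward, dominates the weighted loss.
rewards = [0.1, 2.0, -1.0]
weights = reward_weights(rewards)
```

Lowering the temperature concentrates the weights on the highest-reward samples, trading off exploration against reward exploitation.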
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Diffusion Models in Low-Level Vision: A Survey [82.77962165415153]
Diffusion model-based solutions have been widely acclaimed for their ability to produce samples of superior quality and diversity.
We present three generic diffusion modeling frameworks and explore their correlations with other deep generative models.
We summarize extended diffusion models applied in other tasks, including medical, remote sensing, and video scenarios.
arXiv Detail & Related papers (2024-06-17T01:49:27Z)
- Learning Diffusion Priors from Observations by Expectation Maximization [6.224769485481242]
We present a novel expectation-maximization algorithm for training diffusion models from incomplete and noisy observations only.
As part of our method, we propose and motivate a new posterior sampling scheme for unconditional diffusion models.
arXiv Detail & Related papers (2024-05-22T15:04:06Z)
- Theoretical research on generative diffusion models: an overview [0.0]
Generative diffusion models have achieved notable success in many fields, supported by a powerful theoretical background.
They gradually convert the data distribution to noise, then reverse the process, removing the noise step by step to recover a similar distribution.
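The forward (noising) half of this process can be sketched in a few lines. The linear beta schedule, the `q_sample` name, and the 1-D "data" point below are common illustrative conventions, not code from any of the surveyed papers.

```python
import math
import random

# Forward diffusion: q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I),
# using an assumed linear beta schedule from 1e-4 to 0.02 over T steps.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

alpha_bars = []          # abar_t = prod_{s<=t} (1 - beta_s)
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

def q_sample(x0, t, rng):
    """Draw x_t given x_0: scale the data down, mix in Gaussian noise."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
x0 = 3.0                            # a "data" point far from the noise mean
x_early = q_sample(x0, 10, rng)     # still close to the data
x_late = q_sample(x0, T - 1, rng)   # nearly pure standard noise
```

The reverse (generative) direction trains a network to undo these steps, which is the iterative sampling process the surveys above analyze.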
arXiv Detail & Related papers (2024-04-13T14:08:56Z)
- An Overview of Diffusion Models: Applications, Guided Generation, Statistical Rates and Optimization [59.63880337156392]
Diffusion models have achieved tremendous success in computer vision, audio, reinforcement learning, and computational biology.
Despite this significant empirical success, the theory of diffusion models remains limited.
This paper provides a well-rounded theoretical exposition to stimulate forward-looking theories and methods for diffusion models.
arXiv Detail & Related papers (2024-04-11T14:07:25Z)
- Diffusion-based Graph Generative Methods [51.04666253001781]
We systematically and comprehensively review diffusion-based graph generative methods.
We first review three mainstream paradigms of diffusion methods: denoising diffusion models, score-based generative models, and differential equations.
Finally, we point out limitations of current studies and directions for future exploration.
arXiv Detail & Related papers (2024-01-28T10:09:05Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Diffusion Models for Time Series Applications: A Survey [23.003273147019446]
Diffusion models are now widely used in image, video, and text synthesis.
We focus on diffusion-based methods for time series forecasting, imputation, and generation.
We summarize the common limitations of diffusion-based methods and highlight potential future research directions.
arXiv Detail & Related papers (2023-05-01T02:06:46Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.