Leapfrog Diffusion Model for Stochastic Trajectory Prediction
- URL: http://arxiv.org/abs/2303.10895v1
- Date: Mon, 20 Mar 2023 06:32:48 GMT
- Title: Leapfrog Diffusion Model for Stochastic Trajectory Prediction
- Authors: Weibo Mao, Chenxin Xu, Qi Zhu, Siheng Chen, Yanfeng Wang
- Abstract summary: We present LEapfrog Diffusion model (LED), a novel diffusion-based trajectory prediction model.
LED provides real-time, precise, and diverse predictions.
LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE improvement on NFL.
- Score: 32.36667797656046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To model the indeterminacy of human behaviors, stochastic trajectory
prediction requires a sophisticated multi-modal distribution of future
trajectories. Emerging diffusion models have revealed their tremendous
representation capacities in numerous generation tasks, showing potential for
stochastic trajectory prediction. However, their heavy inference cost prevents
diffusion models from real-time prediction, since a large number of denoising
steps is required to ensure sufficient representation ability. To resolve this
dilemma, we present LEapfrog Diffusion model (LED), a novel diffusion-based
trajectory prediction model, which provides real-time, precise, and diverse
predictions. The core of the proposed LED is to leverage a trainable leapfrog
initializer to directly learn an expressive multi-modal distribution of future
trajectories, which skips a large number of denoising steps, significantly
accelerating inference speed. Moreover, the leapfrog initializer is trained to
appropriately allocate correlated samples to provide a diversity of predicted
future trajectories, significantly improving prediction performance. Extensive
experiments on four real-world datasets, including NBA/NFL/SDD/ETH-UCY, show
that LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE
improvement on NFL. The proposed LED also speeds up inference by
19.3x/30.8x/24.3x/25.1x compared to the standard diffusion model on
NBA/NFL/SDD/ETH-UCY, satisfying real-time inference needs. Code is available at
https://github.com/MediaBrain-SJTU/LED.
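The core idea of the abstract — replacing most of the reverse-diffusion chain with a learned initializer that jumps directly to a nearly denoised state — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `denoise_step` and `init_net` are hypothetical stand-ins for the trained denoising network and the leapfrog initializer, and the step counts are arbitrary.

```python
import numpy as np

def denoise_step(x, step):
    """One reverse-diffusion (denoising) step.

    Hypothetical stand-in for a trained network; here it simply
    shrinks the sample toward the data manifold.
    """
    return x * 0.9

def standard_diffusion(x_T, num_steps=100):
    """Classic reverse diffusion: denoise from pure noise over all steps."""
    x = x_T
    for t in range(num_steps):
        x = denoise_step(x, t)
    return x

def leapfrog_diffusion(history, init_net, num_skipped=95, num_steps=100):
    """Leapfrog variant: a trainable initializer predicts the state at an
    intermediate step directly, so only the last few denoising steps run."""
    x = init_net(history)  # learned estimate of the state at step num_skipped
    for t in range(num_skipped, num_steps):
        x = denoise_step(x, t)
    return x
```

With `num_skipped=95`, the leapfrog path executes only 5 denoising steps instead of 100, which is the source of the reported ~20-30x inference speedups; the initializer is trained so that its output matches the distribution the skipped steps would have produced.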
Related papers
- Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion [61.03681839276652]
Diffusion Forcing is a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels.
We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens.
arXiv Detail & Related papers (2024-07-01T15:43:25Z)
- ADM: Accelerated Diffusion Model via Estimated Priors for Robust Motion Prediction under Uncertainties [6.865435680843742]
We propose a novel diffusion-based, acceleratable framework that adeptly predicts future trajectories of agents with enhanced resistance to noise.
Our method meets the rigorous real-time operational standards essential for autonomous vehicles.
It achieves significant improvement in multi-agent motion prediction on the Argoverse 1 motion forecasting dataset.
arXiv Detail & Related papers (2024-05-01T18:16:55Z)
- MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process [26.661721555671626]
We introduce a novel Multi-Granularity Time Series (MG-TSD) model, which achieves state-of-the-art predictive performance.
Our approach does not rely on additional external data, making it versatile and applicable across various domains.
arXiv Detail & Related papers (2024-03-09T01:15:03Z)
- Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation [59.184980778643464]
Fine-tuning diffusion models remains an underexplored frontier in generative artificial intelligence (GenAI).
In this paper, we introduce an innovative technique called self-play fine-tuning for diffusion models (SPIN-Diffusion).
Our approach offers an alternative to conventional supervised fine-tuning and RL strategies, significantly improving both model performance and alignment.
arXiv Detail & Related papers (2024-02-15T18:59:18Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- GBD-TS: Goal-based Pedestrian Trajectory Prediction with Diffusion using Tree Sampling Algorithm [18.367711156885203]
We propose a novel scene-aware multi-modal pedestrian trajectory prediction framework called GBD-TS.
First, the goal predictor produces multiple goals, and then the diffusion network generates multi-modal trajectories conditioned on these goals.
arXiv Detail & Related papers (2023-11-25T03:55:06Z)
- Non-autoregressive Conditional Diffusion Models for Time Series Prediction [3.9722979176564763]
TimeDiff is a non-autoregressive diffusion model that achieves high-quality time series prediction.
We show that TimeDiff consistently outperforms existing time series diffusion models.
arXiv Detail & Related papers (2023-06-08T08:53:59Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over a short period, leaving a large gap between the capacity of deep models and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion [88.45326906116165]
We present a new framework that formulates the trajectory prediction task as a reverse process of motion indeterminacy diffusion (MID).
We encode the history behavior information and the social interactions as a state embedding and devise a Transformer-based diffusion model to capture the temporal dependencies of trajectories.
Experiments on the human trajectory prediction benchmarks including the Stanford Drone and ETH/UCY datasets demonstrate the superiority of our method.
arXiv Detail & Related papers (2022-03-25T16:59:08Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of a series' future values based on its history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.