Hierarchical Latent Structure for Multi-Modal Vehicle Trajectory
Forecasting
- URL: http://arxiv.org/abs/2207.04624v1
- Date: Mon, 11 Jul 2022 04:52:28 GMT
- Title: Hierarchical Latent Structure for Multi-Modal Vehicle Trajectory
Forecasting
- Authors: Dooseop Choi, KyoungWook Min
- Abstract summary: We introduce a hierarchical latent structure into a VAE-based trajectory forecasting model.
Our model is capable of generating clear multi-modal trajectory distributions and outperforms the state-of-the-art (SOTA) models in terms of prediction accuracy.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The variational autoencoder (VAE) has been widely used for modeling data
distributions because it is theoretically elegant, easy to train, and yields
well-behaved manifold representations. However, when applied to image reconstruction
and synthesis tasks, the VAE has a known limitation: generated samples tend to be
blurry. We observe that a similar problem, in which the generated trajectory is
located between adjacent lanes, often arises in VAE-based trajectory
forecasting models. To mitigate this problem, we introduce a hierarchical
latent structure into the VAE-based forecasting model. Based on the assumption
that the trajectory distribution can be approximated as a mixture of simple
distributions (or modes), the low-level latent variable is employed to model
each mode of the mixture and the high-level latent variable is employed to
represent the weights for the modes. To model each mode accurately, we
condition the low-level latent variable using two lane-level context vectors
computed in novel ways: one corresponds to vehicle-lane interaction and the
other to vehicle-vehicle interaction. The context vectors are also used to
model the weights via the proposed mode selection network. To evaluate our
forecasting model, we use two large-scale real-world datasets. Experimental
results show that our model is not only capable of generating clear multi-modal
trajectory distributions but also outperforms the state-of-the-art (SOTA)
models in terms of prediction accuracy. Our code is available at
https://github.com/d1024choi/HLSTrajForecast.
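To make the hierarchy concrete, here is a minimal PyTorch sketch of the decoding path the abstract describes: one low-level latent per mode, lane-level context vectors conditioning each mode's decoder, and a mode-selection network producing the mixture weights. All class and argument names, layer sizes, and shapes are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class HierarchicalTrajectoryDecoder(nn.Module):
    """Illustrative sketch (hypothetical names and sizes): a low-level latent
    models each mode of the trajectory mixture, and a mode-selection network
    turns lane-level context vectors into mixture weights."""

    def __init__(self, ctx_dim=64, z_dim=16, horizon=30, n_modes=6):
        super().__init__()
        # per-mode decoder: (low-level latent, lane contexts) -> future trajectory
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + 2 * ctx_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),  # (x, y) per future step
        )
        # mode-selection network: context vectors -> score per candidate mode
        self.mode_selector = nn.Sequential(
            nn.Linear(2 * ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, z_low, vl_ctx, vv_ctx):
        # z_low:  (B, n_modes, z_dim)   one low-level latent sample per mode
        # vl_ctx: (B, n_modes, ctx_dim) vehicle-lane interaction context
        # vv_ctx: (B, n_modes, ctx_dim) vehicle-vehicle interaction context
        ctx = torch.cat([vl_ctx, vv_ctx], dim=-1)
        trajs = self.decoder(torch.cat([z_low, ctx], dim=-1))
        weights = self.mode_selector(ctx).squeeze(-1).softmax(dim=-1)  # (B, n_modes)
        return trajs.view(*trajs.shape[:2], -1, 2), weights
```

Sampling several low-level latents per mode and weighting the decoded trajectories by the predicted mode weights yields the multi-modal output distribution described above.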
Related papers
- Flow Map Matching [15.520853806024943]
Flow map matching is an algorithm that learns the two-time flow map of an underlying ordinary differential equation.
We show that flow map matching leads to high-quality samples with significantly reduced sampling cost compared to diffusion or interpolant methods.
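As a rough sketch of the two-time flow map idea (an assumed objective for illustration, not the paper's exact loss): the learned map X(s, t, x) should reduce to the identity at t = s and move with the ODE velocity. Both flow_map and velocity below are assumed callables.

```python
import torch

def flow_map_matching_loss(flow_map, velocity, x0, s, t, eps=1e-3):
    # X(s, t, x) should satisfy: X(s, s, x) = x (identity at equal times)
    # and d/dt X(s, t, x) = v(t, X(s, t, x)) (moves with the ODE).
    # The time derivative is approximated by a central finite difference.
    x_t = flow_map(s, t, x0)
    dxdt = (flow_map(s, t + eps, x0) - flow_map(s, t - eps, x0)) / (2 * eps)
    ode_residual = (dxdt - velocity(t, x_t)).pow(2).mean()
    identity_residual = (flow_map(s, s, x0) - x0).pow(2).mean()
    return ode_residual + identity_residual
```

A map trained this way can jump from time s to time t in a single evaluation, which is where the reduced sampling cost comes from.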
arXiv Detail & Related papers (2024-06-11T17:41:26Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Predictive Modeling in the Reservoir Kernel Motif Space [0.9217021281095907]
This work proposes a time series prediction method based on the kernel view of linear reservoirs.
We provide a geometric interpretation of our approach, shedding light on how it relates to the core reservoir models.
Empirical experiments then compare the predictive performance of our suggested model with that of recent state-of-the-art transformer-based models.
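A minimal sketch of the kernel view of linear reservoirs that the summary refers to, under standard echo-state assumptions (fixed random recurrence matrix with spectral radius below one); the construction here is illustrative, not the paper's motif-space method.

```python
import numpy as np

def reservoir_states(x, W, w):
    # Linear reservoir: h_t = W h_{t-1} + w * x_t over a scalar series x.
    h = np.zeros(W.shape[0])
    for x_t in x:
        h = W @ h + w * x_t
    return h

def reservoir_kernel(x, y, W, w):
    # The inner product of final reservoir states defines a kernel between
    # time series, usable with any kernel-based predictor.
    return reservoir_states(x, W, w) @ reservoir_states(y, W, w)

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 50))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # enforce echo-state property
w = rng.normal(size=50)
k = reservoir_kernel(rng.normal(size=100), rng.normal(size=100), W, w)
```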
arXiv Detail & Related papers (2024-05-11T16:12:25Z)
- Trajeglish: Traffic Modeling as Next-Token Prediction [67.28197954427638]
A longstanding challenge for self-driving development is simulating dynamic driving scenarios seeded from recorded driving logs.
We apply tools from discrete sequence modeling to model how vehicles, pedestrians and cyclists interact in driving scenarios.
Our model tops the Sim Agents Benchmark, surpassing prior work along the realism meta metric by 3.3% and along the interaction metric by 9.9%.
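The core move is to cast motion as tokens. A toy sketch of that discrete-sequence view (a nearest-template tokenizer invented here for illustration, not Trajeglish's actual tokenizer):

```python
import numpy as np

def tokenize_trajectory(xy, vocab):
    # Map each per-step displacement to its nearest motion template,
    # yielding token ids that a standard next-token language model can
    # be trained on with cross-entropy.
    deltas = np.diff(xy, axis=0)                          # per-step (dx, dy)
    dists = ((deltas[:, None, :] - vocab[None]) ** 2).sum(-1)
    return dists.argmin(axis=1)

vocab = np.array([[dx, dy] for dx in (-1, 0, 1) for dy in (-1, 0, 1)], float)
tokens = tokenize_trajectory(np.cumsum(np.random.randn(20, 2), axis=0), vocab)
```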
arXiv Detail & Related papers (2023-12-07T18:53:27Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
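For reference, the classic mean-shift update the paper connects to (this is the textbook form, with a shrinking bandwidth standing in for the annealing; the precise correspondence is the paper's contribution):

```python
import numpy as np

def mean_shift_step(x, data, bandwidth):
    # Move x to the Gaussian-kernel-weighted mean of the data: one
    # mode-seeking step of the classic mean-shift algorithm.
    w = np.exp(-((x - data) ** 2).sum(-1) / (2 * bandwidth ** 2))
    return (w[:, None] * data).sum(0) / w.sum()

data = np.random.randn(500, 2)
x = np.array([3.0, 3.0])
for bw in (2.0, 1.0, 0.5):        # shrinking bandwidth ~ annealed schedule
    x = mean_shift_step(x, data, bw)
```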
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
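One common way to compute such geodesics, sketched here under assumed discretization choices (not necessarily VTAE's procedure): optimize the interior points of a latent path so that the decoded curve has minimal energy.

```python
import torch

def latent_geodesic(decoder, z0, z1, n=16, steps=200, lr=1e-2):
    # Discrete geodesic in the generator-induced geometry: minimize the
    # sum of squared jumps of the decoded path, keeping endpoints fixed.
    ts = torch.linspace(0, 1, n)[1:-1, None]
    path = ((1 - ts) * z0 + ts * z1).requires_grad_(True)  # straight-line init
    opt = torch.optim.Adam([path], lr=lr)
    for _ in range(steps):
        full = torch.cat([z0[None], path, z1[None]])
        decoded = decoder(full)
        energy = (decoded[1:] - decoded[:-1]).pow(2).sum()
        opt.zero_grad(); energy.backward(); opt.step()
    return torch.cat([z0[None], path.detach(), z1[None]])
```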
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
- Diverse Sampling for Normalizing Flow Based Trajectory Forecasting [34.01303881881315]
We show that this gradient-based approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
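The key trick is a gradient-informed proposal over which coordinate to flip. A sketch for binary vectors (core idea only; the full Gibbs-with-Gradients sampler adds a Metropolis-Hastings correction):

```python
import torch

def gwg_style_proposal(log_prob, x):
    # First-order estimate of how much flipping each bit changes log p(x):
    # for x in {0,1}^d, flipping bit i changes x_i by (1 - 2*x_i), so the
    # estimated change is (1 - 2*x_i) * grad_i. Sample a flip position
    # from a softmax over these estimates.
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_prob(x).sum(), x)[0]
    delta = (1 - 2 * x) * grad
    flip = torch.distributions.Categorical(logits=delta / 2).sample()
    x_new = x.detach().clone()
    x_new[flip] = 1 - x_new[flip]
    return x_new
```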
arXiv Detail & Related papers (2020-11-30T18:23:29Z)
- Haar Wavelet based Block Autoregressive Flows for Trajectories [129.37479472754083]
We propose Diversity Sampling for Flow (DSF) to improve the quality and diversity of trajectory samples from a pre-trained flow model.
DSF is easy to implement, and we show that it offers a simple plug-in improvement for several existing flow-based forecasting models.
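A sketch of what such a diversity-aware sampling objective can look like (an assumed form for illustration; DSF's exact objective may differ): score a set of latent codes by sample quality under the pre-trained flow plus a pairwise-repulsion term.

```python
import torch

def diversity_sampling_loss(flow_log_prob, decode, Z, lam=0.1):
    # Z: (K, z_dim) latent codes to optimize; decode maps them to
    # trajectories (K, T, 2); flow_log_prob scores trajectories.
    trajs = decode(Z)
    quality = flow_log_prob(trajs).mean()                # keep samples likely
    d = torch.cdist(trajs.flatten(1), trajs.flatten(1))  # pairwise distances
    off_diag = ~torch.eye(len(Z), dtype=torch.bool)
    diversity = d[off_diag].mean()                       # spread samples out
    return -(quality + lam * diversity)                  # minimize this
```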
arXiv Detail & Related papers (2020-09-21T13:57:10Z)
- Variational Mixture of Normalizing Flows [0.0]
Prediction of trajectories, such as those of pedestrians, is crucial to the performance of autonomous agents.
We introduce a novel Haar wavelet based block autoregressive model leveraging split couplings.
We illustrate the advantages of our approach for generating diverse and accurate trajectories on two real-world datasets.
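The Haar decomposition itself is simple; here is one level of it, as a sketch of the coarse-to-fine structure the block-autoregressive model exploits:

```python
import numpy as np

def haar_step(x):
    # One Haar wavelet level on an even-length trajectory: pairwise
    # averages give a half-resolution coarse trajectory, pairwise
    # differences give the details needed to invert the transform.
    even, odd = x[0::2], x[1::2]
    coarse = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return coarse, detail

xy = np.cumsum(np.random.randn(16, 2), axis=0)  # toy 16-step trajectory
coarse, detail = haar_step(xy)                  # 8 coarse steps + 8 details
```

Modeling the coarse sequence first and the details conditionally gives the block-autoregressive factorization.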
arXiv Detail & Related papers (2020-09-01T17:20:08Z)
- Probabilistic Optimal Transport based on Collective Graphical Models [38.49457447599772]
Deep generative models, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and their variants, have seen wide adoption for the task of modelling complex data distributions.
Normalizing flows overcome the intractable-likelihood limitation of such models by leveraging the change-of-variables formula for probability density functions.
The present work goes further by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
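For reference, the change-of-variables formula for an invertible flow f, together with the mixture construction the work builds (standard notation, not the paper's exact formulation):

```latex
% Exact density of x under a normalizing flow with base density p_Z
\log p_X(x) = \log p_Z\big(f(x)\big)
            + \log \left| \det \frac{\partial f(x)}{\partial x} \right|

% Mixture of K flows p_k with weights \pi_k, trained end-to-end
p(x) = \sum_{k=1}^{K} \pi_k \, p_k(x), \qquad \pi_k \ge 0, \quad \sum_k \pi_k = 1
```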
arXiv Detail & Related papers (2020-06-16T02:03:34Z)
- Probabilistic Optimal Transport based on Collective Graphical Models [38.49457447599772]
Optimal Transport (OT) is a powerful tool for measuring the similarity between probability distributions and histograms.
We propose a new framework in which OT is considered as a maximum a posteriori (MAP) solution of a probabilistic generative model.
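As a point of reference, the standard entropic-OT Sinkhorn solver between two histograms (the classic baseline; the paper's MAP/collective-graphical-model formulation is a different derivation):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    # Entropic optimal transport between histograms a and b with cost
    # matrix C: alternate scaling updates until the plan's marginals
    # match a and b.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

a = np.ones(4) / 4
b = np.ones(5) / 5
C = np.abs(np.subtract.outer(np.linspace(0, 1, 4), np.linspace(0, 1, 5)))
P = sinkhorn(a, b, C)
```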
arXiv Detail & Related papers (2020-06-16T02:03:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.