Enhancing Spatiotemporal Prediction Model using Modular Design and
Beyond
- URL: http://arxiv.org/abs/2210.01500v1
- Date: Tue, 4 Oct 2022 10:09:35 GMT
- Title: Enhancing Spatiotemporal Prediction Model using Modular Design and
Beyond
- Authors: Haoyu Pan, Hao Wu, Tan Yang
- Abstract summary: It is challenging to predict a spatiotemporal sequence because it varies in both time and space.
The mainstream method is to model spatial and temporal structures at the same time.
A modular design is proposed, which decomposes the spatiotemporal sequence model into two modules: a spatial encoder-decoder and a predictor.
- Score: 2.323220706791067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive learning uses a known state to generate a future state over a
period of time. Predicting a spatiotemporal sequence is challenging because the
sequence varies in both time and space. The mainstream method is to model
spatial and temporal structures at the same time using an RNN-based or
transformer-based architecture, and then to generate future data
auto-regressively from the learned representation. Learning spatial and
temporal features simultaneously introduces a large number of parameters,
which makes the model difficult to converge. In this paper, a modular design
is proposed that decomposes the spatiotemporal sequence model into two
modules: a spatial encoder-decoder and a predictor. These two modules extract
spatial features and predict future data, respectively. The spatial
encoder-decoder maps the data into a latent embedding space and generates
data from the latent space, while the predictor forecasts future embeddings
from past ones. By applying this design to current models and performing
experiments on the KTH-Action and MovingMNIST datasets, we both improve
computational performance and obtain state-of-the-art results.
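To make the modular design concrete, the sketch below shows one way the two modules could be wired together in PyTorch. It is a minimal illustration, not the authors' implementation: the class names (SpatialEncoderDecoder, LatentPredictor, predict_future), layer sizes, and the assumption of 64x64 single-channel frames (as in MovingMNIST) are all hypothetical choices made for the example.

```python
# Minimal sketch of the modular design (hypothetical names and sizes):
# a spatial encoder-decoder handles per-frame spatial features, while a
# separate predictor forecasts future latent embeddings from past ones.
import torch
import torch.nn as nn

class SpatialEncoderDecoder(nn.Module):
    """Maps 64x64 frames into a latent embedding space and reconstructs them."""
    def __init__(self, channels=1, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1),
        )

    def forward(self, frame):
        z = self.encoder(frame)
        return self.decoder(z), z

class LatentPredictor(nn.Module):
    """Forecasts the next latent embedding from the past embedding sequence."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq):            # z_seq: (B, T, latent_dim)
        out, _ = self.rnn(z_seq)
        return self.head(out[:, -1])     # next embedding: (B, latent_dim)

def predict_future(enc_dec, predictor, past_frames, horizon):
    """Auto-regressive rollout: encode past frames, predict future embeddings,
    then decode each predicted embedding back to a frame."""
    B, T = past_frames.shape[:2]
    z_seq = torch.stack(
        [enc_dec(past_frames[:, t])[1] for t in range(T)], dim=1)
    frames = []
    for _ in range(horizon):
        z_next = predictor(z_seq)
        frames.append(enc_dec.decoder(z_next))
        z_seq = torch.cat([z_seq, z_next.unsqueeze(1)], dim=1)
    return torch.stack(frames, dim=1)    # (B, horizon, C, 64, 64)
```

In such a split, the encoder-decoder can be trained on per-frame reconstruction while the predictor is trained only in the latent space, which is one plausible reading of how the modular design keeps the parameter count of the temporal model small.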
Related papers
- Spatial-Temporal Large Language Model for Traffic Prediction [21.69991612610926]
We propose a Spatial-Temporal Large Language Model (ST-LLM) for traffic prediction.
In the ST-LLM, we define timesteps at each location as tokens and design a spatial-temporal embedding to learn the spatial location and global temporal patterns of these tokens.
In experiments on real traffic datasets, ST-LLM is a powerful spatial-temporal learner that outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-01-18T17:03:59Z) - Generative Modeling with Phase Stochastic Bridges [49.4474628881673]
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z) - ARFA: An Asymmetric Receptive Field Autoencoder Model for Spatiotemporal
Prediction [55.30913411696375]
We propose an Asymmetric Receptive Field Autoencoder (ARFA) model, which introduces corresponding sizes of receptive field modules.
In the encoder, we present a large-kernel module for global spatiotemporal feature extraction. In the decoder, we develop a small-kernel module for local spatiotemporal reconstruction.
We construct the RainBench, a large-scale radar echo dataset for precipitation prediction, to address the scarcity of meteorological data in the domain.
arXiv Detail & Related papers (2023-09-01T07:55:53Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Deep Latent State Space Models for Time-Series Generation [68.45746489575032]
We propose LS4, a generative model for sequences with latent variables evolving according to a state space ODE.
Inspired by recent deep state space models (S4), we achieve speedups by leveraging a convolutional representation of LS4.
We show that LS4 significantly outperforms previous continuous-time generative models in terms of marginal distribution, classification, and prediction scores on real-world datasets.
arXiv Detail & Related papers (2022-12-24T15:17:42Z) - Discovering Dynamic Patterns from Spatiotemporal Data with Time-Varying
Low-Rank Autoregression [12.923271427789267]
We develop a time-varying reduced-rank vector autoregression model whose coefficients are parameterized by low-rank tensor factorization.
In the temporal context, the complex time-varying system behaviors can be revealed by the temporal modes in the proposed model.
arXiv Detail & Related papers (2022-11-28T15:59:52Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - GTrans: Spatiotemporal Autoregressive Transformer with Graph Embeddings
for Nowcasting Extreme Events [5.672898304129217]
This paper proposes a spatiotemporal model, namely GTrans, that transforms data features into graph embeddings and predicts temporal dynamics with a transformer model.
According to our experiments, we demonstrate that GTrans can model spatial and temporal dynamics and nowcast extreme events on the evaluated datasets.
arXiv Detail & Related papers (2022-01-18T03:26:24Z) - Simple Video Generation using Neural ODEs [9.303957136142293]
We learn latent variable models that predict the future in latent space and project back to pixels.
We show that our approach yields promising results in the task of future frame prediction on the Moving MNIST dataset with 1 and 2 digits.
arXiv Detail & Related papers (2021-09-07T19:03:33Z) - GraphTCN: Spatio-Temporal Interaction Modeling for Human Trajectory
Prediction [5.346782918364054]
We propose a novel CNN-based spatial-temporal graph framework GraphTCN to support more efficient and accurate trajectory predictions.
In contrast to conventional models, both the spatial and temporal modeling of our model are computed within each local time window.
Our model achieves better performance in terms of both efficiency and accuracy as compared with state-of-the-art models on various trajectory prediction benchmark datasets.
arXiv Detail & Related papers (2020-03-16T12:56:12Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.