ARFA: An Asymmetric Receptive Field Autoencoder Model for Spatiotemporal
Prediction
- URL: http://arxiv.org/abs/2309.00314v2
- Date: Mon, 8 Jan 2024 14:57:36 GMT
- Title: ARFA: An Asymmetric Receptive Field Autoencoder Model for Spatiotemporal
Prediction
- Authors: Wenxuan Zhang, Xuechao Zou, Li Wu, Xiaoying Wang, Jianqiang Huang,
Junliang Xing
- Abstract summary: We propose an Asymmetric Receptive Field Autoencoder (ARFA) model, which introduces receptive field modules whose sizes are tailored to the distinct roles of the encoder and decoder.
In the encoder, we present a large kernel module for global spatiotemporal feature extraction. In the decoder, we develop a small kernel module for local spatiotemporal reconstruction.
We construct RainBench, a large-scale radar echo dataset for precipitation prediction, to address the scarcity of meteorological data in the domain.
- Score: 55.30913411696375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatiotemporal prediction aims to generate future sequences by paradigms
learned from historical contexts. It is essential in numerous domains, such as
traffic flow prediction and weather forecasting. Recently, research in this
field has been predominantly driven by deep neural networks based on
autoencoder architectures. However, existing methods commonly adopt autoencoder
architectures whose encoder and decoder share identical receptive field
sizes, despite their distinct roles. To address this limitation, we
propose an Asymmetric Receptive Field Autoencoder (ARFA) model, which
introduces receptive field modules whose sizes are tailored to the
distinct functionalities of the encoder and decoder. In the encoder, we present
a large kernel module for global spatiotemporal feature extraction. In the
decoder, we develop a small kernel module for local spatiotemporal information
reconstruction. Experimental results demonstrate that ARFA consistently
achieves state-of-the-art performance on popular datasets. Additionally, we
construct RainBench, a large-scale radar echo dataset for precipitation
prediction, to address the scarcity of meteorological data in the domain.
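The asymmetry between a large-kernel encoder and a small-kernel decoder can be illustrated with a toy 1D convolution. This is a minimal sketch of the receptive-field idea only: the kernel sizes, the uniform averaging kernels, and the function names below are illustrative assumptions, not ARFA's actual learned modules.

```python
# Toy illustration of asymmetric receptive fields: a wide (large-kernel)
# encoder pass followed by a narrow (small-kernel) decoder pass.
# Kernel sizes and averaging weights are hypothetical, not from the paper.

def conv1d(signal, kernel):
    """'Same'-padded 1D convolution using plain Python lists."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

def arfa_like(signal, large_k=7, small_k=3):
    # Encoder: large kernel -> aggregates global context per position.
    features = conv1d(signal, [1.0 / large_k] * large_k)
    # Decoder: small kernel -> reconstructs from local neighborhoods.
    return conv1d(features, [1.0 / small_k] * small_k)

# A unit impulse: the large encoder kernel spreads it across the whole
# sequence, while the small decoder kernel only mixes nearby positions.
out = arfa_like([0, 0, 0, 1, 0, 0, 0])
```

With a length-7 input and a 7-tap encoder kernel, every output position "sees" the impulse, which is the global behavior the encoder is meant to capture; the 3-tap decoder pass only attenuates the borders.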
Related papers
- AstroMAE: Redshift Prediction Using a Masked Autoencoder with a Novel Fine-Tuning Architecture [0.6906005491572401]
We introduce AstroMAE, an innovative approach that pretrains a vision transformer encoder using a masked autoencoder method.
This technique enables the encoder to capture the global patterns within the data without relying on labels.
We evaluate our model against various vision transformer architectures and CNN-based models.
arXiv Detail & Related papers (2024-09-03T12:12:37Z) - Generalizing Weather Forecast to Fine-grained Temporal Scales via Physics-AI Hybrid Modeling [55.13352174687475]
This paper proposes a physics-AI hybrid model (i.e., WeatherGFT) which Generalizes weather forecasts to Finer-grained Temporal scales.
Specifically, we employ a carefully designed PDE kernel to simulate physical evolution on a small time scale.
We introduce a lead time-aware training framework to promote the generalization of the model at different lead times.
arXiv Detail & Related papers (2024-05-22T16:21:02Z) - DIRESA, a distance-preserving nonlinear dimension reduction technique based on regularized autoencoders [0.0]
In meteorology, finding similar weather patterns or analogs in historical datasets can be useful for data assimilation, forecasting, and postprocessing.
In climate science, analogs in historical and climate projection data are used for attribution and impact studies.
We propose a dimension reduction technique based on autoencoder (AE) neural networks to compress those datasets and perform the search in an interpretable, compressed latent space.
arXiv Detail & Related papers (2024-04-28T20:54:57Z) - Enhancing Spatiotemporal Prediction Model using Modular Design and
Beyond [2.323220706791067]
It is challenging to predict sequences that vary in both time and space.
The mainstream approach is to model spatial and temporal structures at the same time.
A modular design is proposed, which decomposes the sequence model into two modules: a spatial encoder-decoder and a predictor.
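The modular decomposition above can be sketched as a hypothetical spatial encoder/decoder pair wrapped around a separate latent-space predictor. All function names and the linear-extrapolation predictor are illustrative stand-ins, not taken from the paper.

```python
# Sketch of the modular design: spatial encoding/decoding is separated
# from temporal prediction, which operates purely in latent space.
# Every component here is a toy placeholder for a learned module.

def encode(frame):
    """Spatial module: compress a frame to a latent code (here: its mean)."""
    return sum(frame) / len(frame)

def predict(latents):
    """Temporal module: extrapolate the latent trajectory one step."""
    return 2 * latents[-1] - latents[-2]  # linear extrapolation

def decode(latent, size):
    """Spatial module: expand a latent code back to a frame."""
    return [latent] * size

frames = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]
latents = [encode(f) for f in frames]              # spatial encoder
next_frame = decode(predict(latents), len(frames[0]))  # predictor + decoder
```

The point of the design is that the predictor never touches pixels: swapping in a different spatial encoder-decoder or a different temporal predictor requires no change to the other module.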
arXiv Detail & Related papers (2022-10-04T10:09:35Z) - Towards Generating Real-World Time Series Data [52.51620668470388]
We propose a novel generative framework for time series data generation - RTSGAN.
RTSGAN learns an encoder-decoder module which provides a mapping between a time series instance and a fixed-dimension latent vector.
To generate time series with missing values, we further equip RTSGAN with an observation embedding layer and a decide-and-generate decoder.
arXiv Detail & Related papers (2021-11-16T11:31:37Z) - Spatiotemporal Weather Data Predictions with Shortcut
Recurrent-Convolutional Networks: A Solution for the Weather4cast challenge [0.0]
This paper presents the neural network model that was used by the author in the Weather4cast 2021 Challenge Stage 1.
The objective was to predict the time evolution of satellite-based weather data images.
The network is based on an encoder-forecaster architecture making use of gated recurrent units (GRU), residual blocks and a contracting/expanding architecture with shortcuts similar to U-Net.
arXiv Detail & Related papers (2021-11-03T10:36:47Z) - Deep Autoregressive Models with Spectral Attention [74.08846528440024]
We propose a forecasting architecture that combines deep autoregressive models with a Spectral Attention (SA) module.
By characterizing in the spectral domain the embedding of the time series as occurrences of a random process, our method can identify global trends and seasonality patterns.
Two spectral attention models, global and local to the time series, integrate this information into the forecast and perform spectral filtering to remove noise from the time series.
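The frequency-domain idea behind that spectral filtering can be illustrated with a naive DFT. This toy simply keeps the largest-magnitude frequency components; the paper's SA module is a learned attention mechanism, not this fixed rule.

```python
import cmath

# Toy spectral filtering: transform to the frequency domain, keep the
# dominant components (global trend/seasonality), discard the rest (noise).

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def spectral_filter(x, keep=2):
    """Keep only the `keep` largest-magnitude frequency components."""
    X = dft(x)
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return idft([X[k] if k in kept else 0 for k in range(len(X))])

# A constant level of 5 plus small alternating "noise" of amplitude 0.1:
signal = [5 + (1 if t % 2 else -1) * 0.1 for t in range(8)]
smoothed = spectral_filter(signal, keep=1)  # keeps only the DC trend
```

Keeping only the dominant (DC) component recovers the constant trend and strips the high-frequency alternation, which is the intuition behind filtering noise in the spectral domain.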
arXiv Detail & Related papers (2021-07-13T11:08:47Z) - Learning Spatio-Temporal Transformer for Visual Tracking [108.11680070733598]
We present a new tracking architecture with an encoder-decoder transformer as the key component.
The whole method is end-to-end and does not need postprocessing steps such as cosine windowing or bounding-box smoothing.
The proposed tracker achieves state-of-the-art performance on five challenging short-term and long-term benchmarks while running at real-time speed, 6x faster than Siam R-CNN.
arXiv Detail & Related papers (2021-03-31T15:19:19Z) - Numerical Weather Forecasting using Convolutional-LSTM with Attention
and Context Matcher Mechanisms [10.759556555869798]
We introduce a novel deep learning architecture for forecasting high-resolution weather data.
Our Weather Model achieves significant performance improvements compared to baseline deep learning models.
arXiv Detail & Related papers (2021-02-01T08:30:42Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.