A General ReLearner: Empowering Spatiotemporal Prediction by Re-learning Input-label Residual
- URL: http://arxiv.org/abs/2602.02563v1
- Date: Fri, 30 Jan 2026 17:26:57 GMT
- Title: A General ReLearner: Empowering Spatiotemporal Prediction by Re-learning Input-label Residual
- Authors: Jiaming Ma, Binwu Wang, Pengkun Wang, Xu Wang, Zhengyang Zhou, Yang Wang
- Abstract summary: ReLearner is a module that augments STNNs with a bidirectional learning capability via an auxiliary inverse learning process. We show that ReLearner significantly enhances the predictive performance of existing STNNs.
- Score: 29.391374009383355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prevailing spatiotemporal prediction models typically operate under a forward (unidirectional) learning paradigm, in which models extract spatiotemporal features from historical observation inputs and map them to the target spatiotemporal space for future forecasting (labels). However, these models frequently exhibit suboptimal performance when spatiotemporal discrepancies exist between inputs and labels, for instance, when nodes with similar time-series inputs manifest distinct future labels, or vice versa. To address this limitation, we propose explicitly incorporating label features during the training phase. Specifically, we introduce the Spatiotemporal Residual Theorem, which generalizes the conventional unidirectional spatiotemporal prediction paradigm into a bidirectional learning framework. Building upon this theoretical foundation, we design a universal module, termed ReLearner, which seamlessly augments Spatiotemporal Neural Networks (STNNs) with a bidirectional learning capability via an auxiliary inverse learning process. In this process, the model relearns the spatiotemporal feature residuals between input data and future data. The proposed ReLearner comprises two critical components: (1) a Residual Learning Module, designed to effectively disentangle spatiotemporal feature discrepancies between input and label representations; and (2) a Residual Smoothing Module, employed to smooth residual terms and facilitate stable convergence. Extensive experiments conducted on 11 real-world datasets across 14 backbone models demonstrate that ReLearner significantly enhances the predictive performance of existing STNNs. Our code is available on GitHub.
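The abstract describes ReLearner only at a high level. As a rough illustration of re-learning the input-label feature residual through an auxiliary inverse branch, the minimal PyTorch sketch below shows one possible shape of the idea; the module names, the average-pooling stand-in for residual smoothing, and the loss weight `alpha` are our own assumptions, not the paper's design.

```python
# Illustrative sketch only: an auxiliary branch that re-learns the
# input-label feature residual alongside a forward spatiotemporal backbone.
# Names (ResidualLearner, alpha, etc.) are ours, not the paper's.
import torch
import torch.nn as nn

class ResidualLearner(nn.Module):
    """Auxiliary inverse branch: encodes input and label windows and
    predicts the residual between their latent features."""
    def __init__(self, window: int, hidden: int = 64):
        super().__init__()
        self.enc_x = nn.Linear(window, hidden)      # encodes the history window per node
        self.enc_y = nn.Linear(window, hidden)      # encodes the future window per node (training only)
        self.res_head = nn.Linear(hidden, hidden)   # maps input features to a predicted residual
        self.smooth = nn.AvgPool1d(kernel_size=3, stride=1, padding=1)  # crude residual smoothing

    def forward(self, x, y=None):
        # x, y: (batch, num_nodes, window)
        h_x = self.enc_x(x)
        res_pred = self.res_head(h_x)                        # predicted input-label residual
        res_pred = self.smooth(res_pred.transpose(1, 2)).transpose(1, 2)
        if y is None:                                        # inference: labels unavailable
            return res_pred, None
        res_true = self.enc_y(y) - h_x                       # actual input-label feature residual
        aux_loss = torch.mean((res_pred - res_true) ** 2)    # re-learning objective
        return res_pred, aux_loss


# Usage with an arbitrary forward backbone (here a toy linear predictor):
if __name__ == "__main__":
    B, N, W = 8, 20, 12
    backbone = nn.Linear(W, W)                  # stand-in for any STNN
    relearner = ResidualLearner(W)
    x, y = torch.randn(B, N, W), torch.randn(B, N, W)

    y_hat = backbone(x)
    _, aux_loss = relearner(x, y)
    main_loss = torch.mean((y_hat - y) ** 2)
    alpha = 0.1                                 # auxiliary loss weight (assumed)
    loss = main_loss + alpha * aux_loss
    loss.backward()
```

How the predicted residual is fused back into the backbone at inference time is not specified in the abstract, so the sketch only shows the auxiliary training objective.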
Related papers
- How Different from the Past? Spatio-Temporal Time Series Forecasting with Self-Supervised Deviation Learning [15.102926671713668]
We propose ST-SSDL, a spatio-temporal time series forecasting framework. It discretizes the latent space using learnable prototypes that represent typical spatio-temporal patterns. Experiments show that ST-SSDL consistently outperforms state-of-the-art baselines across multiple metrics.
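As a toy illustration of the prototype idea in this summary (our own construction; ST-SSDL's actual prototype learning and deviation loss are not reproduced here), one can measure how far a latent representation deviates from its nearest learnable prototype:

```python
# Illustrative prototype-deviation sketch (hypothetical, not ST-SSDL's design).
import torch
import torch.nn as nn

class PrototypeDeviation(nn.Module):
    """Learnable prototypes that discretize the latent space; the distance of a
    latent vector to its nearest prototype serves as a deviation signal."""
    def __init__(self, num_prototypes: int = 8, dim: int = 32):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))

    def forward(self, z):
        # z: (batch, dim) latent representations of the current input window
        dists = torch.cdist(z, self.prototypes)       # (batch, num_prototypes)
        nearest = dists.min(dim=1).values             # deviation from the typical patterns
        return nearest.mean()                         # auxiliary self-supervised loss term

aux = PrototypeDeviation()(torch.randn(16, 32))
```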
arXiv Detail & Related papers (2025-10-06T15:21:13Z) - PredNext: Explicit Cross-View Temporal Prediction for Unsupervised Learning in Spiking Neural Networks [70.1286354746363]
Spiking Neural Networks (SNNs) offer a natural platform for unsupervised representation learning. Current unsupervised SNNs employ shallow architectures or localized plasticity rules, limiting their ability to model long-range temporal dependencies. We propose PredNext, which explicitly models temporal relationships through cross-view future Step Prediction and Clip Prediction.
arXiv Detail & Related papers (2025-09-29T14:27:58Z) - STRAP: Spatio-Temporal Pattern Retrieval for Out-of-Distribution Generalization [29.10084723132903]
We propose an innovative Spatio-Temporal Retrieval-Augmented Pattern Learning framework, STRAP. During inference, STRAP retrieves relevant patterns from a pre-built pattern library based on similarity to the current input and injects them into the model via a plug-and-play prompting mechanism. Experiments across multiple real-world streaming graph datasets show that STRAP consistently outperforms state-of-the-art STGNN baselines on STOOD tasks.
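As a loose illustration of the retrieval-and-prompting step summarized above, the snippet below retrieves the top-k most similar stored pattern embeddings by cosine similarity and concatenates them with the current input; the library construction, similarity measure, and injection mechanism are our simplifications, not STRAP's actual components.

```python
# Minimal retrieval-augmented prompting sketch (hypothetical simplification).
import numpy as np

def retrieve_patterns(library: np.ndarray, query: np.ndarray, k: int = 3) -> np.ndarray:
    """library: (num_patterns, dim), query: (dim,) -> (k, dim) most similar patterns."""
    lib_norm = library / (np.linalg.norm(library, axis=1, keepdims=True) + 1e-8)
    q_norm = query / (np.linalg.norm(query) + 1e-8)
    scores = lib_norm @ q_norm                     # cosine similarity to each stored pattern
    top_k = np.argsort(scores)[-k:][::-1]
    return library[top_k]

# Toy usage: retrieved patterns act as a plug-and-play "prompt"
# concatenated with the current input before it reaches the forecaster.
library = np.random.randn(100, 16)                # stored spatio-temporal pattern embeddings
query = np.random.randn(16)                       # embedding of the current input window
prompt = retrieve_patterns(library, query)
augmented_input = np.concatenate([prompt.reshape(-1), query])
```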
arXiv Detail & Related papers (2025-05-26T06:11:05Z) - CSTA: Spatial-Temporal Causal Adaptive Learning for Exemplar-Free Video Class-Incremental Learning [62.69917996026769]
A class-incremental learning task requires learning and preserving both spatial appearance and temporal action involvement. We propose a framework that equips separate adapters to learn new class patterns, accommodating the incremental information requirements unique to each class. A causal compensation mechanism is proposed to reduce conflicts between different types of information during incremental learning and memorization.
arXiv Detail & Related papers (2025-01-13T11:34:55Z) - ST-ReP: Learning Predictive Representations Efficiently for Spatial-Temporal Forecasting [7.637123047745445]
Self-supervised methods are increasingly adapted to learn spatial-temporal representations. Current value reconstruction and future value prediction are integrated into the pre-training framework. Multi-time scale analysis is incorporated into the self-supervised loss to enhance predictive capability.
arXiv Detail & Related papers (2024-12-19T05:33:55Z) - Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization [74.3339999119713]
We develop a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies. Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to forecast coefficients for the forecast horizon.
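The scale / decompose / threshold / quantize pipeline summarized above can be sketched roughly as follows; the wavelet family, threshold, and bin count are placeholder assumptions rather than the paper's settings.

```python
# Rough sketch of a wavelet-based tokenization pipeline (assumed parameters,
# not the paper's exact recipe).
import numpy as np
import pywt  # PyWavelets

def wavelet_tokenize(series: np.ndarray, wavelet: str = "db4", level: int = 3,
                     threshold: float = 0.1, n_bins: int = 256) -> np.ndarray:
    # 1) scale the input series
    scaled = (series - series.mean()) / (series.std() + 1e-8)
    # 2) multi-level wavelet decomposition into time-localized frequency coefficients
    coeffs = pywt.wavedec(scaled, wavelet, level=level)
    flat = np.concatenate(coeffs)
    # 3) threshold small coefficients to zero
    flat[np.abs(flat) < threshold] = 0.0
    # 4) quantize coefficients into discrete tokens for an autoregressive model
    edges = np.linspace(flat.min(), flat.max() + 1e-8, n_bins + 1)
    tokens = np.digitize(flat, edges[1:-1])        # integer token ids in [0, n_bins - 1]
    return tokens

tokens = wavelet_tokenize(np.sin(np.linspace(0, 20, 256)))
```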
arXiv Detail & Related papers (2024-12-06T18:22:59Z) - USTEP: Spatio-Temporal Predictive Learning under A Unified View [62.58464029270846]
We introduce USTEP (Unified Spatio-TEmporal Predictive learning), an innovative framework that reconciles recurrent-based and recurrent-free methods by integrating both micro-temporal and macro-temporal scales.
arXiv Detail & Related papers (2023-10-09T16:17:42Z) - CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most of the existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised ContrAstive Representation Learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures that, given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
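A toy version of attending over sparse observations to reconstruct missing points might look like the following; the paper's actual architecture is not reproduced here, and all names and dimensions are illustrative.

```python
# Toy cross-attention read-out over sparse observations (hypothetical simplification).
import torch
import torch.nn as nn

class SparseAttentionReadout(nn.Module):
    """Reconstruct a value at a query (time, space) location by attending
    over the embeddings of the available sparse observations."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, query_emb, obs_emb):
        # query_emb: (batch, num_queries, dim) embeddings of missing locations
        # obs_emb:   (batch, num_obs, dim) embeddings of observed points
        ctx, _ = self.attn(query_emb, obs_emb, obs_emb)
        return self.out(ctx)                        # reconstructed values at missing points

model = SparseAttentionReadout()
recon = model(torch.randn(2, 5, 32), torch.randn(2, 50, 32))
```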
arXiv Detail & Related papers (2022-05-26T16:40:48Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)