Learning with Calibration: Exploring Test-Time Computing of Spatio-Temporal Forecasting
- URL: http://arxiv.org/abs/2506.00635v2
- Date: Wed, 29 Oct 2025 08:25:53 GMT
- Title: Learning with Calibration: Exploring Test-Time Computing of Spatio-Temporal Forecasting
- Authors: Wei Chen, Yuxuan Liang
- Abstract summary: We propose a novel test-time computing paradigm, namely learning with calibration (ST-TTC), for spatio-temporal forecasting. We aim to capture periodic structural biases arising from non-stationarity during the testing phase and perform real-time bias correction on predictions to improve accuracy.
- Score: 40.9030781267984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatio-temporal forecasting is crucial in many domains, such as transportation, meteorology, and energy. However, real-world scenarios frequently present challenges such as signal anomalies, noise, and distributional shifts. Existing solutions primarily enhance robustness by modifying network architectures or training procedures. Nevertheless, these approaches are computationally intensive and resource-demanding, especially for large-scale applications. In this paper, we explore a novel test-time computing paradigm, namely learning with calibration, ST-TTC, for spatio-temporal forecasting. Through learning with calibration, we aim to capture periodic structural biases arising from non-stationarity during the testing phase and perform real-time bias correction on predictions to improve accuracy. Specifically, we first introduce a spectral-domain calibrator with phase-amplitude modulation to mitigate periodic shift and then propose a flash updating mechanism with a streaming memory queue for efficient test-time computation. ST-TTC effectively bypasses complex training-stage techniques, offering an efficient and generalizable paradigm. Extensive experiments on real-world datasets demonstrate the effectiveness, universality, flexibility and efficiency of our proposed method.
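The paper's spectral-domain calibrator is not specified in the abstract beyond "phase-amplitude modulation"; as a minimal illustrative sketch (function and variable names are hypothetical, not the authors' implementation), correcting a periodic phase bias in the frequency domain might look like:

```python
import numpy as np

def spectral_calibrate(pred, amp_scale, phase_shift):
    """Modulate the amplitude and phase of a prediction in the
    frequency domain, then return to the time domain (sketch)."""
    spec = np.fft.rfft(pred)                            # to spectral domain
    spec = spec * amp_scale * np.exp(1j * phase_shift)  # phase-amplitude modulation
    return np.fft.irfft(spec, n=len(pred))              # back to time domain

# Toy example: a prediction whose fundamental frequency lags the target
# by a fixed phase; correcting that one bin removes the periodic bias.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
target = np.sin(t)
pred = np.sin(t + 0.3)            # phase-shifted (biased) prediction
phase = np.zeros(33)              # rfft of a length-64 signal has 33 bins
phase[1] = -0.3                   # undo the shift at the fundamental
calibrated = spectral_calibrate(pred, 1.0, phase)
```

In the paper the per-frequency amplitude and phase corrections would be learned at test time rather than set by hand as here.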
Related papers
- Learning from Complexity: Exploring Dynamic Sample Pruning of Spatio-Temporal Training [36.98769959300113]
Training deep learning models on massive, often redundant datasets presents a significant computational bottleneck. In this paper, we explore a novel training technique, namely learning from complexity with dynamic sample pruning. We show that ST-Prune significantly accelerates training while maintaining or even improving model performance.
arXiv Detail & Related papers (2026-02-22T10:11:04Z) - MEMTS: Internalizing Domain Knowledge via Parameterized Memory for Retrieval-Free Domain Adaptation of Time Series Foundation Models [51.506429027626005]
Memory for Time Series (MEMTS) is a lightweight, plug-and-play method for retrieval-free domain adaptation in time series forecasting. The key component of MEMTS is a Knowledge Persistence Module (KPM), which internalizes domain-specific temporal dynamics. This paradigm shift enables MEMTS to achieve accurate domain adaptation with constant-time inference and near-zero latency.
arXiv Detail & Related papers (2026-02-14T14:00:06Z) - A Comparative Study of Adaptation Strategies for Time Series Foundation Models in Anomaly Detection [0.0]
Time series foundation models (TSFMs) are pretrained on large heterogeneous data. We compare zero-shot inference, full model adaptation, and parameter-efficient fine-tuning strategies. These findings position TSFMs as promising general-purpose models for scalable and efficient time series anomaly detection.
arXiv Detail & Related papers (2026-01-01T19:11:33Z) - A Unified Frequency Domain Decomposition Framework for Interpretable and Robust Time Series Forecasting [81.73338008264115]
Current approaches for time series forecasting, whether in the time or frequency domain, predominantly use deep learning models based on linear layers or transformers. We propose FIRE, a unified frequency domain decomposition framework that provides a mathematical abstraction for diverse types of time series. FIRE consistently outperforms state-of-the-art models on long-term forecasting benchmarks.
arXiv Detail & Related papers (2025-10-11T09:59:25Z) - Adaptive Reinforcement Learning for Dynamic Configuration Allocation in Pre-Production Testing [4.370892281528124]
We introduce a novel reinforcement learning framework that recasts configuration allocation as a sequential decision-making problem. Our method is the first to integrate Q-learning with a hybrid reward design that fuses simulated outcomes and real-time feedback.
arXiv Detail & Related papers (2025-10-02T05:12:28Z) - Rethinking Irregular Time Series Forecasting: A Simple yet Effective Baseline [12.66709671516384]
We propose a general framework called APN to address these challenges. We design a novel Time-Aware Patch Aggregation (TAPA) module that achieves adaptive patching. We use a simple query module to effectively integrate historical information while maintaining the model's efficiency. Experimental results on multiple real-world datasets show that APN outperforms existing state-of-the-art methods in both efficiency and accuracy.
arXiv Detail & Related papers (2025-05-16T13:42:00Z) - STTS-EAD: Improving Spatio-Temporal Learning Based Time Series Prediction via [7.247017092359663]
We propose STTS-EAD, an end-to-end method that seamlessly integrates anomaly detection into the training process of time series forecasting. Our proposed STTS-EAD leverages spatio-temporal information for forecasting and anomaly detection, with the two parts alternately executed and optimized for each other. Our experiments show that our proposed method can effectively process anomalies detected in the training stage to improve forecasting performance in the inference stage and significantly outperform baselines.
arXiv Detail & Related papers (2025-01-14T03:26:05Z) - Neural Conformal Control for Time Series Forecasting [54.96087475179419]
We introduce a neural network conformal prediction method for time series that enhances adaptivity in non-stationary environments. Our approach acts as a neural controller designed to achieve desired target coverage, leveraging auxiliary multi-view data with neural network encoders. We empirically demonstrate significant improvements in coverage and probabilistic accuracy, and find that our method is the only one that combines good calibration with consistency in prediction intervals.
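The neural controller in this paper is more elaborate, but for intuition, the classic adaptive conformal inference update such methods build on can be sketched as follows (all names here are illustrative):

```python
def aci_update(alpha_t, target_alpha, miscovered, gamma=0.05):
    """One online update of the working miscoverage level:
    alpha_{t+1} = alpha_t + gamma * (target_alpha - err_t), where
    err_t is 1 if the last interval missed the true value, else 0.
    A miss lowers alpha (wider future intervals); a cover raises it."""
    return alpha_t + gamma * (target_alpha - (1.0 if miscovered else 0.0))

alpha = 0.1
alpha = aci_update(alpha, 0.1, miscovered=True)  # interval missed -> widen next one
```

The feedback sign is the key point: realized coverage errors steer the interval width toward the desired target coverage even under distribution shift.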
arXiv Detail & Related papers (2024-12-24T03:56:25Z) - SONNET: Enhancing Time Delay Estimation by Leveraging Simulated Audio [17.811771707446926]
We show that learning based methods can, even based on synthetic data, significantly outperform GCC-PHAT on novel real world data.
We provide our trained model, SONNET, which is runnable in real-time and works on novel data out of the box for many real data applications.
arXiv Detail & Related papers (2024-11-20T10:23:21Z) - Locally Adaptive One-Class Classifier Fusion with Dynamic $\ell$p-Norm Constraints for Robust Anomaly Detection [17.93058599783703]
We introduce a framework that dynamically adjusts fusion weights based on local data characteristics.
Our method incorporates an interior-point optimization technique that significantly improves computational efficiency.
The framework's ability to adapt to local data patterns while maintaining computational efficiency makes it particularly valuable for real-time applications.
arXiv Detail & Related papers (2024-11-10T09:57:13Z) - Asymptotic Analysis of Sample-averaged Q-learning [2.2374171443798034]
This paper introduces a framework for time-varying batch-averaged Q-learning, termed sample-averaged Q-learning (SA-QL). We leverage the functional central limit theorem for the sample-averaged algorithm under mild conditions and develop a random scaling method for interval estimation. This work establishes a unified theoretical foundation for sample-averaged Q-learning, providing insights into effective batch scheduling and statistical inference for RL algorithms.
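The core idea of averaging samples before a Q update can be sketched in tabular form (a simplified illustration with hypothetical names, not the paper's exact scheme):

```python
def sa_q_update(q, s, a, batch, lr=0.1, gamma=0.99):
    """Average the TD target over a batch of sampled transitions from the
    same state-action pair before applying a single Q-learning update.
    q: dict mapping state -> {action: value}; batch: list of (reward, next_state)."""
    targets = [r + gamma * max(q[s_next].values()) for r, s_next in batch]
    avg_target = sum(targets) / len(targets)
    q[s][a] += lr * (avg_target - q[s][a])   # one update on the averaged target
    return q

q = {"s0": {"a": 0.0}, "s1": {"a": 1.0}}
q = sa_q_update(q, "s0", "a", [(1.0, "s1"), (0.0, "s1")])
```

Averaging the target over a batch reduces the variance of each update, which is what makes the asymptotic analysis and interval estimation in the paper tractable.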
arXiv Detail & Related papers (2024-10-14T17:17:19Z) - IT$^3$: Idempotent Test-Time Training [95.78053599609044]
Deep learning models often struggle when deployed in real-world settings due to distribution shifts between training and test data. We present Idempotent Test-Time Training (IT$^3$), a novel approach that enables on-the-fly adaptation to distribution shifts using only the current test instance. Our results suggest that idempotence provides a universal principle for test-time adaptation that generalizes across domains and architectures.
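The idempotence principle can be made concrete with a small sketch: the quantity being driven to zero at test time is the discrepancy between applying the model once and twice (names here are illustrative; the paper's formulation involves the full training setup):

```python
import numpy as np

def idempotence_gap(f, x):
    """Measure how far f is from idempotent on input x: the discrepancy
    between applying the model once and applying it twice. Test-time
    adaptation minimizes this gap on the current instance alone."""
    y1 = f(x)
    y2 = f(y1)
    return float(np.mean((y2 - y1) ** 2))

identity = lambda x: x
print(idempotence_gap(identity, np.ones(4)))  # an idempotent map has zero gap
```

An in-distribution, well-trained model is approximately idempotent, so a large gap on a test instance signals distribution shift and supplies a self-supervised adaptation loss.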
arXiv Detail & Related papers (2024-10-05T15:39:51Z) - C-TPT: Calibrated Test-Time Prompt Tuning for Vision-Language Models via Text Feature Dispersion [54.81141583427542]
In deep learning, test-time adaptation has gained attention as a method for model fine-tuning without the need for labeled data.
This paper explores calibration during test-time prompt tuning by leveraging the inherent properties of CLIP.
We present a novel method, Calibrated Test-time Prompt Tuning (C-TPT), for optimizing prompts during test-time with enhanced calibration.
arXiv Detail & Related papers (2024-03-21T04:08:29Z) - QBSD: Quartile-Based Seasonality Decomposition for Cost-Effective RAN KPI Forecasting [0.18416014644193066]
We introduce QBSD, a live single-step forecasting approach tailored to optimize the trade-off between accuracy and computational complexity.
QBSD has shown significant success on our real-network RAN datasets covering several thousand cells.
Results demonstrate that the proposed method excels in runtime efficiency compared to the leading algorithms available.
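QBSD's exact rule is not given in the abstract; as a generic sketch of a quartile-based seasonal baseline of the kind it describes (hypothetical names, not necessarily QBSD's formulation):

```python
from statistics import median

def quartile_seasonal_forecast(history, season_len, t):
    """Forecast step t as the median (Q2) of all past observations that
    fall in the same seasonal slot. Quantile statistics are robust to the
    spikes common in KPI data and cost almost nothing to compute."""
    slot = t % season_len
    past = [v for i, v in enumerate(history) if i % season_len == slot]
    return median(past)

history = [10, 20, 30, 12, 22, 28, 11, 21, 29]  # three full periods of length 3
forecast = quartile_seasonal_forecast(history, 3, len(history))  # next slot-0 value
```

Such a baseline has no training stage at all, which is the source of the runtime advantage the entry reports.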
arXiv Detail & Related papers (2023-06-09T15:59:27Z) - DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
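One common remedy for the first defect (test-time BN statistics dominated by the current batch) is to blend stored training statistics with the batch statistics; a minimal sketch, with hypothetical names and not DELTA's exact formulation:

```python
import numpy as np

def blended_bn_stats(train_mean, train_var, batch, rho=0.9):
    """Blend stored training statistics with the current test batch's
    statistics instead of trusting the test batch alone, stabilizing
    normalization when the batch is small or class-skewed."""
    mean = rho * train_mean + (1 - rho) * batch.mean(axis=0)
    var = rho * train_var + (1 - rho) * batch.var(axis=0)
    return mean, var

train_mean, train_var = np.zeros(2), np.ones(2)
batch = np.array([[4.0, 4.0], [6.0, 6.0]])        # shifted test batch
mean, var = blended_bn_stats(train_mean, train_var, batch)
```

With `rho=1.0` this reduces to frozen training statistics and with `rho=0.0` to pure test-time BN, so the blend interpolates between the two regimes the entry contrasts.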
arXiv Detail & Related papers (2023-01-30T15:54:00Z) - Self-Adaptive Forecasting for Improved Deep Learning on Non-Stationary Time-Series [20.958959332978726]
SAF integrates a self-adaptation stage prior to forecasting, based on backcasting.
Our method enables efficient adaptation of encoded representations to evolving distributions, leading to superior generalization.
On synthetic and real-world datasets in domains where time-series data are known to be notoriously non-stationary, such as healthcare and finance, we demonstrate a significant benefit of SAF.
arXiv Detail & Related papers (2022-02-04T21:54:10Z) - Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that our latent adversarial perturbations, which adapt to the classifier throughout its training, are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z) - Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gain can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.