Non-collective Calibrating Strategy for Time Series Forecasting
- URL: http://arxiv.org/abs/2506.03176v2
- Date: Wed, 02 Jul 2025 12:02:03 GMT
- Title: Non-collective Calibrating Strategy for Time Series Forecasting
- Authors: Bin Wang, Yongqi Han, Minbo Ma, Tianrui Li, Junbo Zhang, Feng Hong, Yanwei Yu
- Abstract summary: The complex dynamics of time series make it challenging to establish a rule of thumb for designing the golden model architecture. We argue that refining existing advanced models through a universal calibrating strategy can deliver substantial benefits with minimal resource costs. We propose an innovative calibrating strategy called Socket+Plug (SoP).
- Score: 21.202820492929224
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning-based approaches have demonstrated significant advancements in time series forecasting. Despite these ongoing developments, the complex dynamics of time series make it challenging to establish a rule of thumb for designing the golden model architecture. In this study, we argue that refining existing advanced models through a universal calibrating strategy can deliver substantial benefits with minimal resource costs, as opposed to elaborating and training a new model from scratch. We first identify a multi-target learning conflict in the calibrating process, which arises when optimizing variables across time steps and leads to the underutilization of the model's learning capabilities. To address this issue, we propose an innovative calibrating strategy called Socket+Plug (SoP). This approach retains an exclusive optimizer and early-stopping monitor for each predicted target within each Plug while keeping the fully trained Socket backbone frozen. The model-agnostic nature of SoP allows it to directly calibrate the performance of any trained deep forecasting model, regardless of its specific architecture. Extensive experiments on various time series benchmarks and a spatio-temporal meteorological ERA5 dataset demonstrate the effectiveness of SoP, achieving up to a 22% improvement even when employing a simple MLP as the Plug (highlighted in Figure 1). Code is available at https://github.com/hanyuki23/SoP.
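A minimal sketch of the Socket+Plug idea described in the abstract: the fully trained "Socket" backbone stays frozen, and each predicted target gets its own "Plug" with an exclusive optimizer state and early-stopping monitor. The linear Socket, affine Plug, learning rate, and patience below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, H = 256, 8, 3                 # samples, features, forecast horizon

W_socket = rng.normal(size=(d, H))  # frozen pretrained weights
def socket(x):                      # stand-in for any trained forecaster
    return x @ W_socket             # (n, d) -> (n, H); never updated

X = rng.normal(size=(n, d))
# Targets with a per-step scale/bias the frozen Socket alone cannot correct.
Y = 1.1 * socket(X) + 0.5 + 0.05 * rng.normal(size=(n, H))

base = socket(X)                    # frozen Socket outputs
plugs = []
for h in range(H):                  # one independent Plug per target step
    a, b = 1.0, 0.0                 # affine Plug: scale and bias
    best, bad, patience, lr = np.inf, 0, 3, 0.01
    for _ in range(2000):
        err = a * base[:, h] + b - Y[:, h]
        mse = float(np.mean(err ** 2))
        if mse < best - 1e-10:
            best, bad = mse, 0
        else:                       # this Plug's own early-stopping monitor
            bad += 1
            if bad >= patience:
                break
        a -= lr * np.mean(2 * err * base[:, h])  # exclusive optimizer step
        b -= lr * np.mean(2 * err)
    plugs.append((a, b))

calibrated = np.stack(
    [a * base[:, h] + b for h, (a, b) in enumerate(plugs)], axis=1)
```

Because each Plug keeps its own optimizer and stopping criterion, no forecast step's loss can dominate or cut short the calibration of another, which is the multi-target conflict the paper targets.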
Related papers
- Revisiting the Seasonal Trend Decomposition for Enhanced Time Series Forecasting [12.606412750084813]
Building upon the decomposition of the time series, we enhance the architecture of machine learning models for better time series forecasting. We take a different approach with the seasonal component by directly applying backbone models without any normalization or scaling procedures. Our approach consistently yields positive results, with around a 10% average MSE reduction across four state-of-the-art baselines.
arXiv Detail & Related papers (2026-02-06T23:32:47Z) - ARIES: Relation Assessment and Model Recommendation for Deep Time Series Forecasting [54.57031153712623]
ARIES is a framework for assessing the relations between time series properties and modeling strategies. We propose the first deep forecasting model recommender, capable of providing interpretable suggestions for real-world time series.
arXiv Detail & Related papers (2025-09-07T13:57:14Z) - PTMs-TSCIL Pre-Trained Models Based Class-Incremental Learning [7.784244204592032]
Class-incremental learning (CIL) for time series data faces challenges in balancing stability (to avoid catastrophic forgetting) with plasticity (to acquire new knowledge). We present the first exploration of PTM-based Time Series Class-Incremental Learning (TSCIL).
arXiv Detail & Related papers (2025-03-10T10:27:21Z) - Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization [74.3339999119713]
We develop a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies. Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to forecast coefficients for the forecast horizon.
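The scale/decompose/threshold/quantize pipeline in that summary can be sketched as follows. For self-containment this uses a hand-rolled single-level Haar transform and an arbitrary bin count; the paper's actual wavelet family, levels, and quantizer are not specified here and these choices are assumptions.

```python
import numpy as np

def haar_dwt(x):
    # One-level Haar transform: approximation and detail coefficients.
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def tokenize(series, threshold=0.1, n_bins=16):
    s = (series - series.mean()) / (series.std() + 1e-8)  # scale
    approx, detail = haar_dwt(s)                          # decompose
    coeffs = np.concatenate([approx, detail])
    coeffs[np.abs(coeffs) < threshold] = 0.0              # threshold
    edges = np.linspace(coeffs.min(), coeffs.max(), n_bins + 1)
    tokens = np.clip(np.digitize(coeffs, edges) - 1, 0, n_bins - 1)
    return tokens                                         # integer token ids

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * rng.normal(size=64)
tokens = tokenize(series)
```

The resulting integer ids form the discrete vocabulary over which an autoregressive model could then be pre-trained.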
arXiv Detail & Related papers (2024-12-06T18:22:59Z) - FlowScope: Enhancing Decision Making by Time Series Forecasting based on Prediction Optimization using HybridFlow Forecast Framework [0.0]
Time series forecasting is crucial in several sectors, such as meteorology, retail, healthcare, and finance.
We propose FlowScope, which offers a versatile and robust platform for predicting time series data.
This empowers enterprises to make informed decisions and optimize long-term strategies for maximum performance.
arXiv Detail & Related papers (2024-11-16T06:25:30Z) - Learning Pattern-Specific Experts for Time Series Forecasting Under Patch-level Distribution Shift [51.01356105618118]
Time series often exhibit complex, non-uniform distributions with varying patterns across segments, reflecting, for example, season, operating condition, or semantic meaning. Existing approaches, which typically train a single model to capture all of these diverse patterns, often struggle with the pattern drifts between patches. We propose TFPS, a novel architecture that leverages pattern-specific experts for more accurate and adaptable time series forecasting.
arXiv Detail & Related papers (2024-10-13T13:35:29Z) - AALF: Almost Always Linear Forecasting [3.336367986372977]
We argue that simple models are good enough most of the time, and that forecasting performance can be improved by choosing a deep learning method for only a few important predictions. An empirical study on various real-world datasets shows that our selection methodology performs comparably to state-of-the-art online model selection methods in most cases. We find that almost always choosing a simple autoregressive linear model for forecasting results in competitive performance.
arXiv Detail & Related papers (2024-09-16T10:13:09Z) - ReAugment: Model Zoo-Guided RL for Few-Shot Time Series Augmentation and Forecasting [74.00765474305288]
We present a pilot study on using reinforcement learning (RL) for time series data augmentation. Our method, ReAugment, tackles three critical questions: which parts of the training set should be augmented, how the augmentation should be performed, and what advantages RL brings to the process.
arXiv Detail & Related papers (2024-09-10T07:34:19Z) - RPMixer: Shaking Up Time Series Forecasting with Random Projections for Large Spatial-Temporal Data [33.0546525587517]
We propose an all-Multi-Layer Perceptron (all-MLP) time series forecasting architecture called RPMixer.
Our method capitalizes on the ensemble-like behavior of deep neural networks, where each individual block behaves like a base learner in an ensemble model.
arXiv Detail & Related papers (2024-02-16T07:28:59Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - Embedded feature selection in LSTM networks with multi-objective evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z) - TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting [24.834846119163885]
We propose a novel framework, TEMPO, that can effectively learn time series representations.
TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains.
arXiv Detail & Related papers (2023-10-08T00:02:25Z) - TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently-introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectories, human motion, driving scenes, traffic flow, and weather forecasting.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Meta-Forecasting by combining Global Deep Representations with Local Adaptation [12.747008878068314]
We introduce a novel forecasting method called Meta Global-Local Auto-Regression (Meta-GLAR).
It adapts to each time series by learning in closed-form the mapping from the representations produced by a recurrent neural network (RNN) to one-step-ahead forecasts.
Our method is competitive with the state-of-the-art in out-of-sample forecasting accuracy reported in earlier work.
arXiv Detail & Related papers (2021-11-05T11:45:02Z) - Learning representations with end-to-end models for improved remaining useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory (LSTM) layers to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z) - Boosted Embeddings for Time Series Forecasting [0.6042845803090501]
We propose a novel time series forecasting model, DeepGB.
We formulate and implement a variant of gradient boosting wherein the weak learners are DNNs whose weights are incrementally found in a greedy manner over iterations.
We demonstrate that our model outperforms existing comparable state-of-the-art models on real-world sensor data and a public dataset.
arXiv Detail & Related papers (2021-04-10T14:38:11Z)
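The DeepGB entry above describes gradient boosting whose weak learners are greedily fitted networks. A hedged toy sketch of that idea: each boosting stage fits the current residuals with a small random-feature network (a least-squares readout over fixed random hidden units), used here purely as an illustrative stand-in for the paper's incrementally trained DNNs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]            # toy regression target

def fit_weak_learner(X, residual, width=32):
    # Small network: fixed random tanh hidden layer, greedy linear readout.
    W = rng.normal(size=(X.shape[1], width))
    H = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return lambda Z: np.tanh(Z @ W) @ beta

pred = np.zeros_like(y)
shrinkage = 0.5
for _ in range(10):                            # boosting stages
    f = fit_weak_learner(X, y - pred)          # fit what is still unexplained
    pred += shrinkage * f(X)                   # additive, stage-wise update
```

As in standard gradient boosting with squared loss, each stage regresses on the residuals of the running ensemble; only the weak-learner family differs.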
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.