A Comparative Study on How Data Normalization Affects Zero-Shot Generalization in Time Series Foundation Models
- URL: http://arxiv.org/abs/2512.02833v1
- Date: Tue, 02 Dec 2025 14:39:19 GMT
- Title: A Comparative Study on How Data Normalization Affects Zero-Shot Generalization in Time Series Foundation Models
- Authors: Ihab Ahmed, Denis Krompaß, Cheng Feng, Volker Tresp
- Abstract summary: We investigate input normalization methods for Time-Series Foundation Models (TSFMs). Time-series data, unlike text or images, exhibits significant scale variation across domains and channels, coupled with non-stationarity. We empirically establish REVIN as the most efficient approach, reducing zero-shot MASE by 89% relative to an un-normalized baseline.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate input normalization methods for Time-Series Foundation Models (TSFMs). While normalization is well studied in dataset-specific time-series models, it remains overlooked in TSFMs, where generalization is critical. Time-series data, unlike text or images, exhibits significant scale variation across domains and channels, coupled with non-stationarity; together, these properties can undermine TSFM performance regardless of architectural complexity. Through systematic evaluation across four architecturally diverse TSFMs, we empirically establish REVIN as the most efficient approach, reducing zero-shot MASE by 89% relative to an un-normalized baseline and by 44% versus other normalization methods, while matching the best in-domain accuracy (0.84 MASE) without any dataset-level preprocessing -- yielding the best accuracy-efficiency trade-off. Yet its effectiveness depends on architectural design choices and the optimization objective, particularly with respect to training-loss scale sensitivity and model type (probabilistic, point-forecast, or LLM-based models).
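The abstract's two key ingredients can be illustrated with a minimal sketch: REVIN-style instance normalization standardizes each input window by its own statistics and reverses the transform on the forecasts, while MASE scales forecast error by the in-sample error of a naive baseline. This is a simplified illustration (the full method also learns affine parameters, omitted here); function names are assumptions, not from the paper's code.

```python
import numpy as np

def revin_normalize(x, eps=1e-5):
    """Instance-normalize each series by its own mean and std.

    x: array of shape (batch, time). Returns the normalized windows and
    the statistics needed to reverse the transform on the forecasts.
    """
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    x_norm = (x - mu) / (sigma + eps)
    return x_norm, (mu, sigma)

def revin_denormalize(y_norm, stats, eps=1e-5):
    """Map model outputs back to the original scale of each instance."""
    mu, sigma = stats
    return y_norm * (sigma + eps) + mu

def mase(y_true, y_pred, y_train, m=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the
    in-sample MAE of the seasonal-naive forecast (season length m)."""
    naive_mae = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_true - y_pred)) / naive_mae
```

Because the statistics are computed per instance rather than per dataset, no dataset-level preprocessing is required, which matches the zero-shot setting the abstract emphasizes.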
Related papers
- MEMTS: Internalizing Domain Knowledge via Parameterized Memory for Retrieval-Free Domain Adaptation of Time Series Foundation Models [51.506429027626005]
Memory for Time Series (MEMTS) is a lightweight and plug-and-play method for retrieval-free domain adaptation in time series forecasting. A key component of MEMTS is the Knowledge Persistence Module (KPM), which internalizes domain-specific temporal dynamics. This paradigm shift enables MEMTS to achieve accurate domain adaptation with constant-time inference and near-zero latency.
arXiv Detail & Related papers (2026-02-14T14:00:06Z) - It's TIME: Towards the Next Generation of Time Series Forecasting Benchmarks [87.7937890373758]
Time series foundation models (TSFMs) are revolutionizing the forecasting landscape from specific dataset modeling to generalizable task evaluation. We introduce TIME, a next-generation task-centric benchmark comprising 50 fresh datasets and 98 forecasting tasks. We propose a novel pattern-level evaluation perspective that moves beyond traditional dataset-level evaluations based on static meta labels.
arXiv Detail & Related papers (2026-02-12T16:31:01Z) - Revisiting the Seasonal Trend Decomposition for Enhanced Time Series Forecasting [12.606412750084813]
Building upon the decomposition of the time series, we enhance the architecture of machine learning models for better time series forecasting. We take a different approach with the seasonal component by directly applying backbone models without any normalization or scaling procedures. Our approach consistently yields positive results, with around a 10% average MSE reduction across four state-of-the-art baselines.
arXiv Detail & Related papers (2026-02-06T23:32:47Z) - A Comparative Study of Adaptation Strategies for Time Series Foundation Models in Anomaly Detection [0.0]
Time series foundation models (TSFMs) are pretrained on large heterogeneous data. We compare zero-shot inference, full model adaptation, and parameter-efficient fine-tuning strategies. These findings position TSFMs as promising general-purpose models for scalable and efficient time series anomaly detection.
arXiv Detail & Related papers (2026-01-01T19:11:33Z) - Time Series Foundation Models for Process Model Forecasting [8.339024524110828]
Process Model Forecasting aims to predict how the control-flow structure of a process evolves over time. Machine learning and deep learning models provide only modest gains over statistical baselines. We investigate Time Series Foundation Models (TSFMs) as an alternative for PMF.
arXiv Detail & Related papers (2025-12-08T15:08:50Z) - Estimating Time Series Foundation Model Transferability via In-Context Learning [74.65355820906355]
Time series foundation models (TSFMs) offer strong zero-shot forecasting via large-scale pre-training. Fine-tuning remains critical for boosting performance in domains with limited public data. We introduce TimeTic, a transferability estimation framework that recasts model selection as an in-context-learning problem.
arXiv Detail & Related papers (2025-09-28T07:07:13Z) - CALM: A Framework for Continuous, Adaptive, and LLM-Mediated Anomaly Detection in Time-Series Streams [0.42970700836450476]
This paper introduces CALM, a novel, end-to-end framework for real-time anomaly detection. CALM is built on the Apache Beam distributed processing framework. It implements a closed-loop, continuous fine-tuning mechanism that allows the anomaly detection model to adapt to evolving data patterns in near real-time.
arXiv Detail & Related papers (2025-08-29T00:27:35Z) - DONOD: Efficient and Generalizable Instruction Fine-Tuning for LLMs via Model-Intrinsic Dataset Pruning [22.704995231753397]
Ad-hoc instruction fine-tuning of large language models (LLMs) is widely adopted for domain-specific adaptation. We propose DONOD, a lightweight model-intrinsic data pruning method. By filtering out 70% of the whole dataset, we improve target-domain accuracy by 14.90% and cross-domain accuracy by 5.67%.
arXiv Detail & Related papers (2025-04-21T02:25:03Z) - Time-Series Foundation AI Model for Value-at-Risk Forecasting [9.090616417812306]
This study is the first to analyze the performance of a time-series foundation AI model for Value-at-Risk (VaR) forecasting. Foundation models, pre-trained on diverse datasets, can be applied in a zero-shot setting with minimal data. Fine-tuning significantly improves accuracy, showing that zero-shot use is not optimal for VaR.
arXiv Detail & Related papers (2024-10-15T16:53:44Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language Transfer Learning [59.38343286807997]
We propose Model-Agnostic Multitask Fine-tuning (MAMF) for vision-language models on unseen tasks.
Compared with model-agnostic meta-learning (MAML), MAMF discards the bi-level optimization and uses only first-order gradients.
We show that MAMF consistently outperforms the classical fine-tuning method for few-shot transfer learning on five benchmark datasets.
arXiv Detail & Related papers (2022-03-09T17:26:53Z) - The Effectiveness of Discretization in Forecasting: An Empirical Study on Neural Time Series Models [15.281725756608981]
We investigate the effect of data input and output transformations on the predictive performance of neural forecasting architectures.
We find that binning almost always improves performance compared to using normalized real-valued inputs.
arXiv Detail & Related papers (2020-05-20T15:09:28Z)
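The discretization finding in the last entry (binning inputs instead of feeding normalized real values) can be sketched with a minimal quantile-binning example; the function name and bin count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bin_series(x, n_bins=10):
    """Map real values to integer bin ids using quantile edges, so each
    bin holds roughly the same number of observations."""
    # Interior quantile edges; values below the first edge map to bin 0,
    # values at or above the last edge map to bin n_bins - 1.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)
```

A model then consumes the integer bin ids (e.g. via an embedding layer) rather than the raw magnitudes, which removes scale as a nuisance factor.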
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.