One-Embedding-Fits-All: Efficient Zero-Shot Time Series Forecasting by a Model Zoo
- URL: http://arxiv.org/abs/2509.04208v2
- Date: Thu, 25 Sep 2025 13:20:23 GMT
- Title: One-Embedding-Fits-All: Efficient Zero-Shot Time Series Forecasting by a Model Zoo
- Authors: Hao-Nan Shi, Ting-Ji Huang, Lu Han, De-Chuan Zhan, Han-Jia Ye
- Abstract summary: Time Series Foundation Models (TSFMs) have significantly advanced zero-shot forecasting. No single TSFM excels universally, as different models exhibit preferences for distinct temporal patterns. We propose ZooCast, which characterizes each model's distinct forecasting strengths.
- Score: 82.65837388129746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of Time Series Foundation Models (TSFMs) has significantly advanced zero-shot forecasting, enabling predictions for unseen time series without task-specific fine-tuning. Extensive research has confirmed that no single TSFM excels universally, as different models exhibit preferences for distinct temporal patterns. This diversity raises a natural question: how can the complementary abilities of TSFMs be exploited? To this end, we propose ZooCast, which characterizes each model's distinct forecasting strengths and intelligently assembles current TSFMs into a model zoo that dynamically selects optimal models for different forecasting tasks. Our key innovation lies in the One-Embedding-Fits-All paradigm, which constructs a unified representation space where each model in the zoo is represented by a single embedding, enabling efficient similarity matching for all tasks. Experiments demonstrate ZooCast's strong performance on the GIFT-Eval zero-shot forecasting benchmark while maintaining the efficiency of a single TSFM. In real-world scenarios with sequential model releases, the framework seamlessly incorporates new models for progressive accuracy gains with negligible overhead.
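As a toy illustration of the One-Embedding-Fits-All idea (one embedding per zoo member, a query embedding per task, nearest-neighbor dispatch), here is a minimal sketch. The FFT-based `embed_series`, the cosine-similarity rule, and the stand-in models are all assumptions for illustration, not ZooCast's actual components.

```python
import numpy as np

def embed_series(x: np.ndarray, dim: int = 8) -> np.ndarray:
    """Toy task embedding: leading FFT magnitudes, L2-normalized.
    (A placeholder for whatever encoder ZooCast actually uses.)"""
    spec = np.abs(np.fft.rfft(x - x.mean()))[:dim]
    spec = np.pad(spec, (0, max(0, dim - spec.size)))
    return spec / (np.linalg.norm(spec) + 1e-12)

class ModelZoo:
    """One embedding per model; selection is cosine similarity to the query."""
    def __init__(self):
        self.names, self.embs, self.models = [], [], []

    def add(self, name, model, strength_emb):
        # Registering a model later appends a single vector: negligible overhead.
        self.names.append(name)
        self.models.append(model)
        self.embs.append(strength_emb / (np.linalg.norm(strength_emb) + 1e-12))

    def select(self, series):
        q = embed_series(series)
        i = int(np.argmax(np.stack(self.embs) @ q))
        return self.names[i], self.models[i]

# Usage: two toy "TSFMs", one suited to trends and one to seasonality.
zoo = ModelZoo()
zoo.add("trendy", lambda x, h: np.full(h, x[-1] + (x[-1] - x[0]) / len(x)),
        embed_series(np.arange(64.0)))
zoo.add("seasonal", lambda x, h: np.tile(x[-4:], h)[:h],
        embed_series(np.sin(np.linspace(0, 8 * np.pi, 64))))
query = np.sin(np.linspace(0, 8 * np.pi, 64)) \
    + 0.1 * np.random.default_rng(1).standard_normal(64)
name, model = zoo.select(query)
print(name, model(query, 4))  # expected: "seasonal" plus a 4-step forecast
```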
Related papers
- Synapse: Adaptive Arbitration of Complementary Expertise in Time Series Foundational Models [50.877082340479085]
We study how different Time Series Foundational Models (TSFMs) exhibit specialized performance profiles across various forecasting settings. We propose Synapse, a novel arbitration framework for TSFMs. Results demonstrate that Synapse consistently outperforms other popular ensembling techniques as well as individual TSFMs.
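The abstract does not describe Synapse's mechanism, so the following is only a generic sketch of arbitration: each TSFM's forecast is weighted by a softmax over its recent validation error. The function names and the weighting rule are assumptions.

```python
import numpy as np

def arbitrate(forecasts: dict, recent_errors: dict, temperature: float = 1.0):
    """Combine expert forecasts with softmax weights over negative recent error.
    forecasts: name -> np.ndarray of shape (horizon,)
    recent_errors: name -> float (e.g., MAE on a held-out window)"""
    names = list(forecasts)
    errs = np.array([recent_errors[n] for n in names])
    w = np.exp(-errs / temperature)
    w /= w.sum()
    combined = sum(wi * forecasts[n] for wi, n in zip(w, names))
    return combined, dict(zip(names, w))

preds = {"tsfm_a": np.array([1.0, 1.1, 1.2]), "tsfm_b": np.array([0.8, 0.9, 1.0])}
yhat, weights = arbitrate(preds, {"tsfm_a": 0.2, "tsfm_b": 0.6})
print(yhat, weights)  # the lower-error model gets the larger weight
```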
arXiv Detail & Related papers (2025-11-07T18:01:51Z)
- UniCast: A Unified Multimodal Prompting Framework for Time Series Forecasting [9.836278124939453]
Time series forecasting is a foundational task across domains, such as finance, healthcare, and environmental monitoring. Existing models operate predominantly in a unimodal setting, ignoring the rich multimodal context, such as visual and textual signals, that often accompanies time series data in real-world scenarios. This paper introduces a novel parameter-efficient multimodal framework, UniCast, that extends TSFMs to jointly leverage time series, vision, and text modalities for enhanced forecasting performance.
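A hedged sketch of what parameter-efficient multimodal prompting can look like (not UniCast's actual design): frozen vision and text encoders produce soft prompt tokens that are prepended to the time-series tokens before a frozen TSFM backbone, so only small projections are trained.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # shared model dimension (assumption)

# Frozen "encoders" standing in for real vision/text towers.
def encode_image(img):  return rng.standard_normal(D)
def encode_text(txt):   return rng.standard_normal(D)

# The only trainable parameters: small projections producing soft prompts.
W_img = rng.standard_normal((D, D)) * 0.01
W_txt = rng.standard_normal((D, D)) * 0.01

def build_input(ts_tokens, img, txt):
    """Prepend one visual and one textual soft-prompt token to the series tokens."""
    p_img = encode_image(img) @ W_img
    p_txt = encode_text(txt) @ W_txt
    return np.vstack([p_img, p_txt, ts_tokens])  # (2 + T, D)

tokens = rng.standard_normal((32, D))  # patched time-series tokens
x = build_input(tokens, img="chart.png", txt="holiday demand spike")
print(x.shape)  # (34, 16): prompts plus series tokens enter the frozen backbone
```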
arXiv Detail & Related papers (2025-08-16T07:33:27Z)
- Multi-Scale Finetuning for Encoder-based Time Series Foundation Models [56.503053716053]
Time series foundation models (TSFMs) demonstrate impressive zero-shot performance for time series forecasting. We argue that conventional finetuning falls short of fully leveraging TSFMs' capabilities, often resulting in overfitting and suboptimal performance. We propose multi-scale finetuning (MSFT), a simple yet general framework that explicitly integrates multi-scale modeling into the finetuning process.
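One plausible reading of multi-scale finetuning, sketched under assumptions (the paper's concrete MSFT objective may differ): downsample the series to several temporal scales and average the forecasting loss across scales.

```python
import numpy as np

def downsample(x, factor):
    """Average-pool a 1-D series by `factor` (truncating the remainder)."""
    n = len(x) // factor
    return x[: n * factor].reshape(n, factor).mean(axis=1)

def multiscale_loss(model, x, y, scales=(1, 2, 4)):
    """Mean of squared forecasting errors across temporal scales."""
    total = 0.0
    for s in scales:
        xs, ys = downsample(x, s), downsample(y, s)
        pred = model(xs, len(ys))
        total += np.mean((pred - ys) ** 2)
    return total / len(scales)

naive = lambda hist, h: np.full(h, hist[-1])  # stand-in for a finetuned TSFM
x = np.sin(np.linspace(0, 4 * np.pi, 64))
y = np.sin(np.linspace(4 * np.pi, 5 * np.pi, 16))
print(multiscale_loss(naive, x, y))
```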
arXiv Detail & Related papers (2025-06-17T01:06:01Z)
- Less is More: Unlocking Specialization of Time Series Foundation Models via Structured Pruning [27.23328609888911]
Time Series Foundation Models pre-train vast numbers of parameters and achieve remarkable zero-shot forecasting performance. Surprisingly, even after fine-tuning, TSFMs cannot consistently outperform smaller, specialized models trained on full-shot downstream data. We propose a structured pruning method that regularizes the subsequent fine-tuning process by focusing it on a more relevant and compact parameter space.
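As an illustration of structured pruning (the paper's actual pruning criterion is not given in this summary): remove whole hidden units of a layer by smallest importance score, leaving a compact sub-network to finetune.

```python
import numpy as np

def prune_hidden_units(W1, b1, W2, keep_ratio=0.5):
    """Structured pruning of an MLP block y = W2 @ relu(W1 @ x + b1).
    Drops entire hidden units with the smallest combined L2 norm."""
    scores = np.linalg.norm(W1, axis=1) * np.linalg.norm(W2, axis=0)
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.sort(np.argsort(scores)[-k:])
    return W1[keep], b1[keep], W2[:, keep]

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 16))
b1 = np.zeros(64)
W2 = rng.standard_normal((8, 64))
W1p, b1p, W2p = prune_hidden_units(W1, b1, W2, keep_ratio=0.25)
print(W1p.shape, W2p.shape)  # (16, 16) (8, 16): 4x fewer hidden units to finetune
```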
arXiv Detail & Related papers (2025-05-29T07:33:49Z)
- Breaking Silos: Adaptive Model Fusion Unlocks Better Time Series Forecasting [64.45587649141842]
Time-series forecasting plays a critical role in many real-world applications. No single model consistently outperforms others across different test samples; instead, each model excels in specific cases. We introduce TimeFuse, a framework for collective time-series forecasting with sample-level adaptive fusion of heterogeneous models.
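A toy version of sample-level fusion, with all names and the gating rule assumed rather than taken from TimeFuse: a small learned gate maps per-sample statistics to softmax weights over the member models' forecasts.

```python
import numpy as np

rng = np.random.default_rng(0)
F, M = 4, 3  # number of sample features, number of member models

W_gate = rng.standard_normal((M, F)) * 0.1  # trainable fusion parameters

def sample_features(x):
    """Cheap per-sample statistics driving the fusion weights (illustrative)."""
    return np.array([x.mean(), x.std(), x[-1] - x[0], np.abs(np.diff(x)).mean()])

def fuse(x, member_preds):
    """member_preds: (M, horizon) forecasts from heterogeneous models."""
    logits = W_gate @ sample_features(x)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ member_preds, w

x = np.sin(np.linspace(0, 4 * np.pi, 48))
preds = rng.standard_normal((M, 12))
yhat, w = fuse(x, preds)
print(w, yhat.shape)  # per-sample weights and the fused 12-step forecast
```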
arXiv Detail & Related papers (2025-05-24T00:45:07Z)
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, which uses a single input/output projection layer while delegating the modeling of diverse time series patterns to a sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
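A minimal sparse mixture-of-experts layer in the spirit of this description, with top-k token routing; the dimensions, router, and expert form are illustrative assumptions, not Moirai-MoE's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D, E, K = 16, 4, 2  # token dim, number of experts, experts used per token

W_router = rng.standard_normal((E, D)) * 0.1
experts = [rng.standard_normal((D, D)) * 0.1 for _ in range(E)]  # toy expert FFNs

def moe_layer(tokens):
    """Route each token to its top-K experts; combine with renormalized gates."""
    out = np.zeros_like(tokens)
    for i, t in enumerate(tokens):
        logits = W_router @ t
        topk = np.argsort(logits)[-K:]
        gates = np.exp(logits[topk])
        gates /= gates.sum()
        out[i] = sum(g * (experts[e] @ t) for g, e in zip(gates, topk))
    return out

tokens = rng.standard_normal((8, D))  # patch tokens after a shared input projection
print(moe_layer(tokens).shape)  # (8, 16); only 2 of 4 experts run per token
```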
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting [10.491628898499684]
We propose TSDiff, an unconditionally-trained diffusion model for time series.
Our proposed self-guidance mechanism enables conditioning TSDiff for downstream tasks during inference, without requiring auxiliary networks or altering the training procedure.
We demonstrate the effectiveness of our method on three different time series tasks: forecasting, refinement, and synthetic data generation.
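A rough sketch of the self-guidance idea: during reverse diffusion, each step also descends an observation loss on the observed prefix, so the unconditionally trained model is conditioned at inference time. The denoiser, noise schedule, and update rule below are crude placeholders, not TSDiff's exact sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 50, 32                          # diffusion steps, series length
betas = np.linspace(1e-4, 0.05, T)
alphas_bar = np.cumprod(1.0 - betas)

def denoiser(x_t, t):
    """Stand-in for the trained unconditional noise-prediction network."""
    return 0.1 * x_t

def guided_sample(observed, mask, scale=0.5):
    """mask = 1 on observed entries (the conditioning context)."""
    x = rng.standard_normal(L)
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        x0_hat = (x - np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])
        # Self-guidance: descend the loss ||mask * (x0_hat - observed)||^2.
        x0_hat = x0_hat - scale * mask * (x0_hat - observed)
        if t > 0:  # crude re-noising standing in for the exact DDPM posterior step
            x = np.sqrt(alphas_bar[t - 1]) * x0_hat \
                + np.sqrt(1.0 - alphas_bar[t - 1]) * rng.standard_normal(L)
        else:
            x = x0_hat
    return x

obs = np.sin(np.linspace(0, 2 * np.pi, L))
mask = (np.arange(L) < 24).astype(float)  # first 24 points observed
print(guided_sample(obs, mask)[:5])
```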
arXiv Detail & Related papers (2023-07-21T10:56:36Z)
- pTSE: A Multi-model Ensemble Method for Probabilistic Time Series Forecasting [10.441994923253596]
pTSE is a multi-model distribution ensemble method for probabilistic forecasting based on a Hidden Markov Model (HMM).
We provide a complete theoretical analysis of pTSE to prove that the empirical distribution of time series subject to an HMM will converge to the stationary distribution almost surely.
Experiments on benchmarks show the superiority of pTSE over all member models and competitive ensemble methods.
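As a stand-in for pTSE's HMM-based weighting (assumed details throughout): the HMM forward algorithm yields filtered regime probabilities per time step, which mix the member models' predictive means.

```python
import numpy as np

def forward_probs(obs_logliks, trans, init):
    """HMM forward algorithm: filtered state probabilities per time step.
    obs_logliks: (T, S) log-likelihood of each observation under each state."""
    T, S = obs_logliks.shape
    alpha = np.zeros((T, S))
    a = init * np.exp(obs_logliks[0])
    alpha[0] = a / a.sum()
    for t in range(1, T):
        a = (alpha[t - 1] @ trans) * np.exp(obs_logliks[t])
        alpha[t] = a / a.sum()
    return alpha

# Two regimes, each "owned" by one member model (an assumption for illustration).
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
rng = np.random.default_rng(0)
y = rng.standard_normal(100)
member_means = np.stack([np.zeros(100), np.full(100, 0.5)], axis=1)  # (T, 2)
loglik = -0.5 * (y[:, None] - member_means) ** 2  # Gaussian with unit variance
w = forward_probs(loglik, trans, init=np.array([0.5, 0.5]))
ensemble_mean = (w * member_means).sum(axis=1)  # per-step mixture of member forecasts
print(w[-1], ensemble_mean[-1])
```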
arXiv Detail & Related papers (2023-05-16T07:00:57Z)
- Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning [85.55727213502402]
We focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks.
We propose Sample-specific Ensemble of Source Models (SESoM).
SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs.
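A simplified, assumption-laden version of sample-specific ensembling in SESoM's spirit: a small scoring network rates each source model for the current target sample, and the scores softmax-weight the source models' output logits.

```python
import numpy as np

rng = np.random.default_rng(0)
D, S, C = 16, 3, 4  # sample feature dim, source models, output classes

W_score = rng.standard_normal((S, D)) * 0.1  # trainable per-source scoring weights

def sesom_ensemble(sample_feat, source_logits):
    """source_logits: (S, C) outputs of frozen source-prompted models."""
    scores = W_score @ sample_feat  # one relevance score per source model
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ source_logits, w  # sample-specific weighted logits

feat = rng.standard_normal(D)        # features of one target sample
logits = rng.standard_normal((S, C))  # each source model's prediction for it
fused, w = sesom_ensemble(feat, logits)
print(w, fused)  # different samples get different source weightings
```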
arXiv Detail & Related papers (2022-10-23T01:33:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.