Foundation Models and Fine-Tuning: Towards a New Generation of Models for Time Series Forecasting
- URL: http://arxiv.org/abs/2511.22674v1
- Date: Thu, 27 Nov 2025 18:19:20 GMT
- Title: Foundation Models and Fine-Tuning: Towards a New Generation of Models for Time Series Forecasting
- Authors: Morad Laglil, Emilie Devijver, Eric Gaussier, Bertrand Pracca
- Abstract summary: Foundation models have been developed for zero-shot time series forecasting. These models learn generalizable representations for both point and probabilistic forecasting. We study the effect of fine-tuning after pretraining to enhance their performance on specific datasets.
- Score: 26.28141834580785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by recent advances in large language models, foundation models have been developed for zero-shot time series forecasting, enabling prediction on datasets unseen during pretraining. These large-scale models, trained on vast collections of time series, learn generalizable representations for both point and probabilistic forecasting, reducing the need for task-specific architectures and manual tuning. In this work, we review the main architectures, pretraining strategies, and optimization methods used in such models, and study the effect of fine-tuning after pretraining to enhance their performance on specific datasets. Our empirical results show that fine-tuning generally improves zero-shot forecasting capabilities, especially for long-term horizons.
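As a rough illustration of the fine-tuning-after-pretraining setting studied in the abstract, the sketch below adapts a generic pretrained forecaster to a single target dataset with a small learning rate. `PretrainedForecaster`, the window sizes, and the training loop are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch only: fine-tune a pretrained time-series foundation model
# on one target dataset. PretrainedForecaster and all hyperparameters are
# placeholders, not the authors' code.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

CONTEXT_LEN, HORIZON = 96, 24  # assumed context and forecast window sizes

class PretrainedForecaster(nn.Module):
    """Stand-in for a foundation model restored from a pretraining checkpoint."""
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.encoder = nn.GRU(1, d_model, batch_first=True)
        self.head = nn.Linear(d_model, HORIZON)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, CONTEXT_LEN, 1) -> point forecast: (batch, HORIZON)
        _, state = self.encoder(context)
        return self.head(state[-1])

def fine_tune(model: nn.Module, series: torch.Tensor,
              epochs: int = 5, lr: float = 1e-4) -> nn.Module:
    """Slide a window over the target series and minimise MSE on the horizon."""
    n = len(series) - CONTEXT_LEN - HORIZON
    xs = torch.stack([series[i:i + CONTEXT_LEN] for i in range(n)]).unsqueeze(-1)
    ys = torch.stack([series[i + CONTEXT_LEN:i + CONTEXT_LEN + HORIZON] for i in range(n)])
    loader = DataLoader(TensorDataset(xs, ys), batch_size=32, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)  # small lr: weights start from pretraining
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for ctx, target in loader:
            optim.zero_grad()
            loss_fn(model(ctx), target).backward()
            optim.step()
    return model
```

Starting from the pretrained weights with a small learning rate is what distinguishes this from training from scratch; in the pure zero-shot setting the loop is simply skipped.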
Related papers
- Pre-trained Forecasting Models: Strong Zero-Shot Feature Extractors for Time Series Classification [19.714904955821623]
We show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. These findings challenge the assumption that task-specific pre-training is necessary, and suggest that learning to forecast may provide a powerful route toward constructing general-purpose time series foundation models.
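A minimal sketch of the linear-probe setup this finding suggests, assuming a frozen pretrained forecasting encoder whose per-step hidden states can be pooled into per-series features; the encoder interface and the pooling choice are assumptions, not the paper's code.

```python
# Illustrative sketch only: use a frozen pretrained forecasting encoder as a
# zero-shot feature extractor and train just a linear classifier on top.
# The encoder interface (returns per-step hidden states) is an assumption.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(encoder: torch.nn.Module, series: torch.Tensor) -> np.ndarray:
    """Mean-pool per-step hidden states into one fixed-size vector per series."""
    hidden, _ = encoder(series.unsqueeze(-1))  # (batch, length, d_model)
    return hidden.mean(dim=1).cpu().numpy()    # (batch, d_model)

def linear_probe(encoder, train_x, train_y, test_x, test_y) -> float:
    """Classification accuracy of a linear probe on frozen forecaster features."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(extract_features(encoder, train_x), train_y)
    return clf.score(extract_features(encoder, test_x), test_y)
```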
arXiv Detail & Related papers (2025-10-30T17:55:23Z) - How Foundational are Foundation Models for Time Series Forecasting? [2.692427265051276]
We argue that the inherent diversity of time series data makes foundation models less well suited to this setting. We show that the zero-shot capabilities of a time series foundation model are strongly influenced by, and tied to, the specific domains it has been pretrained on.
arXiv Detail & Related papers (2025-10-01T10:25:43Z) - Estimating Time Series Foundation Model Transferability via In-Context Learning [74.65355820906355]
Time series foundation models (TSFMs) offer strong zero-shot forecasting via large-scale pre-training. Fine-tuning remains critical for boosting performance in domains with limited public data. We introduce TimeTic, a transferability estimation framework that recasts model selection as an in-context-learning problem.
arXiv Detail & Related papers (2025-09-28T07:07:13Z) - ARIES: Relation Assessment and Model Recommendation for Deep Time Series Forecasting [54.57031153712623]
ARIES is a framework for assessing the relation between time series properties and modeling strategies. We propose the first deep forecasting model recommender, capable of providing interpretable suggestions for real-world time series.
arXiv Detail & Related papers (2025-09-07T13:57:14Z) - TimeRAF: Retrieval-Augmented Foundation model for Zero-shot Time Series Forecasting [59.702504386429126]
TimeRAF is a Retrieval-Augmented Forecasting model that enhances zero-shot time series forecasting through retrieval-augmented techniques. TimeRAF employs an end-to-end learnable retriever to extract valuable information from the knowledge base.
arXiv Detail & Related papers (2024-12-30T09:06:47Z) - Generative Pretrained Hierarchical Transformer for Time Series Forecasting [3.739587363053192]
We propose a novel generative pretrained hierarchical transformer architecture for forecasting, named GPHT.
We conduct extensive experiments on eight datasets against mainstream self-supervised pretraining models and supervised models.
The results demonstrate that GPHT surpasses the baseline models across various fine-tuning and zero/few-shot learning settings in the traditional long-term forecasting task.
arXiv Detail & Related papers (2024-02-26T11:54:54Z) - Predictive Churn with the Set of Good Models [61.00058053669447]
This paper explores connections between two seemingly unrelated concepts of predictive inconsistency. The first, known as predictive multiplicity, occurs when models that perform similarly produce conflicting predictions for individual samples. The second concept, predictive churn, examines the differences in individual predictions before and after model updates.
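The two notions contrasted here can be made concrete with a short sketch; the functions below are an illustrative reading of the definitions, not the paper's implementation.

```python
# Illustrative sketch only: predictive churn between a model and its update,
# and predictive multiplicity across a set of similarly accurate models.
import numpy as np

def churn(preds_before: np.ndarray, preds_after: np.ndarray) -> float:
    """Fraction of individual predictions that flip after a model update."""
    return float(np.mean(preds_before != preds_after))

def multiplicity(preds_by_model: list) -> float:
    """Fraction of samples on which competing 'good' models disagree."""
    stacked = np.stack(preds_by_model)  # (n_models, n_samples)
    # A sample counts as contested if any model disagrees with the first one.
    return float(np.mean(np.any(stacked != stacked[0], axis=0)))

# Example: churn(np.array([0, 1, 1]), np.array([0, 0, 1])) -> 0.333...
```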
arXiv Detail & Related papers (2024-02-12T16:15:25Z) - Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - A decoder-only foundation model for time-series forecasting [23.824504640087753]
Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus.
It can work well across different forecasting history lengths, prediction lengths and temporal granularities.
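The patched-decoder input format mentioned above can be sketched as follows; the patch length and the left-padding scheme are assumptions for illustration, not the paper's exact preprocessing.

```python
# Illustrative sketch only: cut a history of arbitrary length into fixed-length
# patches that serve as the decoder's input tokens.
import torch

def patchify(history: torch.Tensor, patch_len: int = 32) -> torch.Tensor:
    """Left-pad the history to a multiple of patch_len and reshape into patches."""
    pad = (-history.shape[-1]) % patch_len
    padded = torch.nn.functional.pad(history, (pad, 0))        # pad on the left
    return padded.reshape(*history.shape[:-1], -1, patch_len)  # (..., n_patches, patch_len)

# A linear layer then maps each patch to the model dimension, e.g.:
# tokens = torch.nn.Linear(32, d_model)(patchify(history))
```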
arXiv Detail & Related papers (2023-10-14T17:01:37Z) - Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z) - Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show it is a strong zero-shot baseline and benefits from further scaling, both in model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)