Are Time-Indexed Foundation Models the Future of Time Series Imputation?
- URL: http://arxiv.org/abs/2511.05980v1
- Date: Sat, 08 Nov 2025 11:57:33 GMT
- Title: Are Time-Indexed Foundation Models the Future of Time Series Imputation?
- Authors: Etienne Le Naour, Tahar Nabil, Adrien Petralia, Ghislain Agoua,
- Abstract summary: Two models, TabPFN-TS and MoTM, share a common philosophy that places them within the family of time-indexed foundation models. This paper presents the first large-scale empirical study of these models for zero-shot imputation.
- Score: 0.6999740786886536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation models for time series imputation remain largely unexplored. Recently, two such models, TabPFN-TS and MoTM, have emerged. These models share a common philosophy that places them within the family of time-indexed foundation models. This paper presents the first large-scale empirical study of these models for zero-shot imputation, which enables missing value recovery without retraining across a wide range of scenarios. We conduct extensive univariate experiments across 33 out-of-domain datasets (approximately 1.3M imputation windows) and evaluate their ability to integrate covariates at inference time to improve accuracy without fine-tuning. Our results demonstrate that time-indexed foundation models are a powerful and practical step toward achieving general-purpose, zero-shot imputation for real-world time series.
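The "time-indexed" philosophy shared by TabPFN-TS and MoTM can be illustrated with a minimal sketch (not the authors' implementation): the model regresses observed values on features of the timestamp itself, so missing values are recovered by evaluating the fitted map at the missing time indices. Here ridge regression and hand-made calendar features stand in for the foundation-model backbone; all names below are illustrative.

```python
# Minimal sketch of time-indexed imputation: map timestamps (not lagged values)
# to observations, then evaluate the fitted map at the missing time indices.
# Ridge regression stands in for the foundation-model backbone.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

def time_features(index: pd.DatetimeIndex) -> np.ndarray:
    """Encode each timestamp as cyclic calendar features (hour of day, day of week)."""
    hour = index.hour.to_numpy() / 24.0
    dow = index.dayofweek.to_numpy() / 7.0
    return np.column_stack([
        np.sin(2 * np.pi * hour), np.cos(2 * np.pi * hour),
        np.sin(2 * np.pi * dow), np.cos(2 * np.pi * dow),
    ])

def impute_window(series: pd.Series) -> pd.Series:
    """Fill NaNs by regressing the observed values on their time features."""
    feats = time_features(series.index)
    observed = series.notna().to_numpy()
    model = Ridge(alpha=1.0).fit(feats[observed], series[observed])
    filled = series.copy()
    filled[~observed] = model.predict(feats[~observed])
    return filled

# Toy usage: an hourly signal with a missing block.
idx = pd.date_range("2024-01-01", periods=7 * 24, freq="h")
y = pd.Series(np.sin(2 * np.pi * idx.hour / 24), index=idx)
y.iloc[50:60] = np.nan
print(impute_window(y).iloc[50:60])
```

Covariates known at inference time (e.g. temperature, calendar flags) fit this scheme naturally: they are simply appended as extra columns of the feature matrix, which is why the paper can evaluate covariate integration without any fine-tuning.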
Related papers
- In-Context and Few-Shots Learning for Forecasting Time Series Data based on Large Language Models [0.0]
This paper investigates the performance of LLMs for time series prediction. We use LLMs with in-context, zero-shot, and few-shot learning to forecast time series data with OpenAI o4-mini and Gemini 2.5 Flash Lite. The findings indicate that TimesFM has the best overall performance, with the lowest RMSE (0.3023) and a competitive inference time (266 seconds). A rough sketch of how such in-context forecasting prompts are typically built is given below.
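The sketch below shows the usual pattern for in-context/few-shot LLM forecasting: serialize the history as text, ask the model to continue the sequence, and parse the reply back into numbers. The serialization scheme and the `call_llm` stub are assumptions for illustration, not the paper's exact protocol.

```python
# Rough sketch of in-context time series forecasting with an LLM. `call_llm` is
# a placeholder for whichever chat API is used (e.g. OpenAI o4-mini or
# Gemini 2.5 Flash Lite); this is not the paper's exact prompting protocol.
from typing import List

def build_prompt(history: List[float], horizon: int) -> str:
    serialized = ", ".join(f"{v:.2f}" for v in history)
    return (
        "You are a time series forecaster. Given the past values:\n"
        f"{serialized}\n"
        f"Predict the next {horizon} values. "
        "Answer with comma-separated numbers only."
    )

def parse_forecast(reply: str, horizon: int) -> List[float]:
    values = [float(tok) for tok in reply.replace("\n", ",").split(",") if tok.strip()]
    return values[:horizon]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in the chat-completion call of your chosen provider.
    raise NotImplementedError

# Usage (once call_llm is wired to a real model):
# forecast = parse_forecast(call_llm(build_prompt([1.0, 1.2, 1.4, 1.6], 3)), 3)
```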
arXiv Detail & Related papers (2025-12-08T16:52:46Z) - How Foundational are Foundation Models for Time Series Forecasting? [2.692427265051276]
We argue that the inherent diversity of time series data makes foundation models less suited for building effective models. We show that the zero-shot capabilities of a time series foundation model are significantly influenced by, and tied to, the specific domains it has been pretrained on.
arXiv Detail & Related papers (2025-10-01T10:25:43Z) - Estimating Time Series Foundation Model Transferability via In-Context Learning [74.65355820906355]
Time series foundation models (TSFMs) offer strong zero-shot forecasting via large-scale pre-training. Fine-tuning remains critical for boosting performance in domains with limited public data. We introduce TimeTic, a transferability estimation framework that recasts model selection as an in-context learning problem.
arXiv Detail & Related papers (2025-09-28T07:07:13Z) - MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling [0.0]
We propose a first step toward filling this gap by leveraging implicit neural representations (INRs). MoTM combines a basis of INRs, each trained independently on a distinct family of time series, with a ridge regressor that adapts to the observed context at inference. We demonstrate robust in-domain and out-of-domain generalization across diverse imputation scenarios; a minimal sketch of this mechanism follows.
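A hedged sketch of the mechanism the abstract describes: a frozen basis of functions of time (independently pre-trained INRs in the paper; stand-in random Fourier features here) is combined with a ridge regressor fitted only on the observed timestamps of the window at inference time.

```python
# Sketch of the MoTM idea: ridge regression over a frozen basis of functions of
# time, fitted on the observed points of a window and evaluated everywhere.
# The random Fourier basis below is a stand-in for pre-trained INRs.
import numpy as np

rng = np.random.default_rng(0)
FREQS = rng.uniform(0.1, 10.0, size=32)      # stand-in for a pre-trained INR basis
PHASES = rng.uniform(0, 2 * np.pi, size=32)

def basis(t: np.ndarray) -> np.ndarray:
    """Evaluate the (frozen) basis at normalized times t in [0, 1]."""
    return np.sin(np.outer(t, FREQS) + PHASES)

def ridge_impute(t: np.ndarray, y: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Fit ridge coefficients on observed (non-NaN) points, predict everywhere."""
    obs = ~np.isnan(y)
    Phi = basis(t)
    A = Phi[obs].T @ Phi[obs] + lam * np.eye(Phi.shape[1])
    w = np.linalg.solve(A, Phi[obs].T @ y[obs])
    y_hat = Phi @ w
    return np.where(obs, y, y_hat)
```

Because only the ridge coefficients are re-estimated per window, adaptation to a new series requires no gradient updates, which is what makes zero-shot imputation possible.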
arXiv Detail & Related papers (2025-07-17T15:16:30Z) - Intention-Conditioned Flow Occupancy Models [80.42634994902858]
Large-scale pre-training has fundamentally changed how machine learning research is done today. Applying this same framework to reinforcement learning is appealing because it offers compelling avenues for addressing core challenges in RL. Recent advances in generative AI have provided new tools for modeling highly complex distributions.
arXiv Detail & Related papers (2025-06-10T15:27:46Z) - RATFM: Retrieval-augmented Time Series Foundation Model for Anomaly Detection [0.6524530902514115]
We propose a retrieval-augmented time series foundation model (RATFM) that incorporates retrieved examples for test-time adaptation. RATFM achieves performance comparable to in-domain fine-tuning while avoiding domain-dependent fine-tuning.
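For illustration, the retrieval step such a retrieval-augmented model relies on can be sketched as follows: given a query window, fetch its nearest z-normalized neighbours from a corpus of reference windows to serve as test-time examples. The distance measure and corpus layout below are assumptions, not RATFM's actual design.

```python
# Illustrative nearest-neighbour retrieval of reference windows for test-time
# adaptation (assumed design, not RATFM's implementation).
import numpy as np

def znorm(x: np.ndarray) -> np.ndarray:
    """Z-normalize each window along its time axis."""
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-8)

def retrieve(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return the k corpus windows closest to the query in z-normalized space."""
    d = np.linalg.norm(znorm(corpus) - znorm(query), axis=1)
    return corpus[np.argsort(d)[:k]]

# Usage: a corpus of 1000 windows of length 96 and one query window.
corpus = np.random.randn(1000, 96)
query = np.random.randn(96)
examples = retrieve(query, corpus)   # shape (3, 96), fed to the model as context
```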
arXiv Detail & Related papers (2025-06-02T10:25:35Z) - GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation [90.53485251837235]
Time series foundation models excel in zero-shot forecasting, handling diverse tasks without explicit training.
GIFT-Eval is a pioneering benchmark aimed at promoting evaluation across diverse datasets.
GIFT-Eval encompasses 23 datasets over 144,000 time series and 177 million data points.
arXiv Detail & Related papers (2024-10-14T11:29:38Z) - Implicit Reasoning in Deep Time Series Forecasting [16.750280337155647]
This work takes an initial step toward assessing the reasoning abilities of deep time series forecasting models.
We find that certain linear, patch-based Transformer models generalize effectively in systematically orchestrated out-of-distribution scenarios.
arXiv Detail & Related papers (2024-09-17T02:11:19Z) - ViTime: Foundation Model for Time Series Forecasting Powered by Vision Intelligence [49.60944381032587]
Time series forecasting (TSF) possesses great practical value in various fields, including power and energy, transportation, and more. TSF models have long been known to be problem-specific and to lack application generalizability. This paper proposes, for the first time, a vision intelligence-powered framework, ViTime.
arXiv Detail & Related papers (2024-07-10T02:11:01Z) - MOMENT: A Family of Open Time-series Foundation Models [19.0845213853369]
We introduce MOMENT, a family of open-source foundation models for general-purpose time series analysis.
We compile a collection of public time series, called the Time series Pile, and systematically tackle time series-specific challenges.
We build on recent work to design a benchmark to evaluate time series foundation models on diverse tasks and datasets in limited supervision settings.
arXiv Detail & Related papers (2024-02-06T10:48:46Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)