EIDOS: Latent-Space Predictive Learning for Time Series Foundation Models
- URL: http://arxiv.org/abs/2602.14024v1
- Date: Sun, 15 Feb 2026 07:07:20 GMT
- Title: EIDOS: Latent-Space Predictive Learning for Time Series Foundation Models
- Authors: Xinxing Zhou, Qingren Yao, Yiji Zhao, Chenghao Liu, Flora Salim, Xiaojie Yuan, Yanlong Wen, Ming Jin
- Abstract summary: EIDOS is a foundation model family that shifts pretraining from future value prediction to latent-space predictive learning. We train a causal Transformer to predict the evolution of latent representations, encouraging the emergence of structured and temporally coherent latent states.
- Score: 37.917978019436674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most time series foundation models are pretrained by directly predicting future observations, which often yields weakly structured latent representations that capture surface noise rather than coherent and predictable temporal dynamics. In this work, we introduce EIDOS, a foundation model family that shifts pretraining from future value prediction to latent-space predictive learning. We train a causal Transformer to predict the evolution of latent representations, encouraging the emergence of structured and temporally coherent latent states. To ensure stable targets for latent-space learning, we design a lightweight aggregation branch to construct target representations. EIDOS is optimized via a joint objective that integrates latent-space alignment, observational grounding to anchor representations to the input signal, and direct forecasting supervision. On the GIFT-Eval benchmark, EIDOS mitigates structural fragmentation in the representation space and achieves state-of-the-art performance. These results demonstrate that constraining models to learn predictable latent dynamics is a principled step toward more robust and reliable time series foundation models.
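The abstract describes a joint objective with three terms: latent-space alignment against targets from a lightweight aggregation branch, observational grounding that anchors latents to the input, and direct forecasting supervision. The paper does not publish its equations here, so the following is only a minimal NumPy sketch of that three-term structure; the linear encoder/decoder, the moving-average aggregation branch, and the unit loss weights are all assumptions for illustration, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy per-timestep encoder: linear projection + tanh into latent space.
    return np.tanh(x @ W)

def joint_loss(x, W_enc, W_pred, W_dec, lam=(1.0, 1.0, 1.0)):
    """Sketch of an EIDOS-style joint objective on a (T, d) series x.

    Terms (the equal weights `lam` are an assumption, not from the paper):
      1. latent alignment      - predict z_{t+1} from z_t and compare to a
         stable target from an "aggregation branch" (here: a 3-step
         moving average of the latents, standing in for the paper's
         lightweight target-construction branch).
      2. observational grounding - decode z_t back to x_t.
      3. forecasting supervision - decode the predicted z_{t+1} to x_{t+1}.
    """
    z = encode(x, W_enc)                 # (T, k) latent states
    z_pred = np.tanh(z[:-1] @ W_pred)    # causal one-step latent prediction
    # Aggregation branch: smoothed latents as targets for alignment.
    kernel = np.ones(3) / 3.0
    z_tgt = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, z)
    l_latent = np.mean((z_pred - z_tgt[1:]) ** 2)      # latent alignment
    l_ground = np.mean((z @ W_dec - x) ** 2)           # observational grounding
    l_fcst = np.mean((z_pred @ W_dec - x[1:]) ** 2)    # forecast supervision
    return lam[0] * l_latent + lam[1] * l_ground + lam[2] * l_fcst

T, d, k = 32, 4, 8
x = rng.standard_normal((T, d))
loss = joint_loss(x,
                  rng.standard_normal((d, k)),
                  rng.standard_normal((k, k)),
                  rng.standard_normal((k, d)))
print(float(loss))
```

The point of the sketch is the shape of the objective, not the components: in the paper the predictor is a causal Transformer and the targets come from a learned aggregation branch rather than a fixed moving average.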
Related papers
- Distillation and Interpretability of Ensemble Forecasts of ENSO Phase using Entropic Learning [1.3999481573773072]
This paper introduces a framework for an ensemble of Sparse Probabilistic Approximation (eSPA) models to predict ENSO phase up to 24 months in advance. We show how to compress the ensemble into a compact set of "distilled" models by aggregating the structure of only those ensemble members that make correct predictions.
arXiv Detail & Related papers (2026-02-15T05:49:16Z) - Position: Beyond Model-Centric Prediction -- Agentic Time Series Forecasting [49.05788441962762]
We argue for agentic time series forecasting (ATSF), which reframes forecasting as an agentic process composed of perception, planning, action, reflection, and memory. We outline three representative implementation paradigms -- workflow-based design, agentic reinforcement learning, and a hybrid agentic workflow paradigm -- and discuss the opportunities and challenges that arise when shifting from model-centric prediction to agentic forecasting.
arXiv Detail & Related papers (2026-02-02T08:01:11Z) - From Observations to States: Latent Time Series Forecasting [65.98504021691666]
We propose Latent Time Series Forecasting (LatentTSF), a novel paradigm that shifts TSF from observation regression to latent state prediction. Specifically, LatentTSF employs an AutoEncoder to project observations at each time step into a higher-dimensional latent state space. Our proposed latent objectives implicitly maximize mutual information between predicted latent states and ground-truth states and observations.
arXiv Detail & Related papers (2026-01-30T20:39:44Z) - PredNext: Explicit Cross-View Temporal Prediction for Unsupervised Learning in Spiking Neural Networks [70.1286354746363]
Spiking Neural Networks (SNNs) offer a natural platform for unsupervised representation learning. Current unsupervised SNNs employ shallow architectures or localized plasticity rules, limiting their ability to model long-range temporal dependencies. We propose PredNext, which explicitly models temporal relationships through cross-view future Step Prediction and Clip Prediction.
arXiv Detail & Related papers (2025-09-29T14:27:58Z) - A Deep Learning Approach for Spatio-Temporal Forecasting of InSAR Ground Deformation in Eastern Ireland [2.840858735842673]
Monitoring ground displacement is crucial for urban infrastructure and mitigating geological hazards. This paper introduces a novel deep learning framework that transforms sparse point measurements into a dense spatio-temporal tensor. Results demonstrate that the proposed architecture provides more accurate and spatially coherent forecasts.
arXiv Detail & Related papers (2025-09-17T17:10:18Z) - A Time-Series Foundation Model by Universal Delay Embedding [4.221753069966852]
This study introduces Universal Delay Embedding (UDE), a pretrained foundation model designed to revolutionize time-series forecasting. UDE, as a dynamical representation of observed data, constructs two-dimensional subspace patches from Hankel matrices. In particular, the learned dynamical representations and the Koopman operator predictions formed from the patches exhibit exceptional interpretability.
arXiv Detail & Related papers (2025-09-15T16:11:49Z) - iTFKAN: Interpretable Time Series Forecasting with Kolmogorov-Arnold Network [29.310194531870323]
We propose a novel interpretable model, iTFKAN, for credible time series forecasting. iTFKAN enables further exploration of model decision rationales and underlying data patterns due to its interpretability achieved through model symbolization.
arXiv Detail & Related papers (2025-04-23T05:34:49Z) - Topology-Aware Conformal Prediction for Stream Networks [68.02503121089633]
We propose Spatio-Temporal Adaptive Conformal Inference (CISTA), a novel framework that integrates network topology and temporal dynamics into the conformal prediction framework. Our results show that CISTA effectively balances prediction efficiency and coverage, outperforming existing conformal prediction methods for stream networks.
arXiv Detail & Related papers (2025-03-06T21:21:15Z) - ST-ReP: Learning Predictive Representations Efficiently for Spatial-Temporal Forecasting [7.637123047745445]
Self-supervised methods are increasingly adapted to learn spatial-temporal representations. Current value reconstruction and future value prediction are integrated into the pre-training framework. Multi-time scale analysis is incorporated into the self-supervised loss to enhance predictive capability.
arXiv Detail & Related papers (2024-12-19T05:33:55Z) - Dynamical system prediction from sparse observations using deep neural networks with Voronoi tessellation and physics constraint [12.638698799995815]
We introduce the Dynamic System Prediction from Sparse Observations using Voronoi Tessellation (DSOVT) framework.
By integrating Voronoi tessellations with deep learning models, DSOVT is adept at predicting dynamical systems with sparse, unstructured observations.
Compared to purely data-driven models, our physics-based approach enables the model to learn physical laws within explicitly formulated dynamics.
arXiv Detail & Related papers (2024-08-31T13:43:52Z) - Learning Robust Precipitation Forecaster by Temporal Frame Interpolation [65.5045412005064]
We develop a robust precipitation forecasting model that demonstrates resilience against spatial-temporal discrepancies.
Our approach has led to significant improvements in forecasting precision, culminating in our model securing 1st place in the transfer learning leaderboard of the Weather4cast'23 competition.
arXiv Detail & Related papers (2023-11-30T08:22:08Z) - Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space.
Our approach is either as competitive as or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.