TimeCapsule: Solving the Jigsaw Puzzle of Long-Term Time Series Forecasting with Compressed Predictive Representations
- URL: http://arxiv.org/abs/2504.12721v1
- Date: Thu, 17 Apr 2025 07:54:26 GMT
- Title: TimeCapsule: Solving the Jigsaw Puzzle of Long-Term Time Series Forecasting with Compressed Predictive Representations
- Authors: Yihang Lu, Yangyang Xu, Qitao Qing, Xianwei Meng
- Abstract summary: We introduce TimeCapsule, a model built around the principle of high-dimensional information compression. We propose an internal forecast within the compressed representation domain, supported by the Joint-Embedding Predictive Architecture.
- Score: 13.691725113410856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent deep learning models for Long-term Time Series Forecasting (LTSF) often emphasize complex, handcrafted designs, while simpler architectures like linear models or MLPs have often outperformed these intricate solutions. In this paper, we revisit and organize the core ideas behind several key techniques, such as redundancy reduction and multi-scale modeling, which are frequently employed in advanced LTSF models. Our goal is to streamline these ideas for more efficient deep learning utilization. To this end, we introduce TimeCapsule, a model built around the principle of high-dimensional information compression that unifies these techniques in a generalized yet simplified framework. Specifically, we model time series as a 3D tensor, incorporating temporal, variate, and level dimensions, and leverage mode products to capture multi-mode dependencies while achieving dimensionality compression. We propose an internal forecast within the compressed representation domain, supported by the Joint-Embedding Predictive Architecture (JEPA), to monitor the learning of predictive representations. Extensive experiments on challenging benchmarks demonstrate the versatility of our method, showing that TimeCapsule can achieve state-of-the-art performance.
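To make the compression idea concrete, here is a minimal NumPy sketch of the step described in the abstract: the series is arranged as a 3D tensor over temporal, variate, and level modes, and each mode is shrunk by a mode product with a matrix. All shapes, variable names, and the use of random matrices are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Illustrative sketch only (assumed shapes, random matrices) -- not the paper's code.
# Compress a (time x variate x level) tensor along each mode via mode products.
import numpy as np

rng = np.random.default_rng(0)

L, C, K = 96, 7, 4   # input length, number of variates, number of levels (assumed)
l, c, k = 24, 4, 2   # compressed size along each mode (assumed)

X = rng.standard_normal((L, C, K))        # the time series as a 3D tensor

# Mode-compression matrices; in the model these would be learned, here random.
U_time  = rng.standard_normal((l, L))
U_var   = rng.standard_normal((c, C))
U_level = rng.standard_normal((k, K))

# Z = X x_1 U_time x_2 U_var x_3 U_level : mode products along each axis.
Z = np.einsum('tvk,at,bv,ck->abc', X, U_time, U_var, U_level)
print(Z.shape)  # (24, 4, 2): a compressed representation retaining multi-mode structure
```

As we read the abstract, forecasting is then carried out inside this compressed domain, and a JEPA-style objective compares the predicted compressed representation against an encoding of the target window, which is what allows the model to monitor whether the learned representations are actually predictive.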
Related papers
- UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines [64.84631333071728]
We introduce UniSTD, a unified Transformer-based framework for spatio-temporal modeling. Our work demonstrates that a task-specific vision-text model can build a generalizable model for spatio-temporal learning. We also introduce a temporal module to incorporate temporal dynamics explicitly.
arXiv Detail & Related papers (2025-03-26T17:33:23Z) - Exploring the Role of Explicit Temporal Modeling in Multimodal Large Language Models for Video Understanding [23.477954901326978]
Existing approaches adopt either implicit temporal modeling, relying solely on the decoder, or explicit temporal modeling, employing auxiliary temporal encoders. We propose the Stackable Temporal Encoder (STE) to enable flexible explicit temporal modeling with adjustable temporal receptive fields and token compression ratios. Our findings emphasize the critical role of explicit temporal modeling, providing actionable insights to advance video MLLMs.
arXiv Detail & Related papers (2025-01-28T08:30:58Z) - Exploring Representations and Interventions in Time Series Foundation Models [17.224575072056627]
Time series foundation models (TSFMs) promise to be powerful tools for a wide range of applications. However, their internal representations and learned concepts are still not well understood. This study investigates the structure and redundancy of representations across various TSFMs.
arXiv Detail & Related papers (2024-09-19T17:11:27Z) - Temporal Feature Matters: A Framework for Diffusion Model Quantization [105.3033493564844]
Diffusion models rely on the time-step for multi-round denoising.
We introduce a novel quantization framework that includes three strategies.
This framework preserves most of the temporal information and ensures high-quality end-to-end generation.
arXiv Detail & Related papers (2024-07-28T17:46:15Z) - LTSM-Bundle: A Toolbox and Benchmark on Large Language Models for Time Series Forecasting [69.33802286580786]
We introduce LTSM-Bundle, a comprehensive toolbox and benchmark for training LTSMs. It modularizes and benchmarks LTSMs along multiple dimensions, encompassing prompting strategies, tokenization approaches, base model selection, data quantity, and dataset diversity. Empirical results demonstrate that this combination achieves superior zero-shot and few-shot performance compared to state-of-the-art LTSMs and traditional TSF methods.
arXiv Detail & Related papers (2024-06-20T07:09:19Z) - ConvTimeNet: A Deep Hierarchical Fully Convolutional Model for Multivariate Time Series Analysis [7.979501926410114]
ConvTimeNet is a hierarchical pure convolutional model designed for time series analysis. It adaptively perceives local patterns of temporally dependent basic units in a data-driven manner. A large kernel mechanism is employed to ensure that convolutional blocks can be deeply stacked.
arXiv Detail & Related papers (2024-03-03T12:05:49Z) - PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse spatiotemporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting [46.63798583414426]
Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis.
Our study demonstrates, through both analytical and empirical evidence, that decomposition is key to containing excessive model inflation.
Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks.
arXiv Detail & Related papers (2024-01-22T13:15:40Z) - GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks [24.323017830938394]
This work aims to address challenges by introducing a pre-training framework that seamlessly integrates with baselines and enhances their performance.
The framework is built upon two key designs: (i) we propose a spatio-temporal mask autoencoder as a pre-training model for learning spatio-temporal dependencies. These modules are specifically designed to capture spatio-temporal customized representations and intra- and inter-cluster semantic relationships.
arXiv Detail & Related papers (2023-11-07T02:36:24Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z) - SOM-CPC: Unsupervised Contrastive Learning with Self-Organizing Maps for Structured Representations of High-Rate Time Series [23.074319429090092]
We propose SOM-CPC, a model that visualizes data in an organized 2D manifold, while preserving higher-dimensional information.
We show on both synthetic and real-life data (physiological data and audio recordings) that SOM-CPC outperforms strong baselines like DL-based feature extraction.
arXiv Detail & Related papers (2022-05-31T15:21:21Z) - Numerical Weather Forecasting using Convolutional-LSTM with Attention and Context Matcher Mechanisms [10.759556555869798]
We introduce a novel deep learning architecture for forecasting high-resolution weather data.
Our Weather Model achieves significant performance improvements compared to baseline deep learning models.
arXiv Detail & Related papers (2021-02-01T08:30:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.