TimePerceiver: An Encoder-Decoder Framework for Generalized Time-Series Forecasting
- URL: http://arxiv.org/abs/2512.22550v1
- Date: Sat, 27 Dec 2025 10:34:22 GMT
- Title: TimePerceiver: An Encoder-Decoder Framework for Generalized Time-Series Forecasting
- Authors: Jaebin Lee, Hankook Lee
- Abstract summary: We propose TimePerceiver, a unified encoder-decoder forecasting framework. We first generalize the forecasting task to include diverse temporal prediction objectives. For encoding, we introduce a set of latent bottleneck representations that can interact with all input segments. For decoding, we leverage learnable queries corresponding to target timestamps to effectively retrieve relevant information.
- Score: 8.272371466979058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In machine learning, effective modeling requires a holistic consideration of how to encode inputs, make predictions (i.e., decoding), and train the model. However, in time-series forecasting, prior work has predominantly focused on encoder design, often treating prediction and training as separate or secondary concerns. In this paper, we propose TimePerceiver, a unified encoder-decoder forecasting framework that is tightly aligned with an effective training strategy. To be specific, we first generalize the forecasting task to include diverse temporal prediction objectives such as extrapolation, interpolation, and imputation. Since this generalization requires handling input and target segments that are arbitrarily positioned along the temporal axis, we design a novel encoder-decoder architecture that can flexibly perceive and adapt to these varying positions. For encoding, we introduce a set of latent bottleneck representations that can interact with all input segments to jointly capture temporal and cross-channel dependencies. For decoding, we leverage learnable queries corresponding to target timestamps to effectively retrieve relevant information. Extensive experiments demonstrate that our framework consistently and significantly outperforms prior state-of-the-art baselines across a wide range of benchmark datasets. The code is available at https://github.com/efficient-learning-lab/TimePerceiver.
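Conceptually, the architecture resembles a Perceiver-style bottleneck. Below is a rough PyTorch sketch, with illustrative dimensions and module choices that are not the authors' code: encoding lets a small set of learnable latents cross-attend to all input tokens, and decoding lets queries built from target timestamps cross-attend to those latents.

```python
# Minimal sketch of a Perceiver-style encoder-decoder for forecasting.
# Hyperparameters and module names are illustrative, not the authors' code.
import torch
import torch.nn as nn

class LatentBottleneckForecaster(nn.Module):
    def __init__(self, d_model=128, n_latents=32, n_heads=4):
        super().__init__()
        # Learnable latent bottleneck: interacts with all input segments.
        self.latents = nn.Parameter(torch.randn(n_latents, d_model))
        self.encode_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.decode_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.value_proj = nn.Linear(1, d_model)  # embed observed values per step
        self.time_proj = nn.Linear(1, d_model)   # embed timestamps (inputs and queries)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x, x_times, y_times):
        # x: (B, L, 1) observed values; x_times: (B, L, 1); y_times: (B, H, 1)
        tokens = self.value_proj(x) + self.time_proj(x_times)
        lat = self.latents.expand(x.size(0), -1, -1)
        # Encoding: latents cross-attend to all input tokens (bottleneck).
        lat, _ = self.encode_attn(lat, tokens, tokens)
        # Decoding: timestamp queries retrieve relevant information from latents.
        queries = self.time_proj(y_times)
        out, _ = self.decode_attn(queries, lat, lat)
        return self.head(out)  # (B, H, 1) predictions at target timestamps

model = LatentBottleneckForecaster()
x = torch.randn(8, 96, 1)
x_t = torch.linspace(0, 1, 96).view(1, 96, 1).expand(8, -1, -1)
y_t = torch.linspace(1, 1.25, 24).view(1, 24, 1).expand(8, -1, -1)
print(model(x, x_t, y_t).shape)  # torch.Size([8, 24, 1])
```

Because the queries are tied only to target timestamps, the same model can extrapolate, interpolate, or impute depending on where those timestamps fall relative to the inputs.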
Related papers
- Streaming Real-Time Trajectory Prediction Using Endpoint-Aware Modeling [54.94692733670454]
Future trajectories of neighboring traffic agents have a significant influence on the path planning and decision-making of autonomous vehicles. We propose a lightweight yet highly accurate streaming-based trajectory forecasting approach. Our approach significantly reduces inference latency, making it well-suited for real-world deployment.
arXiv Detail & Related papers (2026-03-02T13:44:23Z)
- A Unified Frequency Domain Decomposition Framework for Interpretable and Robust Time Series Forecasting [81.73338008264115]
Current approaches for time series forecasting, whether in the time or frequency domain, predominantly use deep learning models based on linear layers or transformers. We propose FIRE, a unified frequency domain decomposition framework that provides a mathematical abstraction for diverse types of time series. FIRE consistently outperforms state-of-the-art models on long-term forecasting benchmarks.
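As a rough illustration of the frequency-domain decomposition idea (a generic FFT-based split, not FIRE's actual formulation):

```python
# Illustrative sketch of frequency-domain decomposition (not FIRE's code):
# split a series into low- and high-frequency components with an FFT mask.
import numpy as np

def frequency_decompose(x, k=5):
    """Return (low, high): the k lowest-frequency components and the remainder."""
    spec = np.fft.rfft(x)
    low_spec = np.zeros_like(spec)
    low_spec[:k] = spec[:k]                 # keep only the k lowest frequencies
    low = np.fft.irfft(low_spec, n=len(x))
    return low, x - low

t = np.arange(256)
series = np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 8)
trend, residual = frequency_decompose(series, k=6)
# Each component can then be modeled separately and recombined for the forecast.
```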
arXiv Detail & Related papers (2025-10-11T09:59:25Z)
- Decomposing the Time Series Forecasting Pipeline: A Modular Approach for Time Series Representation, Information Extraction, and Projection [2.5216923314390733]
Time series forecasting remains a challenging task, demanding effective sequence representation, meaningful information extraction, and precise future projection. This work decomposes the time series forecasting pipeline into three core stages: input sequence representation, information extraction and memory construction, and final target projection. Our models achieve state-of-the-art forecasting accuracy while greatly enhancing computational efficiency, with reduced training and inference times and a lower parameter count.
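A schematic of such a three-stage decomposition, with placeholder modules standing in for the paper's actual components:

```python
# Schematic of the representation -> extraction -> projection decomposition.
# Module choices are placeholders, not the paper's exact components.
import torch
import torch.nn as nn

class ModularForecaster(nn.Module):
    def __init__(self, lookback=96, horizon=24, d_model=64):
        super().__init__()
        self.represent = nn.Linear(lookback, d_model)               # stage 1: sequence representation
        self.extract = nn.GRU(d_model, d_model, batch_first=True)   # stage 2: information extraction / memory
        self.project = nn.Linear(d_model, horizon)                  # stage 3: target projection

    def forward(self, x):                # x: (B, C, lookback)
        h = self.represent(x)            # per-channel embedding: (B, C, d_model)
        h, _ = self.extract(h)           # mix information across channel tokens
        return self.project(h)           # (B, C, horizon)

y = ModularForecaster()(torch.randn(8, 7, 96))
print(y.shape)  # torch.Size([8, 7, 24])
```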
arXiv Detail & Related papers (2025-07-08T11:26:42Z)
- BinConv: A Neural Architecture for Ordinal Encoding in Time-Series Forecasting [5.827431686047649]
We propose BinConv, a fully convolutional neural network architecture designed for probabilistic forecasting. BinConv achieves superior performance compared to widely used baselines in both point and probabilistic forecasting.
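The ordinal-encoding idea can be sketched as a cumulative ("thermometer") binary code over value bins; the bin count and range below are illustrative assumptions, not BinConv's configuration:

```python
# Sketch of ordinal (thermometer) encoding: a value is binned, then
# represented by a cumulative binary vector, so a convolutional net can
# predict per-bin probabilities rather than a single point.
import numpy as np

def ordinal_encode(values, edges):
    """values: (N,); edges: (B+1,) bin edges -> (N, B) cumulative binary codes."""
    bins = np.clip(np.digitize(values, edges) - 1, 0, len(edges) - 2)
    codes = np.arange(len(edges) - 1) <= bins[:, None]
    return codes.astype(np.float32)

edges = np.linspace(-3, 3, 11)  # 10 bins over [-3, 3]
print(ordinal_encode(np.array([-2.5, 0.1, 2.9]), edges))
# Each row is a run of 1s followed by 0s; a forecast over bins yields a
# full predictive distribution, not just a point estimate.
```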
arXiv Detail & Related papers (2025-05-30T13:41:39Z)
- Forecasting Clinical Risk from Textual Time Series: Structuring Narratives for Temporal AI in Healthcare [3.2957337131930484]
We introduce the forecasting problem from textual time series, where timestamped clinical findings serve as the primary input for prediction. We evaluate a diverse suite of models, including fine-tuned decoder-based large language models and encoder-based transformers.
arXiv Detail & Related papers (2025-04-14T15:48:56Z)
- TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents [52.13094810313054]
TimeCAP is a time-series processing framework that creatively employs Large Language Models (LLMs) as contextualizers of time series data. TimeCAP incorporates two independent LLM agents: one generates a textual summary capturing the context of the time series, while the other uses this enriched summary to make more informed predictions. Experimental results on real-world datasets demonstrate that TimeCAP outperforms state-of-the-art methods for time series event prediction.
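The two-agent pattern can be sketched as follows; `call_llm` is a hypothetical stand-in for any chat-completion client, not TimeCAP's actual code:

```python
# Schematic of the two-agent contextualize-then-predict pattern.
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: plug in your LLM client here.
    raise NotImplementedError

def predict_event(series_text: str) -> str:
    # Agent 1: contextualize the raw time series as a textual summary.
    summary = call_llm(
        "Summarize the context and salient patterns of this time series:\n"
        + series_text
    )
    # Agent 2: predict the event from the enriched summary.
    return call_llm(
        "Given this summary, will the event occur? Answer yes/no with a reason.\n"
        + summary
    )
```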
arXiv Detail & Related papers (2025-02-17T04:17:27Z)
- Temporal Context Consistency Above All: Enhancing Long-Term Anticipation by Learning and Enforcing Temporal Constraints [4.880243880711163]
This paper proposes a method for predicting action labels and their durations in a video, given the observation of an initial untrimmed video interval. We build on an encoder-decoder architecture with parallel decoding and make two key contributions. We validate our method on four benchmark datasets for long-term anticipation (LTA): EpicKitchen-55, EGTEA+, 50Salads, and Breakfast.
arXiv Detail & Related papers (2024-12-27T03:29:10Z)
- Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization [74.3339999119713]
We develop a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies. Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to forecast coefficients for the forecast horizon.
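A minimal sketch of that scale, decompose, threshold, quantize pipeline using PyWavelets, with illustrative wavelet, threshold, and bin-count choices:

```python
# Sketch of the scale -> wavelet-decompose -> threshold -> quantize pipeline.
# Wavelet choice, threshold, and bin count are illustrative assumptions.
import numpy as np
import pywt

def wavelet_tokenize(x, wavelet="db4", level=3, threshold=0.1, n_bins=256):
    x = (x - x.mean()) / (x.std() + 1e-8)           # scale
    coeffs = pywt.wavedec(x, wavelet, level=level)  # decompose
    flat = np.concatenate(coeffs)
    flat[np.abs(flat) < threshold] = 0.0            # threshold small coefficients
    edges = np.linspace(flat.min(), flat.max(), n_bins + 1)
    tokens = np.clip(np.digitize(flat, edges) - 1, 0, n_bins - 1)  # quantize
    return tokens  # integer tokens an autoregressive model can be trained on

tokens = wavelet_tokenize(np.sin(np.linspace(0, 20, 512)))
print(tokens[:10], tokens.max())
```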
arXiv Detail & Related papers (2024-12-06T18:22:59Z)
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a causal Transformer for unified time series forecasting. Based on large-scale pre-training, Timer-XL achieves state-of-the-art zero-shot performance.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Towards Anytime Classification in Early-Exit Architectures by Enforcing Conditional Monotonicity [5.425028186820756]
Anytime algorithms are well-suited to environments in which computational budgets are dynamic.
We show that current early-exit networks are not directly applicable to anytime settings.
We propose an elegant post-hoc modification, based on the Product-of-Experts, that encourages an early-exit network to become gradually more confident as computation proceeds.
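One simplified reading of the Product-of-Experts combination: each exit's softmax multiplies into a running, renormalized product, so later exits refine rather than overwrite earlier predictions. A toy sketch, not the paper's exact estimator:

```python
# Sketch of a Product-of-Experts combination over early exits.
import numpy as np

def poe_anytime(exit_probs):
    """exit_probs: (K, C) per-exit class probabilities -> (K, C) anytime preds."""
    log_running = np.cumsum(np.log(exit_probs + 1e-12), axis=0)  # product of experts
    preds = np.exp(log_running)
    return preds / preds.sum(axis=1, keepdims=True)              # renormalize per exit

probs = np.array([[0.5, 0.3, 0.2],    # exit 1 (least compute)
                  [0.6, 0.3, 0.1],
                  [0.7, 0.2, 0.1]])   # exit 3 (most compute)
print(poe_anytime(probs))
# Here confidence in the leading class grows monotonically across exits.
```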
arXiv Detail & Related papers (2023-06-05T07:38:13Z)
- Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
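Under a Gaussian assumption, predictive information between past and future windows reduces to log-determinants of covariances, I(p; f) = H(p) + H(f) - H(p, f). A toy estimate of this quantity (window length and dimensions are illustrative, and the paper's actual estimator may differ):

```python
# Toy Gaussian estimate of predictive information between adjacent
# past/future windows of a latent sequence.
import numpy as np

def gaussian_predictive_info(z, window=4):
    """z: (T, d) latent sequence -> scalar MI estimate between adjacent windows."""
    pairs = np.array([np.concatenate([z[t:t + window].ravel(),
                                      z[t + window:t + 2 * window].ravel()])
                      for t in range(len(z) - 2 * window)])
    d = pairs.shape[1] // 2
    cov = np.cov(pairs, rowvar=False) + 1e-6 * np.eye(2 * d)
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return 0.5 * (logdet(cov[:d, :d]) + logdet(cov[d:, d:]) - logdet(cov))

z = np.cumsum(np.random.randn(200, 3), axis=0)  # a smooth, predictable sequence
print(gaussian_predictive_info(z))               # larger for more predictable latents
```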
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
- Conditional Mutual information-based Contrastive Loss for Financial Time Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting. We propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements.
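A schematic of that two-stage recipe; the encoder below is a crude placeholder for the paper's contrastively trained encoder:

```python
# Two-stage recipe: freeze a learned encoder, then fit a simple classifier
# on its representations. The encoder is a stand-in, not the paper's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(windows):
    # Placeholder for a pretrained contrastive encoder: summary statistics.
    return np.stack([windows.mean(1), windows.std(1), windows[:, -1]], axis=1)

X = np.random.randn(500, 32)             # 500 windows of 32 returns (toy data)
y = (X[:, -5:].mean(1) > 0).astype(int)  # toy up/down movement label
clf = LogisticRegression().fit(encode(X), y)
print(clf.score(encode(X), y))
```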
arXiv Detail & Related papers (2020-02-18T15:24:33Z)