IMTS is Worth Time $\times$ Channel Patches: Visual Masked Autoencoders for Irregular Multivariate Time Series Prediction
- URL: http://arxiv.org/abs/2505.22815v2
- Date: Fri, 30 May 2025 02:28:59 GMT
- Title: IMTS is Worth Time $\times$ Channel Patches: Visual Masked Autoencoders for Irregular Multivariate Time Series Prediction
- Authors: Zhangyi Hu, Jiemin Wu, Hua Xu, Mingqian Liao, Ninghui Feng, Bo Gao, Songning Lai, Yutao Yue
- Abstract summary: We propose VIMTS, a framework adapting Visual MAE for IMTS forecasting. To mitigate the effect of missing values, VIMTS first processes IMTS along the timeline into feature patches at equal intervals. It then leverages the visual MAE's capability in handling sparse multi-channel data for patch reconstruction, followed by a coarse-to-fine technique to generate precise predictions.
- Score: 9.007111482874135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Irregular Multivariate Time Series (IMTS) forecasting is challenging due to the unaligned nature of multi-channel signals and the prevalence of extensive missing data. Existing methods struggle to capture reliable temporal patterns from such data because of the significant missing values. While pre-trained foundation models show potential for addressing these challenges, they are typically designed for Regularly Sampled Time Series (RTS). Motivated by the visual Masked AutoEncoder's (MAE) powerful capability for modeling sparse multi-channel information and its success in RTS forecasting, we propose VIMTS, a framework adapting the visual MAE for IMTS forecasting. To mitigate the effect of missing values, VIMTS first processes IMTS along the timeline into feature patches at equal intervals. These patches are then complemented using learned cross-channel dependencies. VIMTS then leverages the visual MAE's capability in handling sparse multi-channel data for patch reconstruction, followed by a coarse-to-fine technique to generate precise predictions from focused contexts. In addition, we integrate self-supervised learning for improved IMTS modeling by adapting the visual MAE to IMTS data. Extensive experiments demonstrate VIMTS's superior performance and few-shot capability, advancing the application of visual foundation models in more general time series tasks. Our code is available at https://github.com/WHU-HZY/VIMTS.
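As a concrete illustration of the equal-interval patching step described in the abstract, the sketch below bins unaligned per-channel observations into a time × channel grid with an observation mask; the function name, mean pooling, and fixed window are illustrative assumptions, not code from the VIMTS repository:

```python
import numpy as np

def patchify_imts(observations, t_start, t_end, num_patches):
    """Bin irregular per-channel observations into equal-interval time
    patches, keeping a mask of which patches actually contain data.

    observations: list over channels; each entry is an (n_i, 2) array of
                  (timestamp, value) pairs, unaligned across channels.
    Returns (patches, mask), both of shape (num_patches, num_channels);
    empty patches hold 0.0 and mask marks observed patches.
    """
    num_channels = len(observations)
    patches = np.zeros((num_patches, num_channels))
    mask = np.zeros((num_patches, num_channels), dtype=bool)
    edges = np.linspace(t_start, t_end, num_patches + 1)
    for c, obs in enumerate(observations):
        if len(obs) == 0:
            continue
        ts, vals = obs[:, 0], obs[:, 1]
        idx = np.clip(np.searchsorted(edges, ts, side="right") - 1,
                      0, num_patches - 1)
        for p in range(num_patches):
            sel = idx == p
            if sel.any():
                patches[p, c] = vals[sel].mean()  # mean pooling per patch
                mask[p, c] = True
    return patches, mask

# Toy IMTS with two unaligned channels
ch0 = np.array([[0.1, 1.0], [0.4, 2.0], [2.7, 3.0]])
ch1 = np.array([[1.5, -1.0]])
patches, mask = patchify_imts([ch0, ch1], t_start=0.0, t_end=3.0, num_patches=3)
print(patches)  # (3, 2) time x channel grid of patch features
print(mask)     # True where a patch actually contains observations
```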
Related papers
- Multi-Scale Finetuning for Encoder-based Time Series Foundation Models [56.503053716053]
Time series foundation models (TSFMs) demonstrate impressive zero-shot performance for time series forecasting. We argue that direct finetuning falls short of fully leveraging TSFMs' capabilities, often resulting in overfitting and suboptimal performance. We propose Multi-Scale FineTuning (MSFT), a simple yet general framework that explicitly integrates multi-scale modeling into the finetuning process.
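A toy sketch of multi-scale modeling during finetuning: build average-pooled views of each series at several temporal scales so the model sees both coarse and fine structure (the pooling choice and scales are illustrative assumptions, not the MSFT procedure):

```python
import torch

def multiscale_views(x, scales=(1, 2, 4)):
    """Average-pool a batch of series at several temporal scales so a
    single forecaster can be finetuned on coarse and fine views.
    x: (batch, length) tensor; length assumed divisible by each scale."""
    return [x.reshape(x.shape[0], -1, s).mean(dim=-1) for s in scales]

x = torch.arange(8.0).repeat(2, 1)                # (2, 8) toy batch
print([tuple(v.shape) for v in multiscale_views(x)])
# [(2, 8), (2, 4), (2, 2)]
```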
arXiv Detail & Related papers (2025-06-17T01:06:01Z)
- Time Series Representations for Classification Lie Hidden in Pretrained Vision Transformers [49.07665715422702]
We propose Time Vision Transformer (TiViT), a framework that converts time series into images. We show that TiViT achieves state-of-the-art performance on standard time series classification benchmarks. Our findings reveal a new direction for reusing vision representations in a non-visual domain.
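One simple recipe for turning a series into a ViT-ready image is to fold the 1-D signal into a 2-D grid and normalize it; the sketch below is an illustrative assumption, not necessarily the transformation TiViT uses:

```python
import numpy as np

def series_to_image(x, height):
    """Fold a 1-D series into a 2-D grayscale 'image' column by column,
    one simple recipe for feeding time series to a vision transformer.
    x: (length,) array with length assumed divisible by height."""
    img = x.reshape(-1, height).T                # (height, width)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)         # normalize to [0, 1]

x = np.sin(np.linspace(0, 12 * np.pi, 196))
img = series_to_image(x, height=14)
print(img.shape)  # (14, 14), ready to resize/tile for a pretrained ViT
```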
arXiv Detail & Related papers (2025-06-10T09:54:51Z)
- HyperIMTS: Hypergraph Neural Network for Irregular Multivariate Time Series Forecasting [24.29827089303662]
Irregular multivariate time series (IMTS) are characterized by irregular time intervals within variables and unaligned observations across variables. We propose HyperIMTS, a hypergraph neural network for IMTS forecasting. Experiments demonstrate HyperIMTS's competitive performance among state-of-the-art models in IMTS forecasting with low computational cost.
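One common way to encode such unaligned observations as a hypergraph is one node per observation with channel-level and timestamp-level hyperedges; the sketch below builds such an incidence matrix (the exact hyperedge design in HyperIMTS may differ):

```python
import numpy as np

def build_incidence(obs):
    """Build a node-hyperedge incidence matrix for IMTS observations.
    Each (channel, timestamp) observation is a node; one hyperedge per
    channel and one per distinct timestamp. A generic layout for
    irregular series, not the HyperIMTS paper's exact design.
    obs: list of (channel, timestamp) pairs."""
    channels = sorted({c for c, _ in obs})
    times = sorted({t for _, t in obs})
    H = np.zeros((len(obs), len(channels) + len(times)), dtype=int)
    for i, (c, t) in enumerate(obs):
        H[i, channels.index(c)] = 1                 # channel hyperedge
        H[i, len(channels) + times.index(t)] = 1    # timestamp hyperedge
    return H

obs = [(0, 0.1), (0, 0.4), (1, 0.4)]
print(build_incidence(obs))  # 3 nodes x (2 channel + 2 time) hyperedges
```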
arXiv Detail & Related papers (2025-05-23T03:27:04Z)
- A Time Series Multitask Framework Integrating a Large Language Model, Pre-Trained Time Series Model, and Knowledge Graph [1.3654846342364308]
Time series analysis is crucial in fields like finance, transportation, and industry. This paper proposes a novel time series multitask framework, called LTM, which integrates temporal features with textual descriptions. Experiments on benchmark datasets show that LTM significantly outperforms existing methods.
arXiv Detail & Related papers (2025-03-10T11:25:01Z)
- IMTS-Mixer: Mixer-Networks for Irregular Multivariate Time Series Forecasting [5.854515369288696]
We introduce IMTS-Mixer, a novel forecasting architecture designed specifically for IMTS. Our approach retains the core principles of TS mixer models while introducing innovative methods to transform IMTS into fixed-size matrix representations. Our results demonstrate that IMTS-Mixer establishes a new state-of-the-art in forecasting accuracy while also improving computational efficiency.
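The "core principles of TS mixer models" referred to above boil down to alternating MLPs that mix across time tokens and across channels; below is a generic sketch of one such block operating on a fixed-size matrix representation (dimensions and layer choices are illustrative assumptions, not the IMTS-Mixer code):

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One TS-mixer style block: an MLP mixing across time tokens,
    then an MLP mixing across channels, each with a residual path."""
    def __init__(self, num_tokens, num_channels, hidden=64):
        super().__init__()
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, hidden), nn.GELU(), nn.Linear(hidden, num_tokens))
        self.channel_mlp = nn.Sequential(
            nn.Linear(num_channels, hidden), nn.GELU(), nn.Linear(hidden, num_channels))
        self.norm1 = nn.LayerNorm(num_channels)
        self.norm2 = nn.LayerNorm(num_channels)

    def forward(self, x):                          # x: (batch, tokens, channels)
        # mix across time: transpose so the token axis is last for the MLP
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # mix across channels
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(2, 16, 4)                          # fixed-size matrix per sample
print(MixerBlock(16, 4)(x).shape)                  # torch.Size([2, 16, 4])
```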
arXiv Detail & Related papers (2025-02-17T14:06:36Z)
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose UniTST, a transformer-based model with a unified attention mechanism over flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance, as shown in our experiments on several time series forecasting datasets.
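A minimal sketch of unified attention over flattened patch tokens: patches from all channels and time positions form one token sequence, so a single self-attention layer can capture inter- and intra-series dependencies jointly (shapes and the single layer are illustrative assumptions, not the UniTST architecture):

```python
import torch
import torch.nn as nn

# Flatten (channel, patch) positions into one token sequence so every
# token can attend across both time and channels in a single layer.
batch, channels, num_patches, d_model = 2, 3, 8, 32
patch_tokens = torch.randn(batch, channels, num_patches, d_model)
flat = patch_tokens.reshape(batch, channels * num_patches, d_model)  # (B, C*P, D)
attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
out, _ = attn(flat, flat, flat)   # each token attends across time AND channels
print(out.shape)                  # torch.Size([2, 24, 32])
```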
arXiv Detail & Related papers (2024-06-07T14:39:28Z)
- UniTS: A Unified Multi-Task Time Series Model [31.675845788410246]
UniTS is a unified multi-task time series model that integrates predictive and generative tasks into a single framework.
UniTS is tested on 38 datasets across human activity sensors, healthcare, engineering, and finance.
arXiv Detail & Related papers (2024-02-29T21:25:58Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
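A toy version of that masking pretext task: randomly hide a fraction of points and train the autoencoder to reconstruct them (Ti-MAE masks embedded series rather than raw points; the ratio and zero-filling here are illustrative assumptions):

```python
import torch

def random_point_mask(x, mask_ratio=0.75):
    """Randomly mask a fraction of time points; the autoencoder is then
    trained to reconstruct the hidden points from the visible ones.
    x: (batch, length) tensor. Returns masked input and boolean mask."""
    mask = torch.rand_like(x) < mask_ratio        # True = hidden from encoder
    return x.masked_fill(mask, 0.0), mask

x = torch.randn(2, 10)
x_masked, mask = random_point_mask(x)
# Training loss is reconstruction error on the masked points only, e.g.
# loss = ((decoder(encoder(x_masked)) - x)[mask] ** 2).mean()
print(mask.float().mean())  # roughly 0.75
```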
arXiv Detail & Related papers (2023-01-21T03:20:23Z)
- LIFE: Learning Individual Features for Multivariate Time Series Prediction with Missing Values [71.52335136040664]
We propose a Learning Individual Features (LIFE) framework, which provides a new paradigm for MTS prediction with missing values.
LIFE generates reliable features for prediction by using the correlated dimensions as auxiliary information and suppressing the interference from uncorrelated dimensions with missing values.
Experiments on three real-world data sets verify the superiority of LIFE over existing state-of-the-art models.
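A toy rendition of that idea: estimate a target dimension from its sufficiently correlated dimensions only, ignoring uncorrelated ones (the correlation threshold and linear weighting are illustrative assumptions, not the LIFE model):

```python
import numpy as np

def correlated_feature(x, target_dim, threshold=0.5):
    """Estimate a target dimension from its correlated dimensions only,
    suppressing interference from uncorrelated ones.
    x: (time, dims) array, assumed free of missing values for simplicity."""
    corr = np.corrcoef(x.T)[target_dim]
    helpers = [d for d in range(x.shape[1])
               if d != target_dim and abs(corr[d]) >= threshold]
    if not helpers:
        return x[:, target_dim]
    # signed weights: negatively correlated helpers contribute flipped
    weights = corr[helpers] / np.abs(corr[helpers]).sum()
    return x[:, helpers] @ weights

t = np.linspace(0, 10, 200)
x = np.stack([np.sin(t),
              np.sin(t) + 0.1 * np.random.randn(200),  # correlated channel
              np.random.randn(200)], axis=1)           # uncorrelated channel
print(correlated_feature(x, target_dim=0).shape)       # (200,)
```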
arXiv Detail & Related papers (2021-09-30T04:53:24Z)