ViTime: A Visual Intelligence-Based Foundation Model for Time Series Forecasting
- URL: http://arxiv.org/abs/2407.07311v3
- Date: Sat, 08 Feb 2025 05:05:56 GMT
- Title: ViTime: A Visual Intelligence-Based Foundation Model for Time Series Forecasting
- Authors: Luoxiao Yang, Yun Wang, Xinqi Fan, Israel Cohen, Jingdong Chen, Yue Zhao, Zijun Zhang
- Abstract summary: Time series forecasting (TSF) holds great practical value in various fields, including power and energy, transportation, etc.
This paper offers a pioneering study in developing a TSF foundation model and proposes a vision intelligence-powered framework, ViTime, for the first time.
- Score: 38.87384888881476
- Abstract: Time series forecasting (TSF) holds great practical value in various fields, including power and energy, transportation, etc. TSF methods have been studied based on knowledge ranging from classical statistics to modern deep learning. Yet, all of them were developed on one fundamental concept: numerical data fitting. Thus, the resulting models have long been known for being problem-specific and lacking application generalizability. A TSF foundation model serving TSF tasks across different applications can reverse such an impression. The central question is then how to develop such a TSF foundation model. This paper offers a pioneering study in developing a TSF foundation model and proposes a vision intelligence-powered framework, ViTime, for the first time. In ViTime, a method synthesizing authentic time series periodic and trend patterns is developed to enrich sample pattern diversity. A deep architecture operating TSF in image metric space is designed to achieve significantly enhanced TSF generalizability. Extensive experiments demonstrate ViTime's SOTA performance across multiple settings. In zero-shot scenarios, ViTime outperforms TimesFM by 9-15%. With just 10% of the fine-tuning data, ViTime surpasses both foundation models and fully supervised benchmarks trained on complete datasets, and this performance gap widens further at 100% fine-tuning. Additionally, ViTime exhibits exceptional robustness, handling missing data without imputation and outperforming TimesFM by 20-30% under various data perturbations.
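The "image metric space" idea above can be illustrated with a minimal sketch: rasterize a 1-D series into a binary image so that a vision model can operate on pixels rather than raw numbers. This is an illustrative assumption, not ViTime's actual encoding; the function name `series_to_binary_image` and the fixed `height` parameter are hypothetical.

```python
import numpy as np

def series_to_binary_image(x, height=64):
    """Rasterize a 1-D series into a (height, len(x)) binary image.

    Each time step becomes one column; the pixel row corresponding to
    the min-max-normalized, quantized value is set to 1.
    """
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    norm = (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)
    rows = np.clip((norm * (height - 1)).round().astype(int), 0, height - 1)
    img = np.zeros((height, x.size), dtype=np.uint8)
    img[height - 1 - rows, np.arange(x.size)] = 1  # origin at the bottom
    return img

img = series_to_binary_image(np.sin(np.linspace(0, 4 * np.pi, 128)))
print(img.shape)  # (64, 128): one column per time step
```

A forecaster in this space would then predict future image columns, which are decoded back to numbers by reading each column's active pixel row.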
Related papers
- General Time-series Model for Universal Knowledge Representation of Multivariate Time-Series data [61.163542597764796]
We show that time series with different time granularities (or corresponding frequency resolutions) exhibit distinct joint distributions in the frequency domain.
A novel Fourier knowledge attention mechanism is proposed to enable learning time-aware representations from both the temporal and frequency domains.
An autoregressive blank infilling pre-training framework is incorporated into time series analysis for the first time, leading to a task-agnostic generative pre-training strategy.
arXiv Detail & Related papers (2025-02-05T15:20:04Z) - FoundTS: Comprehensive and Unified Benchmarking of Foundation Models for Time Series Forecasting [44.33565276128137]
Time Series Forecasting (TSF) is a key functionality in numerous fields, including finance, weather services, and energy management.
Foundation models exhibit promising inference capabilities on new or unseen data.
We propose a new benchmark, FoundTS, to enable thorough and fair evaluation and comparison of such models.
arXiv Detail & Related papers (2024-10-15T17:23:49Z) - Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, using a single input/output projection layer while delegating the modeling of diverse time series patterns to the sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z) - GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation [90.53485251837235]
Time series foundation models excel in zero-shot forecasting, handling diverse tasks without explicit training.
GIFT-Eval is a pioneering benchmark aimed at promoting evaluation across diverse datasets.
GIFT-Eval encompasses 23 datasets over 144,000 time series and 177 million data points.
arXiv Detail & Related papers (2024-10-14T11:29:38Z) - VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters [27.80286758290421]
This paper explores a new path to building a TSF foundation model from rich, high-quality natural images.
By reformulating TSF as an image reconstruction task, we bridge the gap between image pre-training and TSF downstream tasks.
The proposed VisionTS could achieve better zero-shot forecast performance than existing TSF foundation models.
arXiv Detail & Related papers (2024-08-30T12:51:55Z) - Deep Time Series Models: A Comprehensive Survey and Benchmark [74.28364194333447]
Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv Detail & Related papers (2024-07-18T08:31:55Z) - Understanding Different Design Choices in Training Large Time Series Models [71.20102277299445]
Training Large Time Series Models (LTSMs) on heterogeneous time series data poses unique challenges.
We propose time series prompt, a novel statistical prompting strategy tailored to time series data.
We introduce LTSM-bundle, which bundles the best design choices we have identified.
arXiv Detail & Related papers (2024-06-20T07:09:19Z) - TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting [24.834846119163885]
We propose a novel framework, TEMPO, that can effectively learn time series representations.
TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains.
arXiv Detail & Related papers (2023-10-08T00:02:25Z) - FrAug: Frequency Domain Augmentation for Time Series Forecasting [6.508992154478217]
Data augmentation (DA) has become a de facto solution to expand training data size for deep learning.
This paper proposes simple yet effective frequency domain augmentation techniques that ensure the semantic consistency of augmented data-label pairs in forecasting.
Our results show that FrAug can boost the forecasting accuracy of TSF models in most cases.
arXiv Detail & Related papers (2023-02-18T11:25:42Z)
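FrAug's core idea, keeping the augmented input window and its forecast label semantically consistent by transforming them jointly in the frequency domain, can be sketched as follows. This is an illustration under assumptions: `freq_mask`, the `mask_rate` parameter, and the exact masking scheme are hypothetical stand-ins, not the paper's reference implementation.

```python
import numpy as np

def freq_mask(x, y, mask_rate=0.1, rng=None):
    """Frequency-masking augmentation sketch.

    Concatenate the input window x and its forecast target y, zero a
    random subset of frequency bins, and invert the transform, so the
    augmented data-label pair stays consistent.
    """
    rng = np.random.default_rng(rng)
    xy = np.concatenate([x, y])           # augment input and label together
    spec = np.fft.rfft(xy)
    mask = rng.random(spec.shape[0]) < mask_rate
    spec[mask] = 0                        # drop the selected frequency components
    aug = np.fft.irfft(spec, n=xy.size)
    return aug[: x.size], aug[x.size :]

series = np.sin(np.linspace(0, 8 * np.pi, 96))
x_aug, y_aug = freq_mask(series[:72], series[72:], mask_rate=0.2, rng=0)
print(x_aug.shape, y_aug.shape)  # (72,) (24,)
```

Masking input and label in one transform is what preserves the forecasting semantics; masking them independently could decorrelate the pair.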
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content above (including all information) and is not responsible for any consequences of its use.