From Images to Signals: Are Large Vision Models Useful for Time Series Analysis?
- URL: http://arxiv.org/abs/2505.24030v1
- Date: Thu, 29 May 2025 22:05:28 GMT
- Title: From Images to Signals: Are Large Vision Models Useful for Time Series Analysis?
- Authors: Ziming Zhao, ChengAo Shen, Hanghang Tong, Dongjin Song, Zhigang Deng, Qingsong Wen, Jingchao Ni,
- Abstract summary: Transformer-based models have gained increasing attention in time series research. As the field moves toward multi-modality, Large Vision Models (LVMs) are emerging as a promising direction.
- Score: 62.58235852194057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformer-based models have gained increasing attention in time series research, driving interest in Large Language Models (LLMs) and foundation models for time series analysis. As the field moves toward multi-modality, Large Vision Models (LVMs) are emerging as a promising direction. In the past, the effectiveness of Transformers and LLMs in time series has been debated. When it comes to LVMs, a similar question arises: are LVMs truly useful for time series analysis? To address it, we design and conduct the first principled study involving 4 LVMs, 8 imaging methods, 18 datasets and 26 baselines across both high-level (classification) and low-level (forecasting) tasks, with extensive ablation analysis. Our findings indicate LVMs are indeed useful for time series classification but face challenges in forecasting. Although effective, the contemporary best LVM forecasters are limited to specific types of LVMs and imaging methods, exhibit a bias toward forecasting periods, and have limited ability to utilize long look-back windows. We hope our findings can serve as a cornerstone for future research on LVM- and multimodal-based solutions to different time series tasks.
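The abstract refers to "imaging methods" that turn a numerical time series into a picture an LVM can consume. The paper does not list its 8 methods here, but one widely used technique in this literature is the Gramian Angular Field, sketched below with NumPy only (an illustrative example, not necessarily one of the paper's methods):

```python
import numpy as np

def gramian_angular_field(series: np.ndarray) -> np.ndarray:
    """Encode a 1-D time series as a Gramian Angular Summation Field image."""
    # Rescale the series to [-1, 1] so each value is a valid cosine.
    lo, hi = series.min(), series.max()
    x = 2.0 * (series - lo) / (hi - lo) - 1.0
    # Interpret rescaled values as angles in polar coordinates.
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF[i, j] = cos(phi_i + phi_j): a (T, T) image capturing
    # pairwise temporal correlations.
    return np.cos(phi[:, None] + phi[None, :])

img = gramian_angular_field(np.sin(np.linspace(0, 4 * np.pi, 64)))
```

A series of length T becomes a T x T single-channel image, which can then be resized and fed to a pretrained vision backbone like any natural image.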
Related papers
- Harnessing Vision-Language Models for Time Series Anomaly Detection [9.257985820123]
Time-series anomaly detection (TSAD) has played a vital role in a variety of fields, including healthcare, finance, and industrial monitoring. Prior methods, which mainly focus on training domain-specific models on numerical data, lack the visual-temporal reasoning capacity that human experts have to identify contextual anomalies. We propose a two-stage solution, with ViT4TS, a vision-screening stage built on a relatively lightweight pretrained vision encoder, and VLM4TS, a VLM-based stage that integrates global temporal context and VLM reasoning capacity.
arXiv Detail & Related papers (2025-06-07T15:27:30Z) - Efficient Model Selection for Time Series Forecasting via LLMs [52.31535714387368]
We propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs.
arXiv Detail & Related papers (2025-04-02T20:33:27Z) - Can Multimodal LLMs Perform Time Series Anomaly Detection? [55.534264764673296]
We propose the VisualTimeAnomaly benchmark to evaluate MLLMs in time series anomaly detection (TSAD). Our approach transforms numerical time series data into image format and feeds these images into various MLLMs. In total, VisualTimeAnomaly contains 12.4k time series images spanning 3 scenarios and 3 anomaly granularities with 9 anomaly types across 8 MLLMs.
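The summary above describes rendering numerical series as images before passing them to multimodal LLMs. The exact rendering used by VisualTimeAnomaly is not given here; a minimal sketch of one plausible approach, rasterizing a series into a binary line-plot image, is:

```python
import numpy as np

def series_to_image(series: np.ndarray, height: int = 32) -> np.ndarray:
    """Rasterize a 1-D series into a (height, len(series)) binary image."""
    lo, hi = series.min(), series.max()
    # Map each value to a pixel row; row 0 is the top (maximum value).
    rows = ((hi - series) / (hi - lo) * (height - 1)).round().astype(int)
    img = np.zeros((height, len(series)), dtype=np.uint8)
    img[rows, np.arange(len(series))] = 1  # one lit pixel per time step
    return img

img = series_to_image(np.sin(np.linspace(0, 2 * np.pi, 100)))
```

In practice such benchmarks typically render with a plotting library (axes, anti-aliasing, fixed resolution); this version only shows the core value-to-pixel mapping.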
arXiv Detail & Related papers (2025-02-25T03:37:43Z) - Harnessing Vision Models for Time Series Analysis: A Survey [72.09716244582684]
This survey discusses the advantages of vision models over LLMs in time series analysis. It provides a comprehensive and in-depth overview of the existing methods, with dual views of detailed taxonomy. We address the challenges in the pre- and post-processing steps involved in this framework.
arXiv Detail & Related papers (2025-02-13T00:42:11Z) - Large Language Models are Few-shot Multivariate Time Series Classifiers [23.045734479292356]
Large Language Models (LLMs) have been extensively applied in time series analysis. Yet, their utility in few-shot classification (a crucial training scenario) is underexplored. We aim to leverage the extensive pre-trained knowledge in LLMs to overcome the data scarcity problem.
arXiv Detail & Related papers (2025-01-30T03:59:59Z) - Empowering Time Series Analysis with Large Language Models: A Survey [24.202539098675953]
We provide a systematic overview of methods that leverage large language models for time series analysis.
Specifically, we first state the challenges and motivations of applying language models in the context of time series.
Next, we categorize existing methods into different groups (i.e., direct query, tokenization, prompt design, fine-tune, and model integration) and highlight the key ideas within each group.
arXiv Detail & Related papers (2024-02-05T16:46:35Z) - Position: What Can Large Language Models Tell Us about Time Series Analysis [69.70906014827547]
We argue that current large language models (LLMs) have the potential to revolutionize time series analysis.
Such advancement could unlock a wide range of possibilities, including time series modality switching and question answering.
arXiv Detail & Related papers (2024-02-05T04:17:49Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z) - Latent Variable Models in the Era of Industrial Big Data: Extension and Beyond [7.361977372607915]
Latent variable models (LVMs) and their counterparts account for a major share and play a vital role in many industrial modeling areas.
We propose a novel concept called lightweight deep LVM (LDLVM)
arXiv Detail & Related papers (2022-08-23T09:58:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences of its use.