Time Series as Images: Vision Transformer for Irregularly Sampled Time Series
- URL: http://arxiv.org/abs/2303.12799v2
- Date: Mon, 30 Oct 2023 22:16:01 GMT
- Title: Time Series as Images: Vision Transformer for Irregularly Sampled Time Series
- Authors: Zekun Li, Shiyang Li, Xifeng Yan
- Abstract summary: This paper introduces a novel perspective by converting irregularly sampled time series into line graph images.
We then utilize powerful pre-trained vision transformers for time series classification in the same way as image classification.
Remarkably, despite its simplicity, our approach outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Irregularly sampled time series are increasingly prevalent, particularly in
medical domains. While various specialized methods have been developed to
handle these irregularities, effectively modeling their complex dynamics and
pronounced sparsity remains a challenge. This paper introduces a novel
perspective by converting irregularly sampled time series into line graph
images, then utilizing powerful pre-trained vision transformers for time series
classification in the same way as image classification. This method not only
largely simplifies specialized algorithm designs but also presents the
potential to serve as a universal framework for time series modeling.
Remarkably, despite its simplicity, our approach outperforms state-of-the-art
specialized algorithms on several popular healthcare and human activity
datasets. Especially in the rigorous leave-sensors-out setting where a portion
of variables is omitted during testing, our method exhibits strong robustness
against varying degrees of missing observations, achieving an impressive
improvement of 42.8% in absolute F1 score points over leading specialized
baselines even with half the variables masked. Code and data are available at
https://github.com/Leezekun/ViTST
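The core conversion step can be sketched with plain numpy. This is an illustrative stand-in, not the authors' released pipeline: ViTST renders line graphs with a standard plotting library and feeds them to a pre-trained vision transformer, both of which are omitted here. The function name and image size below are assumptions for the sketch.

```python
import numpy as np

def series_to_line_image(times, values, size=64):
    """Rasterize one irregularly sampled channel as a line-graph image.

    A minimal stand-in for the paper's plotting step: observations are
    linearly interpolated onto a regular pixel grid, and one curve pixel
    is drawn per time column (white curve on a black background).
    """
    t = np.asarray(times, dtype=float)
    t = (t - t.min()) / (t.max() - t.min())
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)
    grid_t = np.linspace(0.0, 1.0, size)
    grid_v = np.interp(grid_t, t, v)  # fill gaps between sparse observations
    img = np.zeros((size, size), dtype=np.uint8)
    rows = ((1.0 - grid_v) * (size - 1)).round().astype(int)
    img[rows, np.arange(size)] = 255
    return img

# three unevenly spaced observations -> a 64x64 grayscale image
img = series_to_line_image([0.0, 1.0, 4.0], [0.2, 0.9, 0.5])
print(img.shape)  # (64, 64)
```

The resulting array could then be resized and stacked per variable before being passed to any image classifier.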
Related papers
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts
This paper introduces Moirai-MoE, using a single input/output projection layer while delegating the modeling of diverse time series patterns to the sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- Training-Free Time-Series Anomaly Detection: Leveraging Image Foundation Models
We propose an image-based, training-free time-series anomaly detection (ITF-TAD) approach.
ITF-TAD converts time-series data into images using wavelet transform and compresses them into a single representation, leveraging image foundation models for anomaly detection.
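The wavelet-to-image idea can be illustrated with a toy scalogram. This is a rough, numpy-only sketch and not ITF-TAD's actual transform or compression step; the Morlet wavelet, scales, and function name are all assumptions for illustration.

```python
import numpy as np

def morlet_scalogram(x, scales, w0=5.0):
    """Toy continuous wavelet transform: |CWT| magnitudes as a 2D image.

    One row per scale, one column per time step, giving the kind of
    time-scale image an image model could then consume.
    """
    x = np.asarray(x, dtype=float)
    image = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        # complex Morlet wavelet, truncated to +/- 4 standard deviations
        t = np.arange(-4 * s, 4 * s + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        image[i] = np.abs(np.convolve(x, psi, mode="same"))
    return image

sig = np.sin(2 * np.pi * 0.05 * np.arange(256))  # slow sinusoid
img = morlet_scalogram(sig, scales=[2, 4, 8, 16])
print(img.shape)  # (4, 256)
```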
arXiv Detail & Related papers (2024-08-27T03:12:08Z)
- From Pixels to Predictions: Spectrogram and Vision Transformer for Better Time Series Forecasting
Time series forecasting plays a crucial role in decision-making across various domains.
Recent studies have explored image-driven approaches using computer vision models to address these challenges.
We propose a novel approach that uses time-frequency spectrograms as the visual representation of time series data.
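A spectrogram representation of this kind can be produced with a sliding FFT. The sketch below is a generic short-time Fourier transform in numpy, not the paper's specific preprocessing; the window length, hop size, and log scaling are illustrative choices.

```python
import numpy as np

def stft_spectrogram(x, win=64, hop=32):
    """Log-magnitude spectrogram of a 1D series via a sliding windowed FFT.

    Rows are frequency bins, columns are time frames - a time-frequency
    image a vision model could consume.
    """
    x = np.asarray(x, dtype=float)
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    mags = np.abs(np.fft.rfft(np.stack(frames), axis=1))  # (frames, win//2+1)
    return np.log1p(mags).T  # (freq bins, frames)

x = np.sin(2 * np.pi * 0.1 * np.arange(512))
spec = stft_spectrogram(x)
print(spec.shape)  # (33, 15)
```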
arXiv Detail & Related papers (2024-03-17T00:14:29Z)
- Unified Training of Universal Time Series Forecasting Transformers
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Compatible Transformer for Irregularly Sampled Multivariate Time Series
We propose a transformer-based encoder to achieve comprehensive temporal-interaction feature learning for each individual sample.
We conduct extensive experiments on 3 real-world datasets and validate that the proposed CoFormer significantly and consistently outperforms existing methods.
arXiv Detail & Related papers (2023-10-17T06:29:09Z)
- TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis
Time series analysis is of immense importance in applications, such as weather forecasting, anomaly detection, and action recognition.
Previous methods attempt to accomplish this directly from the 1D time series.
We unravel the complex temporal variations into multiple intraperiod- and interperiod-variations.
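The 1D-to-2D folding can be sketched in a few lines: detect a dominant period from the FFT amplitude spectrum, then reshape the series into a (cycles x period) grid so intraperiod variation runs along rows and interperiod variation along columns. This is a simplified, numpy-only version of the period discovery TimesNet describes, not its actual implementation.

```python
import numpy as np

def fold_by_dominant_period(x):
    """Reshape a 1D series into a 2D (cycles x period) array."""
    x = np.asarray(x, dtype=float)
    amps = np.abs(np.fft.rfft(x))
    amps[0] = 0.0                 # ignore the DC component
    freq = int(np.argmax(amps))   # dominant frequency index
    period = len(x) // freq       # samples per cycle
    cycles = len(x) // period
    return x[:cycles * period].reshape(cycles, period)

x = np.sin(2 * np.pi * np.arange(128) / 16)  # period of 16 samples
grid = fold_by_dominant_period(x)
print(grid.shape)  # (8, 16)
```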
arXiv Detail & Related papers (2022-10-05T12:19:51Z)
- HyperTime: Implicit Neural Representation for Time Series
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Multi-Time Attention Networks for Irregularly Sampled Time Series
Irregular sampling occurs in many time series modeling applications.
We propose a new deep learning framework for this setting that we call Multi-Time Attention Networks.
Our results show that our approach performs as well or better than a range of baseline and recently proposed models.
arXiv Detail & Related papers (2021-01-25T18:57:42Z)
- Learning from Irregularly-Sampled Time Series: A Missing Data Perspective
Irregularly-sampled time series occur in many domains including healthcare.
We model irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function.
We propose learning methods for this framework based on variational autoencoders and generative adversarial networks.
arXiv Detail & Related papers (2020-08-17T20:01:55Z)
- Temporal signals to images: Monitoring the condition of industrial assets with deep learning image processing algorithms
This paper reviews the signal to image encoding approaches found in the literature.
We propose modifications to some of their original formulations to make them more robust to the variability in large datasets.
The selected encoding methods are Gramian Angular Field, Markov Transition Field, recurrence plot, grey scale encoding, spectrogram, and scalogram.
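Of the encodings listed, the Gramian Angular Field has a particularly compact closed form. The sketch below implements the standard summation variant (GASF) in numpy; it follows the widely used formulation rather than this paper's modified one.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field of a 1D series.

    Values are rescaled to [-1, 1], mapped to angles phi = arccos(x),
    and the (i, j) pixel is cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

gaf = gramian_angular_field(np.sin(np.linspace(0, 2 * np.pi, 32)))
print(gaf.shape)  # (32, 32)
```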
arXiv Detail & Related papers (2020-05-14T14:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.