A Survey on Time-Series Pre-Trained Models
- URL: http://arxiv.org/abs/2305.10716v1
- Date: Thu, 18 May 2023 05:27:46 GMT
- Title: A Survey on Time-Series Pre-Trained Models
- Authors: Qianli Ma, Zhen Liu, Zhenjing Zheng, Ziyang Huang, Siying Zhu,
Zhongzhong Yu, and James T. Kwok
- Abstract summary: Time-Series Mining (TSM) shows great potential in practical applications.
Deep learning models that rely on massive labeled data have been utilized for TSM successfully.
Recently, Pre-Trained Models have gradually attracted attention in the time series domain.
- Score: 34.98332094625603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-Series Mining (TSM) is an important research area since it shows great
potential in practical applications. Deep learning models that rely on massive
labeled data have been utilized for TSM successfully. However, constructing a
large-scale well-labeled dataset is difficult due to data annotation costs.
Recently, Pre-Trained Models have gradually attracted attention in the time
series domain due to their remarkable performance in computer vision and
natural language processing. In this survey, we provide a comprehensive review
of Time-Series Pre-Trained Models (TS-PTMs), aiming to guide the
understanding, application, and study of TS-PTMs. Specifically, we first
briefly introduce the
typical deep learning models employed in TSM. Then, we give an overview of
TS-PTMs according to the pre-training techniques. The main categories we
explore include supervised, unsupervised, and self-supervised TS-PTMs. Further,
extensive experiments are conducted to analyze the advantages and disadvantages
of transfer learning strategies, Transformer-based models, and representative
TS-PTMs. Finally, we point out some potential directions of TS-PTMs for future
work.
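To make the taxonomy above concrete, the sketch below shows one common self-supervised pre-training objective for time series: masked reconstruction with a small Transformer encoder. The architecture, mask ratio, and toy data are illustrative assumptions, not choices prescribed by the survey.

```python
# Minimal sketch of masked-reconstruction pre-training, a common
# self-supervised TS-PTM objective. Sizes, mask ratio, and the toy
# batch are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedTSEncoder(nn.Module):
    def __init__(self, n_channels: int, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)            # per-step embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_channels)            # reconstruct raw values

    def forward(self, x):                                     # x: (batch, time, channels)
        return self.head(self.encoder(self.proj(x)))

def pretrain_step(model, x, mask_ratio=0.15):
    """Zero out random time steps and score reconstruction on those steps."""
    mask = torch.rand(x.shape[:2]) < mask_ratio               # (batch, time)
    x_in = x.clone()
    x_in[mask] = 0.0                                          # hide the masked steps
    recon = model(x_in)
    return ((recon - x) ** 2)[mask].mean()                    # MSE on masked steps only

model = MaskedTSEncoder(n_channels=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 128, 3)                                    # toy batch: 8 series, 128 steps
opt.zero_grad()
loss = pretrain_step(model, x)
loss.backward()
opt.step()
```

After pre-training, the encoder (without the reconstruction head) can be reused as a frozen or fine-tuned feature extractor for downstream TSM tasks.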
Related papers
- Understanding Different Design Choices in Training Large Time Series Models [71.20102277299445]
Training Large Time Series Models (LTSMs) on heterogeneous time series data poses unique challenges.
We propose "time series prompt", a novel statistical prompting strategy tailored to time series data.
We introduce LTSM-bundle, which bundles the best design choices we have identified.
arXiv Detail & Related papers (2024-06-20T07:09:19Z)
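As a rough illustration of statistical prompting for the entry above, the sketch below prepends summary statistics of a series (mean, standard deviation, min, max, trend slope) as extra tokens before the raw values. The paper's actual prompt design may differ; the statistic set and shapes are assumptions.

```python
# Hedged sketch of a "statistical prompt": summary statistics prepended
# as extra tokens ahead of the raw series. Which statistics to use is an
# illustrative assumption, not the paper's specification.
import torch

def time_series_prompt(x: torch.Tensor) -> torch.Tensor:
    """x: (time, channels). Returns (n_stats, channels) prompt tokens."""
    t = torch.arange(x.shape[0], dtype=x.dtype).unsqueeze(1)
    # Least-squares slope per channel, a simple trend summary.
    slope = ((t - t.mean()) * (x - x.mean(0))).sum(0) / ((t - t.mean()) ** 2).sum()
    return torch.stack([x.mean(0), x.std(0), x.min(0).values, x.max(0).values, slope])

x = torch.randn(128, 3)
prompted = torch.cat([time_series_prompt(x), x])   # prompt tokens precede the series
print(prompted.shape)                              # torch.Size([133, 3])
```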
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
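The sketch below illustrates, under assumptions, how the tasks named above can all be cast as "given context patches, generate target patches"; Timer's actual tokenization and model are not reproduced here.

```python
# Hedged sketch of casting several tasks into one generative format over
# fixed-length patches, in the spirit of a unified generative setup.
import numpy as np

def to_patches(series: np.ndarray, patch_len: int = 16) -> np.ndarray:
    n = len(series) // patch_len
    return series[: n * patch_len].reshape(n, patch_len)

series = np.sin(np.linspace(0, 20, 256))
patches = to_patches(series)                      # (16, 16) patch tokens

# Forecasting: context = past patches, target = future patches.
forecast_ctx, forecast_tgt = patches[:-4], patches[-4:]

# Imputation: context = observed patches, target = the missing ones.
missing = [5, 9]
observed_ctx = np.delete(patches, missing, axis=0)
imputation_tgt = patches[missing]

# Anomaly detection: generate the expected next patch and flag a large
# deviation between the generated and the observed patch as anomalous.
```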
- Large Pre-trained time series models for cross-domain Time series analysis tasks [20.228846068418765]
We propose a novel method of adaptive segmentation that automatically identifies the optimal dataset-specific segmentation strategy during pre-training.
This enables LPTM to perform similarly to, or better than, domain-specific state-of-the-art models when fine-tuned on different downstream time-series analysis tasks and under zero-shot settings.
arXiv Detail & Related papers (2023-11-19T20:16:16Z)
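Purely as a hypothetical stand-in for the adaptive segmentation idea above (LPTM learns its segmentation during pre-training, which is not reproduced here), the sketch below selects a dataset-specific segment length with a simple within-segment-variance criterion.

```python
# Illustrative stand-in: pick the segment length whose segments are most
# homogeneous. The criterion is an assumption, not LPTM's learned scorer.
import numpy as np

def segment_score(series: np.ndarray, seg_len: int) -> float:
    n = len(series) // seg_len
    segs = series[: n * seg_len].reshape(n, seg_len)
    return float(segs.var(axis=1).mean())         # lower = more homogeneous segments

def pick_segmentation(series: np.ndarray, candidates=(8, 16, 32, 64)) -> int:
    return min(candidates, key=lambda L: segment_score(series, L))

series = np.sin(np.linspace(0, 12, 512)) + 0.05 * np.random.randn(512)
print(pick_segmentation(series))                  # dataset-specific segment length
```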
- Exploring Progress in Multivariate Time Series Forecasting: Comprehensive Benchmarking and Heterogeneity Analysis [72.18987459587682]
We introduce BasicTS, a benchmark designed for fair comparisons in MTS forecasting.
We highlight the heterogeneity among MTS datasets and classify them based on temporal and spatial characteristics.
arXiv Detail & Related papers (2023-10-09T19:52:22Z)
- TRAM: Benchmarking Temporal Reasoning for Large Language Models [12.112914393948415]
We introduce TRAM, a temporal reasoning benchmark composed of ten datasets.
We evaluate popular language models like GPT-4 and Llama2 in zero-shot and few-shot scenarios.
Our findings indicate that the best-performing model lags significantly behind human performance.
arXiv Detail & Related papers (2023-10-02T00:59:07Z)
- Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey [66.18478838828231]
Multi-modal pre-trained big models have drawn increasing attention in recent years.
This paper introduces the background of multi-modal pre-training by reviewing conventional deep learning and pre-training work in natural language processing, computer vision, and speech.
Then, we introduce the task definition, key challenges, and advantages of multi-modal pre-training models (MM-PTMs), and discuss MM-PTMs with a focus on data, objectives, networks, and knowledge-enhanced pre-training.
arXiv Detail & Related papers (2023-02-20T15:34:03Z)
- Modeling Time-Series and Spatial Data for Recommendations and Other Applications [1.713291434132985]
We address the problems that may arise due to the poor quality of continuous-time event sequence (CTES) data being fed into a recommender system.
To improve the quality of the CTES data, we address a fundamental problem of overcoming missing events in temporal sequences.
We extend the abilities of these models to design solutions for large-scale CTES retrieval and human activity prediction.
arXiv Detail & Related papers (2022-12-25T09:34:15Z)
- Pre-training Enhanced Spatial-temporal Graph Neural Network for Multivariate Time Series Forecasting [13.441945545904504]
We propose a novel framework in which a spatial-temporal graph neural network (STGNN) is Enhanced by a scalable time series Pre-training model (STEP).
Specifically, we design a pre-training model to efficiently learn temporal patterns from very long-term history time series.
Our framework is capable of significantly enhancing downstream STGNNs, and our pre-training model aptly captures temporal patterns.
arXiv Detail & Related papers (2022-06-18T04:24:36Z)
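A minimal sketch of the STEP-style pipeline above, assuming a frozen pre-trained temporal encoder whose embeddings of long per-node history augment a small downstream forecaster; all modules, names, and shapes below are illustrative stand-ins, not the paper's architecture.

```python
# Hedged sketch: a frozen "pre-trained" encoder compresses long per-node
# history into embeddings consumed by a tiny downstream forecaster.
import torch
import torch.nn as nn

class LongHistoryEncoder(nn.Module):
    """Stand-in for a pre-trained model over very long history."""
    def __init__(self, hist_len: int, d: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hist_len, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, hist):                      # hist: (nodes, hist_len)
        return self.net(hist)                     # (nodes, d)

class TinyForecaster(nn.Module):
    """Downstream model: recent window + pre-trained embedding -> next value."""
    def __init__(self, window: int, d: int = 32):
        super().__init__()
        self.out = nn.Linear(window + d, 1)

    def forward(self, recent, emb):               # (nodes, window), (nodes, d)
        return self.out(torch.cat([recent, emb], dim=-1)).squeeze(-1)

nodes, hist_len, window = 10, 2016, 12            # illustrative: multi-day history
encoder = LongHistoryEncoder(hist_len).eval()     # pretend it was pre-trained
for p in encoder.parameters():
    p.requires_grad_(False)                       # freeze the pre-trained part
forecaster = TinyForecaster(window)

hist = torch.randn(nodes, hist_len)
recent = torch.randn(nodes, window)
pred = forecaster(recent, encoder(hist))          # (nodes,) next-step forecast
```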
- Pre-Trained Models: Past, Present and Future [126.21572378910746]
Large-scale pre-trained models (PTMs) have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
By storing knowledge in huge numbers of parameters and fine-tuning on specific tasks, PTMs allow the rich knowledge implicitly encoded in those parameters to benefit a variety of downstream tasks.
It is now the consensus of the AI community to adopt PTMs as the backbone for downstream tasks rather than learning models from scratch.
arXiv Detail & Related papers (2021-06-14T02:40:32Z)
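The pre-train/fine-tune recipe that the surveyed PTM papers share can be summarized in a short sketch: load a pre-trained backbone, attach a fresh task head, freeze the backbone (or fine-tune it at a small learning rate), and train on the downstream task. The modules below are illustrative stand-ins.

```python
# Hedged sketch of the generic fine-tuning recipe: frozen pre-trained
# backbone plus a freshly trained task head.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128))
# In practice the weights would come from pre-training, e.g.:
# backbone.load_state_dict(torch.load("pretrained.pt"))
for p in backbone.parameters():
    p.requires_grad_(False)                       # keep pre-trained knowledge intact

head = nn.Linear(128, 5)                          # new head for a 5-class task
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x, y = torch.randn(32, 64), torch.randint(0, 5, (32,))
opt.zero_grad()
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
opt.step()
```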