Deep Time Series Models: A Comprehensive Survey and Benchmark
- URL: http://arxiv.org/abs/2407.13278v1
- Date: Thu, 18 Jul 2024 08:31:55 GMT
- Title: Deep Time Series Models: A Comprehensive Survey and Benchmark
- Authors: Yuxuan Wang, Haixu Wu, Jiaxiang Dong, Yong Liu, Mingsheng Long, Jianmin Wang
- Abstract summary: Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Time series, characterized by a sequence of data points arranged in a discrete-time order, are ubiquitous in real-world applications. Different from other modalities, time series present unique challenges due to their complex and dynamic nature, including the entanglement of nonlinear patterns and time-variant trends. Analyzing time series data is of great significance in real-world scenarios and has been widely studied over centuries. Recent years have witnessed remarkable breakthroughs in the time series community, with techniques shifting from traditional statistical methods to advanced deep learning models. In this paper, we delve into the design of deep time series models across various analysis tasks and review the existing literature from two perspectives: basic modules and model architectures. Further, we develop and release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks, which implements 24 mainstream models, covers 30 datasets from different domains, and supports five prevalent analysis tasks. Based on TSLib, we thoroughly evaluate 12 advanced deep time series models on different tasks. Empirical results indicate that models with specific structures are well-suited for distinct analytical tasks, which offers insights for research and adoption of deep time series models. Code is available at https://github.com/thuml/Time-Series-Library.
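The abstract describes TSLib as a fair benchmark: multiple models evaluated on the same data, tasks, and metrics. The spirit of that setup can be sketched with a minimal harness; the naive baselines and function names below are illustrative assumptions, not TSLib's actual API (see the linked repository for the real interface).

```python
# Minimal sketch of a "fair benchmark" loop: every model sees the same history,
# forecasts the same horizon, and is scored with the same metric.
import numpy as np

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def naive_last(history, horizon):
    # Repeat the last observed value across the forecast horizon.
    return np.full(horizon, history[-1])

def naive_mean(history, horizon):
    # Predict the historical mean across the forecast horizon.
    return np.full(horizon, history.mean())

def benchmark(models, series, lookback, horizon):
    history = series[:lookback]
    target = series[lookback:lookback + horizon]
    return {name: mse(fn(history, horizon), target) for name, fn in models.items()}

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 120)) + 0.1 * rng.standard_normal(120)
scores = benchmark({"last": naive_last, "mean": naive_mean}, series, lookback=96, horizon=24)
print(scores)
```

Swapping in deep models (TimesNet, iTransformer, etc.) only changes the entries of the `models` dictionary; the evaluation protocol stays fixed, which is what makes the comparison fair.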
Related papers
- Recent Trends in Modelling the Continuous Time Series using Deep Learning: A Survey
Continuous-time series are essential in many modern application areas, e.g. healthcare, automobiles, energy, finance, and the Internet of Things (IoT).
This paper describes the general problem domain of time series and reviews the challenges of modelling continuous time series.
arXiv Detail & Related papers (2024-09-13T14:19:44Z) - Understanding Different Design Choices in Training Large Time Series Models
Training Large Time Series Models (LTSMs) on heterogeneous time series data poses unique challenges.
We propose the "time series prompt", a novel statistical prompting strategy tailored to time series data.
We introduce LTSM-bundle, which bundles the best design choices we have identified.
arXiv Detail & Related papers (2024-06-20T07:09:19Z) - A Survey of Time Series Foundation Models: Generalizing Time Series Representation with Large Language Model
Large language foundation models have unveiled their capabilities for cross-task transferability, zero-shot/few-shot learning, and decision-making explainability.
There are two main research lines, namely pre-training foundation models from scratch for time series and adapting large language foundation models for time series.
This survey offers a 3E analytical framework for comprehensive examination of related research.
arXiv Detail & Related papers (2024-05-03T03:12:55Z) - PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the Perspective of Partial Differential Equations
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z) - MOMENT: A Family of Open Time-series Foundation Models
We introduce MOMENT, a family of open-source foundation models for general-purpose time series analysis.
We compile a collection of public time series, called the Time Series Pile, and systematically tackle time series-specific challenges.
We build on recent work to design a benchmark to evaluate time series foundation models on diverse tasks and datasets in limited supervision settings.
arXiv Detail & Related papers (2024-02-06T10:48:46Z) - Unified Training of Universal Time Series Forecasting Transformers
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models
This paper aims at the early development of large time series models (LTSMs).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
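The idea of converting forecasting, imputation, and anomaly detection into one generative task can be sketched as a shared "fill the masked positions" formulation. The mask conventions below are assumptions for illustration only, not the actual format used by Timer.

```python
# Illustrative sketch: three time-series tasks expressed as one masked-generation
# problem. Masked positions are what the model must generate.
import numpy as np

def to_generative(series, task, horizon=4):
    """Return (inputs, mask); True entries of mask are targets for generation."""
    x = np.asarray(series, dtype=float)
    mask = np.zeros_like(x, dtype=bool)
    if task == "forecast":        # generate the future suffix
        mask[-horizon:] = True
    elif task == "imputation":    # generate the missing (NaN) entries
        mask = np.isnan(x)
    elif task == "anomaly":       # regenerate every point; large residual => anomaly
        mask[:] = True
    inputs = np.where(mask, 0.0, x)  # masked positions zeroed in the model input
    return inputs, mask

series = [1.0, 2.0, np.nan, 4.0, 5.0, 6.0, 7.0, 8.0]
inputs, mask = to_generative(series, "imputation")
print(mask.tolist())  # only the NaN position is a generation target
```

Under this framing, a single generative model can serve all three tasks by varying only how the mask is constructed.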
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - Large Models for Time Series and Spatio-Temporal Data: A Survey and Outlook
Temporal data, notably time series and spatio-temporal data, are prevalent in real-world applications.
Recent advances in large language and other foundational models have spurred increased use in time series and spatio-temporal data mining.
arXiv Detail & Related papers (2023-10-16T09:06:00Z) - Unified Long-Term Time-Series Forecasting Benchmark
We present a comprehensive dataset designed explicitly for long-term time-series forecasting.
We incorporate a collection of datasets obtained from diverse, dynamic systems and real-life records.
To determine the most effective model in diverse scenarios, we conduct an extensive benchmarking analysis using classical and state-of-the-art models.
Our findings reveal intriguing performance comparisons among these models, highlighting the dataset-dependent nature of model effectiveness.
arXiv Detail & Related papers (2023-09-27T18:59:00Z) - Time Series Forecasting With Deep Learning: A Survey
We survey common encoder and decoder designs used in both one-step-ahead and multi-horizon time series forecasting.
We highlight recent developments in hybrid deep learning models, which combine well-studied statistical models with neural network components.
arXiv Detail & Related papers (2020-04-28T10:32:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.