Grasping Core Rules of Time Series through Pure Models
- URL: http://arxiv.org/abs/2208.07105v1
- Date: Mon, 15 Aug 2022 10:22:15 GMT
- Title: Grasping Core Rules of Time Series through Pure Models
- Authors: Gedi Liu, Yifeng Jiang, Yi Ouyang, Keyang Zhong, Yang Wang
- Abstract summary: PureTS is a network with three pure linear layers that achieved state-of-the-art results in 80% of long-sequence prediction tasks.
We discuss the potential of pure linear layers from the perspective of both phenomenon and essence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series analysis has undergone a transition from statistics to deep learning, as have many other machine learning fields. Although accuracy on a number of publicly available datasets appears to increase with each model update, the model scale typically grows by several times in exchange for only a slight gain in accuracy. Through this experiment, we point out a different line of thinking: time series, especially long-term forecasting, may differ from other fields. It is not necessary to use extensive and complex models to capture every aspect of a time series; pure models suffice to grasp the core rules of how the series changes. With this simple but effective idea, we created PureTS, a network with three pure linear layers, which achieves state-of-the-art results in 80% of long-sequence prediction tasks while being nearly the lightest model and having the fastest running speed. On this basis, we discuss the potential of pure linear layers from the perspective of both phenomenon and essence. The ability to grasp the core law contributes to the high precision of long-distance prediction, and reasonable fluctuation keeps the model from distorting the curve in multi-step prediction the way mainstream deep learning models do; we summarize this as a pure linear neural network that avoids over-fluctuating. Finally, we suggest fundamental design standards for lightweight long-step time series tasks: the input and output should have the same dimension where possible, and the structure should avoid fragmentation and complex operations.
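The abstract gives the architecture only as "three pure linear layers" plus the design standard that input and output dimensions stay aligned. A minimal PyTorch sketch in that spirit follows; the layer widths, the channel-wise application, and the direct multi-step mapping are assumptions, not the paper's published specification.

```python
import torch
import torch.nn as nn

class PureLinearForecaster(nn.Module):
    """Three pure linear layers mapping an input window directly to the
    forecast horizon: no activations, normalization, or attention. The
    hidden width equals the input length, following the suggestion that
    input and output dimensions stay aligned. A sketch, not PureTS itself."""
    def __init__(self, input_len: int, pred_len: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_len, input_len),   # assumed widths
            nn.Linear(input_len, input_len),
            nn.Linear(input_len, pred_len),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_len, channels); apply the linear maps along
        # the time axis, independently per channel.
        return self.net(x.transpose(1, 2)).transpose(1, 2)

model = PureLinearForecaster(input_len=96, pred_len=96)
out = model(torch.randn(8, 96, 7))   # -> (8, 96, 7)
```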
Related papers
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai)
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
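The summary states the training scale but not the objective's mechanics; as background, masked pre-training for time series can be sketched as patch-wise masked reconstruction. The patch size, masking ratio, and toy encoder below are placeholders, not Moirai's design.

```python
import torch
import torch.nn as nn

def masked_reconstruction_loss(series, encoder, mask_ratio=0.3, patch=16):
    """Generic masked time series pre-training step: split a series into
    patches, zero out a random subset, and train the encoder to
    reconstruct the hidden patches. Illustrative only."""
    B, T = series.shape
    patches = series.reshape(B, T // patch, patch)          # (B, N, P)
    mask = torch.rand(B, patches.size(1), 1) < mask_ratio   # True = hidden
    visible = patches.masked_fill(mask, 0.0)
    recon = encoder(visible)                                # (B, N, P)
    return ((recon - patches) ** 2 * mask).sum() / (mask.sum() * patch).clamp(min=1)

encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
loss = masked_reconstruction_loss(torch.randn(4, 256), encoder)
```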
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM)
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
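How three tasks collapse into one generative task can be pictured with a single next-step generative model; the GRU backbone and thresholding rule below are illustrative assumptions, not Timer's formulation.

```python
import torch
import torch.nn as nn

# One autoregressive model; three tasks phrased as generation/scoring.
# A schematic of the "unified generative" framing, not Timer's design.
ar = nn.GRU(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

def next_step(x):                       # x: (B, T, 1) -> (B, T, 1)
    h, _ = ar(x)
    return head(h)

x = torch.randn(2, 100, 1)
pred = next_step(x[:, :-1])             # one-step-ahead predictions

# forecasting: roll the model forward autoregressively
# imputation:  condition on observed points, generate the masked span
# anomaly:     flag points whose generative "surprise" is unusually large
err = (pred - x[:, 1:]) ** 2
anomalies = err.squeeze(-1) > err.mean() + 3 * err.std()
```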
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Spatiotemporal-Linear: Towards Universal Multivariate Time Series Forecasting [10.404951989266191]
We introduce the Spatio-Temporal-Linear (STL) framework.
STL seamlessly integrates time-embedded and spatially-informed bypasses to augment the Linear-based architecture.
Empirical evidence highlights STL's prowess, outpacing both Linear and Transformer benchmarks across varied observation and prediction durations and datasets.
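"Time-embedded and spatially-informed bypasses" suggests additive branches around a linear backbone; one possible reading in PyTorch (every shape and module choice here is an assumption, not the STL architecture):

```python
import torch
import torch.nn as nn

class LinearWithBypasses(nn.Module):
    """Linear backbone over the time axis plus two additive bypasses:
    one driven by time-of-observation embeddings, one mixing information
    across variables. Loosely inspired by the STL description; not the
    paper's architecture."""
    def __init__(self, input_len, pred_len, n_vars, n_time_ids=366):
        super().__init__()
        self.backbone = nn.Linear(input_len, pred_len)      # per-variable
        self.time_emb = nn.Embedding(n_time_ids, input_len) # time bypass
        self.time_proj = nn.Linear(input_len, pred_len)
        self.spatial = nn.Linear(n_vars, n_vars)            # cross-variable

    def forward(self, x, time_ids):
        # x: (B, L, V); time_ids: (B,) e.g. day-of-year of the window start
        z = x.transpose(1, 2)                               # (B, V, L)
        base = self.backbone(z)                             # (B, V, H)
        t = self.time_proj(self.time_emb(time_ids))         # (B, H)
        spat = self.spatial(base.transpose(1, 2)).transpose(1, 2)
        return (base + t.unsqueeze(1) + spat).transpose(1, 2)  # (B, H, V)

m = LinearWithBypasses(input_len=96, pred_len=24, n_vars=7)
y = m(torch.randn(4, 96, 7), torch.randint(0, 366, (4,)))   # -> (4, 24, 7)
```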
arXiv Detail & Related papers (2023-12-22T17:46:34Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently-introduced transformer-based attentional copulas (TACTiS)
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
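For readers unfamiliar with the copula theory the paper builds on: a copula separates the marginal distributions from the dependence structure. A minimal Gaussian-copula example with NumPy/SciPy, sharing only the concept with the paper's attentional copulas:

```python
import numpy as np
from scipy import stats

# Gaussian copula: encode dependence with a correlation matrix, then
# attach arbitrary marginals via inverse CDFs.
rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.8], [0.8, 1.0]])

z = rng.multivariate_normal(mean=[0, 0], cov=corr, size=10_000)
u = stats.norm.cdf(z)                      # correlated uniform marginals

x1 = stats.expon.ppf(u[:, 0], scale=2.0)   # exponential marginal
x2 = stats.t.ppf(u[:, 1], df=4)            # heavy-tailed marginal

print(np.corrcoef(x1, x2)[0, 1])           # dependence survives the transform
```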
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over short periods, which leaves a large gap between what deep models need and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
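As background on the diffusion ingredient, the standard DDPM forward (noising) process admits a one-shot closed form; a textbook snippet, not the paper's bidirectional VAE:

```python
import torch

# Standard DDPM forward process:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def noise_series(x0, t):
    """Diffuse a clean series x0 to noise level t in one shot."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * eps, eps

x0 = torch.randn(8, 96)            # batch of time series windows
xt, eps = noise_series(x0, t=500)  # a denoiser is trained to recover eps
```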
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE)
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
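The core of the approach is integrating a learned ODE; stripped of the dynamic-graph message passing, a neural ODE reduces to parameterizing dh/dt with a network and integrating it. A fixed-step Euler sketch:

```python
import torch
import torch.nn as nn

class NeuralODE(nn.Module):
    """dh/dt = f_theta(h); integrate with fixed-step Euler. A minimal
    neural ODE, without MTGODE's dynamic-graph message passing."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, h, t_span=(0.0, 1.0), steps=20):
        dt = (t_span[1] - t_span[0]) / steps
        for _ in range(steps):
            h = h + dt * self.f(h)   # Euler step
        return h

ode = NeuralODE(dim=16)
h1 = ode(torch.randn(32, 16))        # evolve node states from t=0 to t=1
```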
arXiv Detail & Related papers (2022-02-17T02:17:31Z)
- Time Series Anomaly Detection by Cumulative Radon Features [32.36217153362305]
In this work, we argue that shallow features suffice when combined with distribution distance measures.
Our approach models each time series as a high dimensional empirical distribution of features, where each time-point constitutes a single sample.
We show that by parameterizing each time series using cumulative Radon features, we are able to efficiently and effectively model the distribution of normal time series.
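One reading of "cumulative Radon features" is sorted random 1-D projections, so that Euclidean distance between embeddings approximates a sliced-Wasserstein distance between empirical distributions. A sketch under that reading (projection count and inputs are placeholders):

```python
import numpy as np

def cumulative_radon_features(samples, n_proj=50, seed=0):
    """Embed an empirical distribution (n_points, dim) as sorted random
    1-D projections; L2 distance between two such embeddings then
    approximates a sliced-Wasserstein distance. A sketch of the idea,
    not the paper's exact construction."""
    rng = np.random.default_rng(seed)          # shared projections
    dirs = rng.standard_normal((samples.shape[1], n_proj))
    dirs /= np.linalg.norm(dirs, axis=0)
    proj = samples @ dirs                      # (n_points, n_proj)
    return np.sort(proj, axis=0).ravel()       # quantiles per direction

normal = cumulative_radon_features(np.random.randn(200, 8))
test = cumulative_radon_features(np.random.randn(200, 8) + 1.0)
score = np.linalg.norm(normal - test)          # anomaly score vs. normal
```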
arXiv Detail & Related papers (2022-02-08T18:58:53Z)
- Adjusting for Autocorrelated Errors in Neural Networks for Time Series Regression and Forecasting [10.659189276058948]
We learn the autocorrelation coefficient jointly with the model parameters in order to adjust for autocorrelated errors.
For time series regression, large-scale experiments indicate that our method outperforms the Prais-Winsten method.
Results across a wide range of real-world datasets show that our method enhances performance in almost all cases.
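The adjustment can be sketched as learning an AR(1) coefficient jointly with the network and fitting quasi-differenced targets, in the spirit of Prais-Winsten; whether this matches the paper's exact objective is an assumption:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                  # any regression network
rho = nn.Parameter(torch.tensor(0.0))     # learnable AR(1) coefficient
opt = torch.optim.Adam(list(model.parameters()) + [rho], lr=1e-3)

def adjusted_loss(x, y):
    """Quasi-difference targets and predictions with a learned rho so the
    remaining errors are approximately uncorrelated. A sketch of the
    idea; the paper's exact loss may differ."""
    pred = model(x).squeeze(-1)                    # (T,)
    r = torch.tanh(rho)                            # keep |rho| < 1
    resid = (y[1:] - r * y[:-1]) - (pred[1:] - r * pred[:-1])
    return (resid ** 2).mean()

x, y = torch.randn(500, 10), torch.randn(500)
loss = adjusted_loss(x, y)
loss.backward()
opt.step()
```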
arXiv Detail & Related papers (2021-01-28T04:25:51Z)
- Improved Predictive Deep Temporal Neural Networks with Trend Filtering [22.352437268596674]
We propose a new prediction framework based on deep neural networks and trend filtering.
We reveal that the predictive performance of deep temporal neural networks improves when the training data is temporally processed by trend filtering.
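One concrete instance of "temporally processed by trend filtering" is the standard l1 trend filter, which extracts a piecewise-linear trend by penalizing second differences; the paper may use a different variant:

```python
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam=10.0):
    """l1 trend filtering: fit a piecewise-linear trend by penalizing
    second differences of the fitted signal."""
    n = len(y)
    x = cp.Variable(n)
    D = np.diff(np.eye(n), n=2, axis=0)       # second-difference operator
    obj = cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))
    cp.Problem(obj).solve()
    return x.value

t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.randn(200)
trend = l1_trend_filter(noisy)       # train the forecaster on this instead
```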
arXiv Detail & Related papers (2020-10-16T08:29:36Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
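A "linear first-order dynamical system" here is a leaky integrator whose effective time constant depends on the input; a schematic Euler simulation of one such unit, loosely following the liquid time-constant formulation rather than reproducing it:

```python
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    """Leaky first-order unit with an input-dependent time constant:
    dh/dt = -(1/tau + f(h, u)) * h + f(h, u) * A, integrated by Euler.
    A schematic of the liquid time-constant idea, not the paper's exact cell."""
    def __init__(self, n_in, n_hidden, tau=1.0, dt=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n_in + n_hidden, n_hidden), nn.Sigmoid())
        self.A = nn.Parameter(torch.randn(n_hidden))
        self.tau, self.dt = tau, dt

    def forward(self, u, h):
        g = self.f(torch.cat([u, h], dim=-1))            # input-dependent gate
        dh = -(1.0 / self.tau + g) * h + g * self.A      # bounded dynamics
        return h + self.dt * dh                          # Euler step

cell = LTCCell(n_in=3, n_hidden=8)
h = torch.zeros(1, 8)
for t in range(50):                                      # unroll over a sequence
    h = cell(torch.randn(1, 3), h)
```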
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows [8.859284959951204]
Time series forecasting is fundamental to scientific and engineering problems.
Deep learning methods are well suited for this problem.
We show that the proposed conditioned normalizing flow improves over the state-of-the-art for standard metrics on many real-world data sets.
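A conditioned normalizing flow makes the transform parameters a function of a context vector (e.g., an encoded history); a minimal conditional affine-coupling layer in the RealNVP style, generic rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One RealNVP-style coupling layer whose scale/shift depend on half
    of the input and a conditioning vector (e.g., an encoded history).
    Generic sketch, not the paper's exact flow."""
    def __init__(self, dim, ctx_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, ctx):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, ctx], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                      # stabilize scales
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                # change-of-variables term
        return torch.cat([x1, z2], dim=-1), log_det

layer = ConditionalAffineCoupling(dim=6, ctx_dim=16)
z, log_det = layer(torch.randn(32, 6), torch.randn(32, 16))
```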
arXiv Detail & Related papers (2020-02-14T16:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.