Feature Programming for Multivariate Time Series Prediction
- URL: http://arxiv.org/abs/2306.06252v1
- Date: Fri, 9 Jun 2023 20:46:55 GMT
- Title: Feature Programming for Multivariate Time Series Prediction
- Authors: Alex Reneau, Jerry Yao-Chieh Hu, Chenwei Xu, Weijian Li, Ammar Gilani,
Han Liu
- Abstract summary: We introduce the concept of programmable feature engineering for time series modeling.
We propose a feature programming framework that generates a large number of predictive features for noisy time series.
- Score: 7.0220697993232
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce the concept of programmable feature engineering for time series
modeling and propose a feature programming framework. This framework generates
a large number of predictive features for noisy multivariate time series while
allowing users to incorporate their inductive bias with minimal effort. The key
motivation of our framework is to view any multivariate time series as a
cumulative sum of fine-grained trajectory increments, with each increment
governed by a novel spin-gas dynamical Ising model. This fine-grained
perspective motivates the development of a parsimonious set of operators that
summarize multivariate time series in an abstract fashion, serving as the
foundation for large-scale automated feature engineering. Numerically, we
validate the efficacy of our method on several synthetic and real-world noisy
time series datasets.
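The abstract's core device, viewing a multivariate series as a cumulative sum of fine-grained increments that are then summarized by a small set of operators, can be made concrete with a short sketch. The NumPy snippet below is a minimal, hypothetical illustration only: the helper names (to_increments, window_features), the rolling-sum/absolute/squared operators, and the window sizes are assumptions chosen for exposition, not the paper's actual operator set or its spin-gas dynamical Ising model.

    # Minimal, hypothetical sketch of the increment view and operator-based
    # feature generation; names and operator choices are illustrative only.
    import numpy as np

    def to_increments(x: np.ndarray) -> np.ndarray:
        """Return fine-grained increments so that x[t] = x[0] + sum of increments up to t."""
        return np.diff(x, axis=0, prepend=x[:1])

    def window_features(x: np.ndarray, windows=(4, 16)) -> np.ndarray:
        """Summarize a (T, d) series with causal rolling sums of the increments,
        their absolute values, and their squares over several window sizes."""
        inc = to_increments(x)
        T, d = x.shape
        feats = []
        for w in windows:
            kernel = np.ones(w)
            for series in (inc, np.abs(inc), inc ** 2):
                # causal rolling sum of each channel over a window of length w
                rolled = np.stack(
                    [np.convolve(series[:, j], kernel)[:T] for j in range(d)], axis=1
                )
                feats.append(rolled)
        return np.concatenate(feats, axis=1)  # shape (T, d * len(windows) * 3)

    # Example: a noisy 3-dimensional random walk of length 200
    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(size=(200, 3)), axis=0)
    print(window_features(x).shape)  # (200, 18)

In this toy version the rolling operators stand in for the abstract summarizing operators; the actual framework derives its operator set from the fine-grained increment dynamics described in the abstract.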
Related papers
- Generalized Prompt Tuning: Adapting Frozen Univariate Time Series Foundation Models for Multivariate Healthcare Time Series [3.9599054392856483]
Time series foundation models are pre-trained on large datasets and are able to achieve state-of-the-art performance in diverse tasks.
We propose a prompt-tuning-inspired fine-tuning technique, Gen-P-Tuning, that enables us to adapt an existing univariate time series foundation model.
We demonstrate the effectiveness of our fine-tuning approach against various baselines on two MIMIC classification tasks, and on influenza-like illness forecasting.
arXiv Detail & Related papers (2024-11-19T19:20:58Z)
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Towards Long-Context Time Series Foundation Models [17.224575072056627]
Time series foundation models have shown impressive performance on a variety of tasks, across a wide range of domains, even in zero-shot settings.
This study bridges the gap by systematically comparing various context expansion techniques from both language and time series domains.
arXiv Detail & Related papers (2024-09-20T14:19:59Z)
- Motion Code: Robust Time Series Classification and Forecasting via Sparse Variational Multi-Stochastic Processes Learning [3.2857981869020327]
We propose a novel framework that views each time series as a realization of a continuous-time process.
This mathematical approach captures dependencies across timestamps and detects hidden, time-varying signals within the noise.
Experiments on noisy datasets, including real-world Parkinson's disease sensor tracking, demonstrate Motion Code's strong performance against established benchmarks.
arXiv Detail & Related papers (2024-02-21T19:10:08Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting [24.834846119163885]
We propose a novel framework, TEMPO, that can effectively learn time series representations.
TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains.
arXiv Detail & Related papers (2023-10-08T00:02:25Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Multi-Time Attention Networks for Irregularly Sampled Time Series [18.224344440110862]
Irregular sampling occurs in many time series modeling applications.
We propose a new deep learning framework for this setting that we call Multi-Time Attention Networks.
Our results show that our approach performs as well as or better than a range of baseline and recently proposed models.
arXiv Detail & Related papers (2021-01-25T18:57:42Z)