Boosted Embeddings for Time Series Forecasting
- URL: http://arxiv.org/abs/2104.04781v1
- Date: Sat, 10 Apr 2021 14:38:11 GMT
- Title: Boosted Embeddings for Time Series Forecasting
- Authors: Sankeerth Rao Karingula and Nandini Ramanan and Rasool Tahsambi and
Mehrnaz Amjadi and Deokwoo Jung and Ricky Si and Charanraj Thimmisetty and
Claudionor Nunes Coelho Jr
- Abstract summary: We propose a novel time series forecasting model, DeepGB.
We formulate and implement a variant of gradient boosting in which the weak learners are DNNs whose weights are found incrementally, in a greedy manner, over the boosting iterations.
We demonstrate that our model outperforms existing comparable state-of-the-art models on real-world sensor data and a public dataset.
- Score: 0.6042845803090501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series forecasting is a fundamental task arising in diverse
data-driven applications. Many advanced autoregressive methods, such as ARIMA,
have been used to develop forecasting models. More recently, deep-learning-based
methods such as DeepAR, NeuralProphet, and Seq2Seq have been explored for the
time series forecasting problem. In this paper, we propose a novel time series
forecasting model, DeepGB. We formulate and implement a variant of gradient
boosting in which the weak learners are DNNs whose weights are found
incrementally, in a greedy manner, over the boosting iterations. In particular,
we develop a new embedding architecture that improves the performance of many
deep learning models on time series when combined with this gradient boosting
variant. We demonstrate that our model outperforms existing comparable
state-of-the-art models on real-world sensor data and a public dataset.
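Since the paper ships no reference implementation, the sketch below illustrates the boosting scheme the abstract describes: an additive ensemble whose weak learners are small neural networks, each fitted greedily to the residuals of the ensemble so far (the negative gradient under squared-error loss). Function and parameter names such as boosted_nn_forecaster, n_stages, and lr are illustrative, and the paper's embedding architecture is not reproduced here.

```python
# Minimal sketch of gradient boosting with neural-network weak learners,
# in the spirit of DeepGB. Squared-error loss is assumed, so the negative
# gradient each stage fits is simply the current residual.
import numpy as np
from sklearn.neural_network import MLPRegressor

def boosted_nn_forecaster(X, y, n_stages=5, lr=0.5):
    """Fit an additive ensemble F(x) = f0 + lr * sum_m h_m(x)."""
    f0 = y.mean()                          # constant initial model
    residual = y - f0
    learners = []
    for _ in range(n_stages):
        h = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                         random_state=0)
        h.fit(X, residual)                 # weak learner fits the residuals
        residual = residual - lr * h.predict(X)
        learners.append(h)

    def predict(X_new):
        out = np.full(len(X_new), f0)
        for h in learners:
            out += lr * h.predict(X_new)
        return out

    return predict

# Toy usage: forecast a noisy daily-seasonal series from its last 24 lags.
t = np.arange(300)
series = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(300)
X = np.stack([series[i:i + 24] for i in range(len(series) - 24)])
y = series[24:]
predict = boosted_nn_forecaster(X, y)
print(predict(X[-5:]))
```

The learning rate below 1 is the usual shrinkage device from gradient boosting: each stage contributes only a fraction of its fit, trading more stages for better generalization.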
Related papers
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM)
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Learning Robust Precipitation Forecaster by Temporal Frame Interpolation [65.5045412005064]
We develop a robust precipitation forecasting model that demonstrates resilience against spatial-temporal discrepancies.
Our approach has led to significant improvements in forecasting precision, culminating in our model securing 1st place in the transfer learning leaderboard of the Weather4cast'23 competition.
arXiv Detail & Related papers (2023-11-30T08:22:08Z)
- Deep Double Descent for Time Series Forecasting: Avoiding Undertrained Models [1.7243216387069678]
We investigate deep double descent in several Transformer models trained on public time series data sets.
We achieve state-of-the-art results for long sequence time series forecasting in nearly 70% of the 72 benchmarks tested.
This suggests that many models in the literature may possess untapped potential.
arXiv Detail & Related papers (2023-11-02T17:55:41Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- Unified Long-Term Time-Series Forecasting Benchmark [0.6526824510982802]
We present a comprehensive dataset designed explicitly for long-term time-series forecasting.
We incorporate a collection of datasets obtained from diverse, dynamic systems and real-life records.
To determine the most effective model in diverse scenarios, we conduct an extensive benchmarking analysis using classical and state-of-the-art models.
Our findings reveal intriguing performance comparisons among these models, highlighting the dataset-dependent nature of model effectiveness.
arXiv Detail & Related papers (2023-09-27T18:59:00Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Do We Really Need Deep Learning Models for Time Series Forecasting? [4.2698418800007865]
Time series forecasting is a crucial task in machine learning, as it has a wide range of applications.
Deep learning and matrix factorization models have recently been proposed to tackle the same problem with more competitive performance.
In this paper, we ask whether these highly complex deep learning models are truly without simpler alternatives.
arXiv Detail & Related papers (2021-01-06T16:18:04Z)
- Improving the Accuracy of Global Forecasting Models using Time Series Data Augmentation [7.38079566297881]
Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFM), have shown promising results in forecasting competitions and real-world applications.
We propose a novel data-augmentation-based forecasting framework that can improve the baseline accuracy of GFMs in less data-abundant settings; a sketch of one such augmentation follows.
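The abstract does not name the augmentation techniques used, so the sketch below is a generic stand-in rather than the authors' exact procedure: a moving block bootstrap, a common time series augmentation that resamples contiguous blocks so the generated series preserve local autocorrelation. The function name and block_size are illustrative.

```python
# Illustrative time series augmentation via a moving block bootstrap.
# A generic stand-in, not necessarily the method used in the paper.
import numpy as np

def moving_block_bootstrap(series, block_size=24, rng=None):
    """Resample contiguous blocks to build a same-length synthetic series."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(series)
    n_blocks = int(np.ceil(n / block_size))
    starts = rng.integers(0, n - block_size + 1, size=n_blocks)
    blocks = [series[s:s + block_size] for s in starts]
    return np.concatenate(blocks)[:n]      # truncate to the original length

# Generate three augmented copies of a short seasonal series.
t = np.arange(200)
series = np.sin(2 * np.pi * t / 24) + 0.05 * np.random.randn(200)
augmented = [moving_block_bootstrap(series) for _ in range(3)]
```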
arXiv Detail & Related papers (2020-08-06T13:52:20Z)
- The Effectiveness of Discretization in Forecasting: An Empirical Study on Neural Time Series Models [15.281725756608981]
We investigate the effect of data input and output transformations on the predictive performance of neural forecasting architectures.
We find that binning almost always improves performance compared to using normalized real-valued inputs (a minimal binning sketch follows this list).
arXiv Detail & Related papers (2020-05-20T15:09:28Z)
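To make the binning finding above concrete, here is a minimal sketch of input discretization, assuming simple equal-width bins (the paper's exact binning scheme is not given in the abstract): real values are mapped to integer bucket ids, which a neural forecaster could consume through an embedding layer or predict with a categorical output head.

```python
# Minimal sketch of input binning for a neural forecaster: map real values
# to integer bucket ids and decode ids back to bin centers. Equal-width
# bins are an assumption; the paper's exact scheme is not specified here.
import numpy as np

def bin_series(series, n_bins=64):
    edges = np.linspace(series.min(), series.max(), n_bins + 1)
    ids = np.digitize(series, edges[1:-1])    # bucket ids in [0, n_bins)
    return ids, edges

def unbin(ids, edges):
    centers = (edges[:-1] + edges[1:]) / 2    # decode ids to bin centers
    return centers[ids]

series = np.cumsum(np.random.randn(500))      # random-walk toy series
ids, edges = bin_series(series)
reconstructed = unbin(ids, edges)             # quantized approximation
```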