An Improved Online Penalty Parameter Selection Procedure for
$\ell_1$-Penalized Autoregressive with Exogenous Variables
- URL: http://arxiv.org/abs/2010.07594v1
- Date: Thu, 15 Oct 2020 08:32:27 GMT
- Title: An Improved Online Penalty Parameter Selection Procedure for
$\ell_1$-Penalized Autoregressive with Exogenous Variables
- Authors: William B. Nicholson, Xiaohan Yan
- Abstract summary: The lasso both regularizes and performs feature selection.
The most popular penalty parameter selection approaches that respect time dependence are very computationally intensive.
We propose enhancing a canonical time series model with a novel online penalty parameter selection procedure.
- Score: 1.472161528588343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many recent developments in the high-dimensional statistical time series
literature have centered around time-dependent applications that can be adapted
to regularized least squares. Of particular interest is the lasso, which both
serves to regularize and provide feature selection. The lasso requires the
specification of a penalty parameter that determines the degree of sparsity to
impose. The most popular penalty parameter selection approaches that respect
time dependence are very computationally intensive and are not appropriate for
modeling certain classes of time series. We propose enhancing a canonical time
series model, the autoregressive model with exogenous variables, with a novel
online penalty parameter selection procedure that takes advantage of the
sequential nature of time series data to improve both computational performance
and forecast accuracy relative to existing methods in both a simulation and
empirical application involving macroeconomic indicators.
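The core idea described in the abstract, an $\ell_1$-penalized ARX fit whose penalty parameter is chosen sequentially from rolling one-step-ahead forecast errors, can be sketched as follows. This is a toy grid-search illustration of online selection, not the authors' procedure; the lag order, candidate grid, and coordinate-descent solver are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used by the lasso."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate-descent lasso minimizing (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0:
                continue
            r = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

def arx_design(y, x, p_lag):
    """Rows: [y_{t-1}, ..., y_{t-p_lag}, x_t]; targets: y_t."""
    T = len(y)
    rows = [np.concatenate([y[t - p_lag:t][::-1], x[t]]) for t in range(p_lag, T)]
    return np.asarray(rows), y[p_lag:]

def online_lambda_forecast(y, x, p_lag=2, lambdas=(0.01, 0.1, 1.0), burn=30):
    """One-step-ahead forecasts; the penalty is re-selected at each step
    from the accumulated squared forecast errors of each candidate."""
    X, target = arx_design(y, x, p_lag)
    cum_err = np.zeros(len(lambdas))
    preds = []
    for t in range(burn, len(target)):
        fits = [lasso_cd(X[:t], target[:t], lam) for lam in lambdas]
        best = int(np.argmin(cum_err))        # best candidate so far
        preds.append(X[t] @ fits[best])
        cum_err += [(target[t] - X[t] @ b) ** 2 for b in fits]
    return np.asarray(preds), lambdas[int(np.argmin(cum_err))]
```

Note that the paper's contribution is precisely avoiding this kind of repeated refitting; the sketch only conveys the evaluation-by-sequential-forecast-error idea, with the expensive refit left in for clarity.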
Related papers
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Integration of Mamba and Transformer -- MAT for Long-Short Range Time Series Forecasting with Application to Weather Dynamics [7.745945701278489]
Long-short range time series forecasting is essential for predicting future trends and patterns over extended periods.
Deep learning models such as Transformers have made significant strides in advancing time series forecasting.
This article examines the advantages and disadvantages of both Mamba and Transformer models.
arXiv Detail & Related papers (2024-09-13T04:23:54Z)
- Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting [46.63798583414426]
Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis.
Our study demonstrates, through both analytical and empirical evidence, that decomposition is key to containing excessive model inflation.
Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks.
arXiv Detail & Related papers (2024-01-22T13:15:40Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Time Series Continuous Modeling for Imputation and Forecasting with Implicit Neural Representations [15.797295258800638]
We introduce a novel modeling approach for time series imputation and forecasting, tailored to address the challenges often encountered in real-world data.
Our method relies on a continuous-time-dependent model of the series' evolution dynamics.
A modulation mechanism, driven by a meta-learning algorithm, allows adaptation to unseen samples and extrapolation beyond observed time-windows.
arXiv Detail & Related papers (2023-06-09T13:20:04Z)
- Parameterization of state duration in Hidden semi-Markov Models: an application in electrocardiography [0.0]
We introduce a parametric model for time series pattern recognition and provide a maximum-likelihood estimation of its parameters.
An application on classification reveals the main strengths and weaknesses of each alternative.
arXiv Detail & Related papers (2022-11-17T11:51:35Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Triformer: Triangular, Variable-Specific Attentions for Long Sequence Multivariate Time Series Forecasting--Full Version [50.43914511877446]
We propose a triangular, variable-specific attention to ensure high efficiency and accuracy.
We show that Triformer outperforms state-of-the-art methods w.r.t. both accuracy and efficiency.
arXiv Detail & Related papers (2022-04-28T20:41:49Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- Optimal Latent Space Forecasting for Large Collections of Short Time Series Using Temporal Matrix Factorization [0.0]
It is common practice to evaluate multiple methods and choose one of them, or an ensemble, to produce the best forecasts.
We propose a framework for forecasting short high-dimensional time series data by combining low-rank temporal matrix factorization and optimal model selection on latent time series.
arXiv Detail & Related papers (2021-12-15T11:39:21Z)
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
- Time series forecasting with Gaussian Processes needs priors [1.5877673959068452]
We propose an optimal kernel and a reliable estimation procedure for the hyperparameters.
We present results on many time series of different types; our GP model is more accurate than state-of-the-art time series models.
arXiv Detail & Related papers (2020-09-17T06:46:51Z)
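Among the related approaches above, the latent-space forecasting idea (as in "Optimal Latent Space Forecasting ... Using Temporal Matrix Factorization") is easy to illustrate: factor the panel of series into a few latent time series, forecast those, and map the forecasts back. The sketch below is a generic toy version under assumed choices (alternating least squares for the factorization, a per-factor AR(1) fit by OLS), not the cited paper's method.

```python
import numpy as np

def tmf_forecast(Y, rank=2, n_als=100, horizon=1, seed=0):
    """Toy latent-space forecasting: Y (N series x T steps) ~ W @ H,
    fit an AR(1) to each latent row of H, extrapolate, map back via W."""
    N, T = Y.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(N, rank))
    for _ in range(n_als):                   # alternating least squares
        H = np.linalg.lstsq(W, Y, rcond=None)[0]
        W = np.linalg.lstsq(H.T, Y.T, rcond=None)[0].T
    H_fut = np.empty((rank, horizon))
    for r in range(rank):
        # AR(1) coefficient for latent series r by ordinary least squares
        phi = (H[r, :-1] @ H[r, 1:]) / (H[r, :-1] @ H[r, :-1])
        level = H[r, -1]
        for s in range(horizon):
            level = phi * level              # iterate the AR(1) forward
            H_fut[r, s] = level
    return W @ H_fut                         # shape (N, horizon)
```

Forecasting `rank` latent series instead of `N` observed ones is what makes the approach attractive for large collections of short series; the model-selection-on-latent-series step of the cited paper is omitted here.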
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.