Online estimation methods for irregular autoregressive models
- URL: http://arxiv.org/abs/2302.10785v1
- Date: Tue, 31 Jan 2023 19:52:04 GMT
- Title: Online estimation methods for irregular autoregressive models
- Authors: Felipe Elorrieta, Lucas Osses, Matias Cáceres, Susana Eyheramendy and Wilfredo Palma
- Abstract summary: Currently available methods for addressing this problem, the so-called online learning methods, use the current parameter estimates and newly arriving data to update the estimators.
In this work we consider three online learning algorithms for parameter estimation in the context of time series models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent decades, owing to rapid technological growth, it has
become increasingly common for temporal data to accumulate rapidly in vast
amounts. This provides an opportunity for extracting valuable information
through the estimation of increasingly precise models, but at the same time
it imposes the challenge of continuously updating the models as new data
become available.
Currently available methods for addressing this problem, the so-called online
learning methods, use the current parameter estimates and newly arriving data
to update the estimators. These approaches avoid processing the full raw data,
thereby speeding up the computations.
In this work we consider three online learning algorithms for parameter
estimation in the context of time series models. In particular, the methods
implemented are: gradient descent, Newton-step and Kalman filter recursions.
These algorithms are applied to the recently developed irregularly observed
autoregressive (iAR) model. The estimation accuracy of the proposed methods is
assessed by means of Monte Carlo experiments.
The results obtained show that the proposed online estimation methods allow
for precise estimation of the parameters that generate the data, for both
regularly and irregularly observed time series. These online approaches are
numerically efficient, allowing substantial savings in computational time.
Moreover, we show that the proposed methods are able to adapt the parameter
estimates quickly when the time series behavior changes, unlike batch
estimation methods.
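As an illustration of the streaming setup, the sketch below shows an online gradient-descent update for the iAR autocorrelation parameter phi, assuming the standard iAR one-step forecast x_hat_j = phi^(t_j - t_{j-1}) * x_{j-1}. The function name, learning rate, and clipping choices are illustrative assumptions, not the paper's exact recursions.

```python
import numpy as np

def iar_online_gd(times, values, lr=0.05, phi0=0.5):
    """Sketch: online gradient descent on the one-step squared
    prediction error of an iAR series (illustrative, not the
    authors' exact algorithm)."""
    phi = phi0
    for j in range(1, len(times)):
        dt = times[j] - times[j - 1]
        pred = phi**dt * values[j - 1]        # one-step iAR forecast
        err = values[j] - pred                # prediction error
        # d(err^2)/d(phi) for pred = phi**dt * x_{j-1}
        grad = -2.0 * err * dt * phi**(dt - 1) * values[j - 1]
        phi -= lr * grad                      # single online step
        phi = float(np.clip(phi, 1e-4, 1 - 1e-4))  # keep phi in (0, 1)
    return phi
```

A Kalman-type recursion can be phrased in the same streaming form by treating phi as a slowly varying state observed through the iAR forecast. The scalar extended-Kalman-filter sketch below makes that concrete; the random-walk process noise q, the noise variance sigma2, and the observation model are again assumptions, not the paper's recursion.

```python
def iar_online_ekf(times, values, phi0=0.5, p0=1.0, q=1e-5, sigma2=1.0):
    """Sketch: scalar EKF tracking phi as a random-walk state
    (illustrative, not the authors' exact algorithm)."""
    phi, P = phi0, p0
    for j in range(1, len(times)):
        dt = times[j] - times[j - 1]
        P += q                                  # random-walk prediction
        H = dt * phi**(dt - 1) * values[j - 1]  # linearized observation
        R = sigma2 * (1.0 - phi**(2 * dt))      # iAR innovation variance
        S = H * P * H + R                       # total innovation variance
        K = P * H / S                           # Kalman gain
        phi += K * (values[j] - phi**dt * values[j - 1])
        P *= 1.0 - K * H
        phi = min(max(phi, 1e-4), 1.0 - 1e-4)   # keep phi in (0, 1)
    return phi
```

Because both updates touch only the newest observation, a regime change in phi shows up in the estimate after a short burst of large prediction errors, which is the adaptivity the abstract contrasts with batch estimation.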
Related papers
- Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference (2024-11-01)
  We show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU. As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty.
- Towards Learning Stochastic Population Models by Gradient Descent (2024-04-10)
  We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures. We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
- Safe Active Learning for Time-Series Modeling with Gaussian Processes (2024-02-09)
  Learning time-series models is useful for many applications, such as simulation and forecasting. In this study, we consider the problem of actively learning time-series models while taking given safety constraints into account. The proposed approach generates data appropriate for time-series model learning, i.e. input and output trajectories, by dynamically exploring the input space.
- Online Variational Sequential Monte Carlo (2023-12-19)
  We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference. Online VSMC is capable of performing, efficiently and entirely on-the-fly, both parameter estimation and particle proposal adaptation.
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data (2023-05-31)
  We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream. The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix. In contrast to methods based on variational inference, our method is fully deterministic and does not require step-size tuning.
- Better Batch for Deep Probabilistic Time Series Forecasting (2023-05-26)
  We propose an innovative training method that incorporates error autocorrelation to enhance probabilistic forecasting accuracy. Our method constructs a mini-batch as a collection of $D$ consecutive time series segments for model training. It explicitly learns a time-varying covariance matrix over each mini-batch, encoding the error correlation among adjacent time steps.
- Towards black-box parameter estimation (2023-03-27)
  We develop new black-box procedures to estimate parameters of statistical models based on weak parameter structure assumptions. For well-structured likelihoods with frequent occurrences, this is achieved by pre-training a deep neural network on an extensive simulated database.
- Fast and Robust Online Inference with Stochastic Gradient Descent via Random Scaling (2021-06-06)
  We develop a new method of online inference for a vector of parameters estimated by the Polyak-Ruppert averaging procedure of stochastic gradient descent algorithms. Our approach is fully operational with online data and is rigorously underpinned by a functional central limit theorem.
- Real-Time Regression with Dividing Local Gaussian Processes (2020-06-16)
  Local Gaussian processes are a novel, computationally efficient modeling approach based on Gaussian process regression. Due to an iterative, data-driven division of the input space, they achieve a sublinear computational complexity in the total number of training points in practice. A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
- Extrapolation for Large-batch Training in Deep Learning (2020-06-10)
  We show that a host of variations can be covered in a unified framework that we propose. We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation (2020-05-21)
  We propose a generic and efficient segmentation framework to construct ensemble segmentation models. In the proposed method, ensemble models can be efficiently generated by using the layer selection method. We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
This list is automatically generated from the titles and abstracts of the papers on this site.