On tracking varying bounds when forecasting bounded time series
- URL: http://arxiv.org/abs/2306.13428v1
- Date: Fri, 23 Jun 2023 10:44:49 GMT
- Title: On tracking varying bounds when forecasting bounded time series
- Authors: Amandine Pierrot and Pierre Pinson
- Abstract summary: We consider a new framework where a continuous, though bounded, random variable has unobserved bounds that vary over time.
We introduce an extended log-likelihood estimation and design algorithms to track the bounds through online maximum likelihood estimation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider a new framework where a continuous, though bounded, random
variable has unobserved bounds that vary over time. In the context of
univariate time series, we look at the bounds as parameters of the distribution
of the bounded random variable. We introduce an extended log-likelihood
estimation and design algorithms to track the bound through online maximum
likelihood estimation. Since the resulting optimization problem is not convex,
we make use of recent theoretical results on Normalized Gradient Descent (NGD)
for quasiconvex optimization, to eventually derive an Online Normalized
Gradient Descent algorithm. We illustrate and discuss the workings of our
approach based on both simulation studies and a real-world wind power
forecasting problem.
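The Online Normalized Gradient Descent idea above can be illustrated with a minimal sketch. The scaled-Beta model, the drift pattern, and all parameter values below are illustrative assumptions, not the paper's actual wind-power model: observations y_t = b_t * x_t with x_t ~ Beta(a, c) and an unobserved, slowly varying upper bound b_t, tracked by normalized ascent steps on the per-observation log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bounded series: y_t = b_t * x_t, x_t ~ Beta(a, c),
# with an unobserved upper bound b_t that drifts over time.
a, c = 2.0, 2.0
T = 2000
b_true = 1.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, T))
y = b_true * rng.beta(a, c, size=T)

def grad_loglik(b, y_t, a, c):
    """d/db of log f(y_t; b) for y_t = b * x, x ~ Beta(a, c):
    log f = -log b + (a-1) log(y/b) + (c-1) log(1 - y/b) - log B(a, c)."""
    return -a / b + (c - 1.0) * y_t / (b * b - b * y_t)

eta = 0.01          # NGD step size
b_hat = 2.0         # initial bound estimate
estimates = np.empty(T)
for t in range(T):
    g = grad_loglik(b_hat, y[t], a, c)
    b_hat = b_hat + eta * g / (abs(g) + 1e-12)   # normalized ascent step
    b_hat = max(b_hat, y[t] + 1e-6)              # feasibility: bound must exceed y_t
    estimates[t] = b_hat

# The estimate should track the slowly varying true bound.
print(np.mean(np.abs(estimates[500:] - b_true[500:])))
```

Normalizing the gradient keeps the step length fixed at eta regardless of gradient magnitude, which is what makes the update robust when the likelihood is quasiconvex rather than convex in the bound.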
Related papers
- Interacting Particle Langevin Algorithm for Maximum Marginal Likelihood
Estimation [2.53740603524637]
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation procedure.
In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure.
Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles.
arXiv Detail & Related papers (2023-03-23T16:50:08Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Accelerated and instance-optimal policy evaluation with linear function
approximation [17.995515643150657]
Existing algorithms fail to match at least one of these lower bounds.
We develop an accelerated, variance-reduced fast temporal difference algorithm that simultaneously matches both lower bounds and attains a strong notion of instance-optimality.
arXiv Detail & Related papers (2021-12-24T17:21:04Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under the same delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - The Connection between Discrete- and Continuous-Time Descriptions of
Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivatives reconstruction and local-in-time inference approaches does not work for time series analysis of second or higher order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z) - Stochastic Gradient Langevin with Delayed Gradients [29.6870062491741]
We show that the rate of convergence in measure is not significantly affected by the error caused by the delayed gradient information used for computation, suggesting significant potential for speedup in wall clock time.
arXiv Detail & Related papers (2020-06-12T17:51:30Z) - Online Stochastic Convex Optimization: Wasserstein Distance Variation [15.313864176694832]
We consider an online proximal-gradient method to track the minimizers of expectations of smooth convex functions.
We revisit the concepts of estimation and tracking error inspired by systems and control literature.
We provide bounds for them under strong convexity, Lipschitzness of the gradient, and bounds on the probability distribution drift.
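The online proximal-gradient tracking described above can be sketched on a toy problem. The quadratic-plus-L1 objective, the drifting target m_t, and the step sizes below are assumptions chosen for illustration, not the paper's setup: at each time t one proximal-gradient step is taken on f_t(x) = 0.5||x - m_t||^2 + lam ||x||_1, and the tracking error to that round's exact minimizer is recorded.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

T, d = 500, 3
step, lam = 0.5, 0.05
t_grid = np.linspace(0, 2 * np.pi, T)

x = np.zeros(d)
track_err = np.empty(T)
for t in range(T):
    # Drifting target makes the minimizer of f_t move over time.
    m_t = np.array([np.sin(t_grid[t]), np.cos(t_grid[t]), 1.0])
    grad = x - m_t                                    # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)   # one proximal-gradient step
    x_star = soft_threshold(m_t, lam)                 # exact minimizer of this f_t
    track_err[t] = np.linalg.norm(x - x_star)

print(track_err[-1])
```

Because the smooth part is strongly convex and the minimizer drifts slowly relative to the step size, a single proximal-gradient step per round keeps the iterate within a small neighborhood of the moving optimum, which is the regime the tracking-error bounds describe.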
arXiv Detail & Related papers (2020-06-02T05:23:22Z) - Online Convex Optimization with Binary Constraints [0.04170934882758551]
We consider online optimization with binary decision variables and convex loss functions.
We design a new algorithm, binary online gradient descent (bOGD) and bound its expected dynamic regret.
We test the performance of bOGD in several simulations based on demand response.
arXiv Detail & Related papers (2020-05-05T15:09:26Z) - Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis [102.29671176698373]
We address the problem of policy evaluation in discounted Markov decision processes, and provide Markov-dependent guarantees on the $\ell_\infty$ error under a generative model.
We establish both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms.
arXiv Detail & Related papers (2020-03-16T17:15:28Z) - Time-varying Gaussian Process Bandit Optimization with Non-constant
Evaluation Time [93.6788993843846]
We propose a novel time-varying Bayesian optimization algorithm that can effectively handle the non-constant evaluation time.
Our bound elucidates that a pattern of the evaluation time sequence can hugely affect the difficulty of the problem.
arXiv Detail & Related papers (2020-03-10T13:28:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.