Stochastic Online Convex Optimization. Application to probabilistic time
series forecasting
- URL: http://arxiv.org/abs/2102.00729v3
- Date: Fri, 21 Apr 2023 12:58:41 GMT
- Title: Stochastic Online Convex Optimization. Application to probabilistic time
series forecasting
- Authors: Olivier Wintenberger (LPSM (UMR 8001))
- Abstract summary: We prove that algorithms such as online Newton steps and a scale-free version of Bernstein online aggregation achieve best-known rates in unbounded settings.
Our fast-rate regret bounds are any-time valid.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a general framework of stochastic online convex optimization to
obtain fast-rate stochastic regret bounds. We prove that algorithms such as
online Newton steps and a scale-free version of Bernstein online aggregation
achieve best-known rates in unbounded stochastic settings. We apply our
approach to calibrate parametric probabilistic forecasters of non-stationary
sub-gaussian time series. Our fast-rate stochastic regret bounds are any-time
valid. Our proofs combine self-bounded and Poissonian inequalities for
martingales and sub-gaussian random variables, respectively, under a stochastic
exp-concavity assumption.
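As a concrete illustration of this calibration setting, the sketch below runs the classical online Newton step on the mean parameter of a Gaussian one-step-ahead forecaster under the negative log-likelihood. It is a minimal sketch only, not the paper's algorithm: the scale-free Bernstein online aggregation, the projection step, and the anytime-valid analysis are omitted, and the AR(1) data together with the constants sigma, gamma, and eps are assumptions made for the example.
```python
# Minimal sketch (not the paper's exact algorithm): classical Online Newton Step
# calibrating the mean of a Gaussian one-step-ahead forecaster y_t ~ N(theta^T x_t, sigma^2)
# under the negative log-likelihood. Data and constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy non-i.i.d. data: an AR(1) series, forecast from the previous observation.
T, phi, sigma = 500, 0.8, 1.0
y = np.zeros(T + 1)
for t in range(T):
    y[t + 1] = phi * y[t] + sigma * rng.standard_normal()

d = 2                       # features: [previous value, intercept]
theta = np.zeros(d)         # forecaster parameters
gamma, eps = 0.1, 1.0       # ONS step-size parameter and regularization
A = eps * np.eye(d)         # running matrix eps*I + sum_t g_t g_t^T

nll = 0.0
for t in range(T):
    x = np.array([y[t], 1.0])
    mean = theta @ x                          # predictive mean of N(mean, sigma^2)
    nll += 0.5 * ((y[t + 1] - mean) / sigma) ** 2 + 0.5 * np.log(2 * np.pi * sigma**2)

    # Gradient of the Gaussian negative log-likelihood with respect to theta.
    g = -(y[t + 1] - mean) / sigma**2 * x

    # Online Newton Step update (projection onto a bounded set is omitted here).
    A += np.outer(g, g)
    theta = theta - (1.0 / gamma) * np.linalg.solve(A, g)

print("average negative log-likelihood:", nll / T)
print("estimated (phi, intercept):", theta)
```
A practical implementation would maintain the inverse of A via rank-one (Sherman-Morrison) updates instead of solving a linear system at each round.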
Related papers
- Asymptotic Time-Uniform Inference for Parameters in Averaged Stochastic Approximation [23.89036529638614]
We study time-uniform statistical inference for parameters in stochastic approximation (SA).
We analyze the almost-sure convergence rates of the averaged iterates to a scaled sum of Gaussians in both linear and nonlinear SA problems.
arXiv Detail & Related papers (2024-10-19T10:27:26Z)
- Stochastic Weakly Convex Optimization Beyond Lipschitz Continuity [5.866816093792934]
We show that a wide class of stochastic algorithms, including the subgradient method, preserve the standard convergence rate with constant failure rate.
Our analyses rest on rather weak assumptions: the Lipschitz parameter can be either bounded by a general growth function of $|x|$ or locally estimated through independent random samples.
arXiv Detail & Related papers (2024-01-25T06:06:31Z) - Stochastic-Constrained Stochastic Optimization with Markovian Data [2.1756081703276]
We study the setting where data samples are drawn from a Markov chain and thus are not independent and identically distributed.
We propose two variants of drift-plus-penalty; one is for the case when the mixing time of the underlying Markov chain is known (a minimal sketch of a generic drift-plus-penalty update appears after this list).
Our algorithms apply to a more general setting of constrained online convex optimization where the sequence of constraint functions follows a Markov chain.
arXiv Detail & Related papers (2023-12-07T14:09:27Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained nonlinear optimization problems.
We apply a Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
arXiv Detail & Related papers (2022-05-27T00:34:03Z)
- A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a stochastic Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate stochastic gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We analyze the Stochastic ExtraGradient (SEG) method with constant step size and present variations of the method that yield favorable convergence.
We prove that, when augmented with iteration averaging, SEG converges to the Nash equilibrium, and that this rate is further accelerated by incorporating a scheduled restarting procedure (an illustrative sketch of averaged SEG appears after this list).
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Optimal Rates for Random Order Online Optimization [60.011653053877126]
We study the setting of Garber et al. (2020), where the loss functions may be chosen by an adversary, but are then presented online in a uniformly random order.
We show that the algorithms of Garber et al. (2020) achieve the optimal bounds and significantly improve their stability.
arXiv Detail & Related papers (2021-06-29T09:48:46Z)
- The Randomized Elliptical Potential Lemma with an Application to Linear Thompson Sampling [10.939683083130616]
We introduce a randomized version of the well-known elliptical potential lemma that is widely used in the analysis of algorithms in sequential learning and decision-making problems such as linear bandits.
Our randomized elliptical potential lemma relaxes the Gaussian assumption on the observation noise and on the prior distribution of the problem parameters.
arXiv Detail & Related papers (2021-02-16T07:30:04Z)
- Heteroscedasticity-aware residuals-based contextual stochastic optimization [0.0]
We explore generalizations of some integrated learning and optimization frameworks for data-driven contextual optimization.
We identify conditions on the program, data generation process, and the prediction setup under which these generalizations possess asymptotic and finite sample guarantees.
arXiv Detail & Related papers (2021-01-08T18:11:21Z)
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
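The drift-plus-penalty entry above refers to the following general template, sketched here on a toy problem: a virtual queue accumulates constraint violation, and each decision minimizes a penalized linearized objective. This is only a generic illustration under assumed ingredients (quadratic losses, a single linear constraint, constants V and alpha); it is not the two Markov-data variants proposed in that paper.
```python
# Minimal sketch of a generic drift-plus-penalty update for online convex
# optimization with a long-term constraint g(x) <= 0. Losses, the constraint,
# and the constants V, alpha are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

d, T = 3, 200
V, alpha = 10.0, 10.0          # penalty weight and proximal weight
x = np.zeros(d)                # current decision
Q = 0.0                        # virtual queue tracking constraint violation

def constraint(x):
    # Example long-term constraint: mean of the coordinates at least 0.1,
    # written as g(x) = 0.1 - mean(x) <= 0.
    return 0.1 - x.mean()

for t in range(T):
    target = rng.standard_normal(d)            # loss f_t(x) = 0.5 * ||x - target||^2
    grad_f = x - target                        # gradient of f_t at the current x
    grad_g = -np.ones(d) / d                   # gradient of g (linear here)

    # Drift-plus-penalty step: minimize V*<grad_f, x'> + Q*<grad_g, x'> + alpha*||x' - x||^2,
    # whose closed-form minimizer is the proximal-gradient-style update below.
    x = x - (V * grad_f + Q * grad_g) / (2 * alpha)

    # Virtual queue update: accumulate the (positive part of the) constraint value.
    Q = max(Q + constraint(x), 0.0)

print("final decision:", np.round(x, 3))
print("final constraint value g(x):", round(constraint(x), 4), "queue:", round(Q, 3))
```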
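The stochastic extragradient entry above is illustrated by the sketch below: constant-step-size SEG on a toy bilinear min-max game, reporting both the last iterate and the averaged iterate. The scheduled restarting analyzed in that paper is omitted, and the game matrix, noise level, step size, and horizon are assumptions made for the example.
```python
# Minimal sketch of Stochastic ExtraGradient (SEG) with constant step size and
# iterate averaging on the toy bilinear game min_x max_y x^T A y (no restarts).
# The matrix A, the noise level, and the step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

d, T, gamma, noise = 2, 5000, 0.1, 0.1
A = np.array([[1.0, 0.5], [-0.5, 1.0]])
x, y = np.ones(d), np.ones(d)
x_avg, y_avg = np.zeros(d), np.zeros(d)

def grads(x, y):
    # Stochastic gradients of f(x, y) = x^T A y: (A y, A^T x) plus Gaussian noise.
    return A @ y + noise * rng.standard_normal(d), A.T @ x + noise * rng.standard_normal(d)

for t in range(1, T + 1):
    gx, gy = grads(x, y)                      # extrapolation (look-ahead) step
    x_half, y_half = x - gamma * gx, y + gamma * gy
    gx, gy = grads(x_half, y_half)            # update step uses look-ahead gradients
    x, y = x - gamma * gx, y + gamma * gy
    x_avg += (x - x_avg) / t                  # running averages of the iterates
    y_avg += (y - y_avg) / t

# The Nash equilibrium of this unconstrained bilinear game is the origin.
print("last iterate distance to equilibrium:", round(np.linalg.norm(np.r_[x, y]), 4))
print("averaged iterate distance           :", round(np.linalg.norm(np.r_[x_avg, y_avg]), 4))
```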
This list is automatically generated from the titles and abstracts of the papers in this site.