Distributionally Time-Varying Online Stochastic Optimization under
Polyak-Łojasiewicz Condition with Application in Conditional Value-at-Risk
Statistical Learning
- URL: http://arxiv.org/abs/2309.09411v1
- Date: Mon, 18 Sep 2023 00:47:08 GMT
- Title: Distributionally Time-Varying Online Stochastic Optimization under
Polyak-Łojasiewicz Condition with Application in Conditional Value-at-Risk
Statistical Learning
- Authors: Yuen-Man Pun, Farhad Farokhi, Iman Shames
- Abstract summary: We consider a sequence of optimization problems following a time-varying distribution via the lens of online optimization.
We show that the framework can be applied to the Conditional Value-at-Risk (CVaR) learning problem.
- Score: 9.749745086213215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we consider a sequence of stochastic optimization problems
following a time-varying distribution via the lens of online optimization.
Assuming that the loss function satisfies the Polyak-Łojasiewicz condition,
we apply online stochastic gradient descent and establish its dynamic regret
bound that is composed of cumulative distribution drifts and cumulative
gradient biases caused by stochasticity. The distribution metric we adopt here
is Wasserstein distance, which is well-defined without the absolute continuity
assumption or with a time-varying support set. We also establish a regret bound
of online stochastic proximal gradient descent when the objective function is
regularized. Moreover, we show that the above framework can be applied to the
Conditional Value-at-Risk (CVaR) learning problem. Particularly, we improve an
existing proof on the discovery of the PL condition of the CVaR problem,
resulting in a regret bound of online stochastic gradient descent.
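As a rough illustration of this setting, the sketch below runs online stochastic gradient descent on the Rockafellar-Uryasev formulation of a CVaR learning problem while the sampling distribution drifts between rounds; the squared loss, the Gaussian drift model, and all constants are assumptions made for illustration rather than details from the paper.
```python
# A minimal sketch of online stochastic gradient descent on the
# Rockafellar-Uryasev formulation of a CVaR learning problem under a slowly
# drifting data distribution. The squared loss, the Gaussian drift model, and
# all constants are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

d = 5              # parameter dimension (assumed)
alpha_cvar = 0.95  # CVaR level
eta = 0.05         # step size
T = 2000           # number of online rounds
batch = 32         # samples drawn per round

theta = np.zeros(d)  # model parameters
t_aux = 0.0          # auxiliary variable of the Rockafellar-Uryasev formulation

def sample_batch(round_idx):
    """Draw a minibatch whose generating distribution drifts with time."""
    drift = 0.001 * round_idx                       # cumulative distribution drift
    x = rng.normal(loc=drift, scale=1.0, size=(batch, d))
    true_theta = np.ones(d) + drift                 # drifting ground truth
    y = x @ true_theta + rng.normal(scale=0.5, size=batch)
    return x, y

for k in range(T):
    x, y = sample_batch(k)
    residual = x @ theta - y
    loss = residual ** 2                            # per-sample squared loss

    # Stochastic subgradient of  t + E[(loss - t)_+] / (1 - alpha)  w.r.t. (theta, t)
    exceed = (loss > t_aux).astype(float)
    grad_t = 1.0 - exceed.mean() / (1.0 - alpha_cvar)
    grad_theta = (exceed[:, None] * 2.0 * residual[:, None] * x).mean(axis=0) \
                 / (1.0 - alpha_cvar)

    theta -= eta * grad_theta                       # online SGD step
    t_aux -= eta * grad_t
```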
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration defined by minimizing an approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - Gradient-Variation Online Learning under Generalized Smoothness [56.38427425920781]
Gradient-variation online learning aims to achieve regret guarantees that scale with variations in the gradients of online functions.
Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms.
We provide applications to fast-rate convergence in games and to extended adversarial optimization.
arXiv Detail & Related papers (2024-08-17T02:22:08Z) - Online Non-Stationary Stochastic Quasar-Convex Optimization [1.9244735303181755]
Recent research has shown that quasar-convexity can be found in applications such as the identification of linear dynamical systems and logistic regression.
We develop algorithms that exploit quasar-convexity in a dynamic, non-stationary environment.
arXiv Detail & Related papers (2024-07-04T03:24:27Z) - Risk-averse Learning with Non-Stationary Distributions [18.15046585146849]
In this paper, we investigate risk-averse online optimization where the distribution of the random cost changes over time.
We minimize a risk-averse objective function using the Conditional Value at Risk (CVaR) as the risk measure.
We show that our designed learning algorithm achieves sub-linear dynamic regret with high probability for both convex and strongly convex functions.
arXiv Detail & Related papers (2024-04-03T18:16:47Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian
Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - Optimal Rates for Random Order Online Optimization [60.011653053877126]
We study the random-order online optimization setting of Garber et al. (2020), where the loss functions may be chosen by an adversary, but are then presented online in a uniformly random order.
We show that algorithms in this setting can achieve the optimal bounds and significantly improve stability compared to the algorithms of Garber et al. (2020).
arXiv Detail & Related papers (2021-06-29T09:48:46Z) - On the Sample Complexity and Metastability of Heavy-tailed Policy Search
in Continuous Control [47.71156648737803]
Reinforcement learning is a framework for interactive decision-making with incentives sequentially revealed across time without a system dynamics model.
We characterize a suitably defined Markov chain, identifying that policies associated with Lévy processes of a heavier tail index yield wider peaks.
arXiv Detail & Related papers (2021-06-15T20:12:44Z) - GTAdam: Gradient Tracking with Adaptive Momentum for Distributed Online
Optimization [4.103281325880475]
This paper deals with a network of computing agents aiming to solve an online optimization problem in a distributed fashion, by means of local computation and communication, without any central coordinator.
We propose the gradient tracking with adaptive momentum estimation (GTAdam) distributed algorithm, which combines a gradient tracking mechanism with first and second order momentum estimates of the gradient.
In numerical experiments from multi-agent learning, GTAdam outperforms state-of-the-art distributed optimization methods.
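For intuition, a minimal sketch of combining gradient tracking with Adam-style momentum estimates is given below; the ring network, the quadratic local costs, and all constants are assumptions chosen for illustration rather than the authors' algorithm or experiments.
```python
# A minimal sketch of gradient tracking combined with Adam-style first- and
# second-moment estimates, in the spirit of the GTAdam description above.
# The ring network, the quadratic local costs, and all constants are
# illustrative assumptions, not the authors' implementation (bias correction
# is omitted for brevity).
import numpy as np

rng = np.random.default_rng(1)
n_agents, d = 4, 3
alpha, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8

# Doubly stochastic mixing matrix for a ring of four agents (assumed).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Each agent i holds a private quadratic cost 0.5 * ||x - b_i||^2 (assumed).
b = rng.normal(size=(n_agents, d))
grad = lambda i, x_i: x_i - b[i]

x = np.zeros((n_agents, d))
y = np.array([grad(i, x[i]) for i in range(n_agents)])  # gradient trackers
m = np.zeros((n_agents, d))   # first-moment estimates
v = np.zeros((n_agents, d))   # second-moment estimates

for k in range(300):
    # Adam-style direction built from the tracked gradient y_i.
    m = beta1 * m + (1 - beta1) * y
    v = beta2 * v + (1 - beta2) * y ** 2
    direction = m / (np.sqrt(v) + eps)

    x_new = W @ x - alpha * direction               # consensus + descent step
    g_new = np.array([grad(i, x_new[i]) for i in range(n_agents)])
    g_old = np.array([grad(i, x[i]) for i in range(n_agents)])
    y = W @ y + g_new - g_old                       # gradient tracking update
    x = x_new

# Agents roughly agree on the minimizer of the average cost, i.e. the mean of b_i.
print(x.mean(axis=0), b.mean(axis=0))
```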
arXiv Detail & Related papers (2020-09-03T15:20:21Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z) - Online Stochastic Convex Optimization: Wasserstein Distance Variation [15.313864176694832]
We consider an online proximal-gradient method to track the minimizers of expectations of smooth convex functions.
We revisit the concepts of estimation and tracking error inspired by systems and control literature.
We provide bounds for them under strong convexity, Lipschitzness of the gradient, and bounds on the probability distribution drift.
arXiv Detail & Related papers (2020-06-02T05:23:22Z)
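As a rough sketch of such a tracking scheme, the snippet below applies a stochastic proximal-gradient update with an l1 regularizer while the minimizer of the expected loss drifts over time; the quadratic loss, the drift model, and all constants are assumptions for illustration only.
```python
# A minimal sketch of an online stochastic proximal-gradient step tracking the
# minimizer of a drifting expected loss with an l1 regularizer. The quadratic
# loss, the sinusoidal drift, and all constants are illustrative assumptions,
# not the setup of the paper above.
import numpy as np

rng = np.random.default_rng(2)
d, eta, lam, T = 10, 0.1, 0.05, 1000

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

x = np.zeros(d)
for t in range(T):
    target = np.sin(0.01 * t) * np.ones(d)          # slowly moving minimizer
    xi = target + rng.normal(scale=0.3, size=d)     # noisy sample around it
    stoch_grad = x - xi                             # gradient of 0.5 * ||x - xi||^2
    x = soft_threshold(x - eta * stoch_grad, eta * lam)   # proximal-gradient step
```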