Noisy Linear Convergence of Stochastic Gradient Descent for CV@R
Statistical Learning under Polyak-{\L}ojasiewicz Conditions
- URL: http://arxiv.org/abs/2012.07785v3
- Date: Tue, 19 Jan 2021 02:17:57 GMT
- Title: Noisy Linear Convergence of Stochastic Gradient Descent for CV@R
Statistical Learning under Polyak-{\L}ojasiewicz Conditions
- Authors: Dionysios S. Kalogerias
- Abstract summary: Conditional Value-at-Risk ($\mathrm{CV@R}$) is one of the most popular measures of risk.
We prove that $\mathrm{CV@R}$ can be used as a performance criterion in supervised statistical learning.
- Score: 4.721069729610892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conditional Value-at-Risk ($\mathrm{CV@R}$) is one of the most popular
measures of risk, which has been recently considered as a performance criterion
in supervised statistical learning, as it is related to desirable operational
features in modern applications, such as safety, fairness, distributional
robustness, and prediction error stability. However, due to its variational
definition, $\mathrm{CV@R}$ is commonly believed to result in difficult
optimization problems, even for smooth and strongly convex loss functions. We
disprove this statement by establishing noisy (i.e., fixed-accuracy) linear
convergence of stochastic gradient descent for sequential $\mathrm{CV@R}$
learning, for a large class of not necessarily strongly-convex (or even convex)
loss functions satisfying a set-restricted Polyak-{\L}ojasiewicz inequality. This
class contains all smooth and strongly convex losses, confirming that classical
problems, such as linear least squares regression, can be solved efficiently
under the $\mathrm{CV@R}$ criterion, just as their risk-neutral versions. Our
results are illustrated numerically on such a risk-aware ridge regression task,
also verifying their validity in practice.
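The abstract's point is easiest to see through the variational (Rockafellar-Uryasev) form $\mathrm{CV@R}_{\alpha}(\ell(\theta)) = \min_{t \in \mathbb{R}} \{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}[(\ell(\theta; x, y) - t)_{+}] \,\}$, which turns risk-aware learning into a joint minimization over $(\theta, t)$ that stochastic (sub)gradient descent can attack one sample at a time. The sketch below illustrates this on a ridge regression loss; it is a minimal illustration of the generic CV@R-SGD recursion under the variational form, not necessarily the paper's exact algorithm, and the level alpha, step size eta, and ridge weight lam are illustrative placeholders.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm): joint SGD on the
# variational form of CV@R for a ridge-regression loss,
#   CV@R_a(l(theta)) = min_t { t + E[(l(theta; x, y) - t)_+] / (1 - a) },
# updating (theta, t) from one sample per step.

rng = np.random.default_rng(0)
d, n = 5, 10_000
theta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_true + 0.1 * rng.standard_t(df=3, size=n)  # heavy-ish tailed noise

alpha = 0.9   # CV@R level (illustrative)
lam = 1e-3    # ridge penalty (illustrative)
eta = 1e-2    # constant step size (illustrative)

theta = np.zeros(d)
t = 0.0
for _ in range(50_000):
    i = rng.integers(n)
    resid = X[i] @ theta - y[i]
    loss = resid ** 2 + lam * theta @ theta        # smooth, strongly convex loss
    exceed = 1.0 if loss > t else 0.0              # subgradient of (loss - t)_+ in its first argument
    grad_theta = (exceed / (1.0 - alpha)) * (2.0 * resid * X[i] + 2.0 * lam * theta)
    grad_t = 1.0 - exceed / (1.0 - alpha)
    theta -= eta * grad_theta
    t -= eta * grad_t

print("estimated theta:", theta)
print("estimated VaR-level auxiliary variable t:", t)
```

With a constant step size, such a recursion is expected to settle into a neighborhood of the optimum rather than converge exactly, which matches the "noisy (fixed-accuracy) linear convergence" described in the abstract.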
Related papers
- Large-Scale Non-convex Stochastic Constrained Distributionally Robust Optimization [23.029511473335145]
This paper focuses on constrained DRO, which has an explicit characterization of the robustness of its performance.
The complexity of our algorithm at each iteration, for $\chi^2$-divergence ambiguity sets, is independent of the overall dataset size, and thus it is suitable for large-scale applications.
arXiv Detail & Related papers (2024-04-01T15:56:58Z) - Distributionally Robust Optimization with Bias and Variance Reduction [9.341215359733601]
We show that Prospect, a gradient-based algorithm, enjoys linear convergence for smooth regularized losses.
We also show that Prospect can converge 2-3$\times$ faster than baselines such as gradient-based methods.
arXiv Detail & Related papers (2023-10-21T00:03:54Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Randomized Coordinate Subgradient Method for Nonsmooth Composite
Optimization [11.017632675093628]
Coordinate-type subgradient methods for addressing nonsmooth problems are relatively underexplored due to the set of properties required by the Lipschitz-type assumption.
arXiv Detail & Related papers (2022-06-30T02:17:11Z) - Optimal Rates for Random Order Online Optimization [60.011653053877126]
We study the setting of \citet{garber2020online}, where the loss functions may be chosen by an adversary, but are then presented online in a uniformly random order.
We show that the algorithms of \citet{garber2020online} achieve the optimal bounds and significantly improve their stability.
arXiv Detail & Related papers (2021-06-29T09:48:46Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Localization, Convexity, and Star Aggregation [0.0]
Offset Rademacher complexities have been shown to imply sharp, linear-dependent upper bounds for the square loss.
We show that in the statistical setting, the offset bound can be generalized to any loss satisfying a certain uniform convexity condition.
arXiv Detail & Related papers (2021-05-19T00:47:59Z) - Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
arXiv Detail & Related papers (2020-10-12T17:41:44Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy guarantees based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z) - Statistical Learning with Conditional Value at Risk [35.4968603057034]
We propose a risk-averse statistical learning framework wherein the performance of a learning algorithm is evaluated by the conditional value-at-risk (CVaR) of losses rather than the expected loss.
arXiv Detail & Related papers (2020-02-14T00:58:34Z)