Local and Global Uniform Convexity Conditions
- URL: http://arxiv.org/abs/2102.05134v1
- Date: Tue, 9 Feb 2021 21:09:53 GMT
- Title: Local and Global Uniform Convexity Conditions
- Authors: Thomas Kerdreux, Alexandre d'Aspremont, and Sebastian Pokutta
- Abstract summary: We review various characterizations of uniform convexity and smoothness on norm balls in finite-dimensional spaces.
We establish local versions of these conditions to provide sharper insights on a recent body of complexity results in learning theory, online learning, or offline optimization.
We conclude with some practical examples in optimization and machine learning where leveraging these conditions and localized assumptions leads to new complexity results.
- Score: 88.3673525964507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We review various characterizations of uniform convexity and smoothness on
norm balls in finite-dimensional spaces and connect results stemming from the
geometry of Banach spaces with \textit{scaling inequalities} used in analysing
the convergence of optimization methods. In particular, we establish local
versions of these conditions to provide sharper insights on a recent body of
complexity results in learning theory, online learning, or offline
optimization, which rely on the strong convexity of the feasible set. While
they have a significant impact on complexity, these strong convexity or uniform
convexity properties of feasible sets are not exploited as thoroughly as their
functional counterparts, and this work is an effort to correct this imbalance.
We conclude with some practical examples in optimization and machine learning
where leveraging these conditions and localized assumptions leads to new
complexity results.
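For readers less familiar with these set-level conditions, one standard formulation from the literature is sketched below (the exact normalization of the constant varies across papers, so treat $\alpha$ as illustrative). A convex body $C$ is $(\alpha, q)$-uniformly convex when
\[
\gamma x + (1-\gamma)\,y + \gamma(1-\gamma)\,\alpha\,\|x-y\|^{q}\, z \in C
\qquad \text{for all } x, y \in C,\ \gamma \in [0,1],\ \|z\| \le 1,
\]
i.e., the set contains a ball whose radius scales with $\|x-y\|^{q}$ around every chord. A condition of this type yields a \textit{scaling inequality}: for any direction $w \neq 0$ and any maximizer $x^{w} \in \arg\max_{v \in C}\,\langle w, v\rangle$,
\[
\langle w,\, x^{w} - x\rangle \ \ge\ \alpha\,\|w\|\,\|x^{w} - x\|^{q}
\qquad \text{for all } x \in C,
\]
which is the form typically used to accelerate projection-free (Frank-Wolfe-type) and online methods. For $q = 2$ this recovers $\alpha$-strong convexity of the set, the functional analogue of strong convexity of an objective.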
Related papers
- Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization [34.628967272528044]
This paper investigates projection-free algorithms for constrained multi-level compositional optimization problems.
By using a stage-wise adaptation, we obtain complexities for convex and strongly convex functions.
arXiv Detail & Related papers (2024-06-06T06:56:56Z) - Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional yet plausible settings in which the problem admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
arXiv Detail & Related papers (2024-02-14T10:48:00Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to general $p$-normed spaces, but also to general $p$-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - On The Convergence of First Order Methods for Quasar-Convex Optimization [1.52292571922932]
In recent years, the success of deep learning has inspired many researchers to study general smooth non-convex functions.
In this paper, we study the convergence of first-order methods in a variety of different settings.
arXiv Detail & Related papers (2020-10-10T08:16:32Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.