Adaptive Strategies in Non-convex Optimization
- URL: http://arxiv.org/abs/2306.10278v2
- Date: Fri, 7 Jul 2023 00:18:38 GMT
- Title: Adaptive Strategies in Non-convex Optimization
- Authors: Zhenxun Zhuang
- Abstract summary: An algorithm is said to be adaptive to a certain parameter if it does not need a priori knowledge of such a parameter.
This dissertation presents our work on adaptive algorithms in three scenarios.
- Score: 5.279475826661643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An algorithm is said to be adaptive to a certain parameter (of the problem)
if it does not need a priori knowledge of such a parameter but performs
competitively to those that know it. This dissertation presents our work on
adaptive algorithms in the following scenarios: 1. In the stochastic
optimization setting, we only receive stochastic gradients, and the level of
noise in evaluating them greatly affects the convergence rate. Without prior
knowledge of the noise scale, tuning is typically required to achieve the
optimal rate. Considering this, we designed and analyzed noise-adaptive
algorithms that automatically ensure (near-)optimal rates under different
noise scales without knowing them. 2. In training deep neural networks, the
scales of gradient magnitudes in each coordinate can scatter across a very wide
range unless normalization techniques, like BatchNorm, are employed. In such
situations, algorithms not addressing this problem of gradient scales can
behave very poorly. To mitigate this, we formally established the advantage of
scale-free algorithms that adapt to the gradient scales and presented their
real benefits in empirical experiments. 3. Traditional analyses in non-convex
optimization typically rely on the smoothness assumption. Yet, this condition
does not capture the properties of some deep learning objective functions,
including the ones involving Long Short-Term Memory networks and Transformers.
Instead, they satisfy a much more relaxed condition, with potentially unbounded
smoothness. Under this condition, we show that a generalized SignSGD algorithm
can theoretically match the best-known convergence rates obtained by SGD with
gradient clipping but does not need explicit clipping at all, and it can
empirically match the performance of Adam and beat others. Moreover, it can
also be made to automatically adapt to the unknown relaxed smoothness.
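For intuition, the relaxed smoothness referenced above is commonly formalized
as the (L0, L1)-smoothness condition ||∇²f(x)|| ≤ L0 + L1 ||∇f(x)||, under
which the local smoothness can grow with the gradient norm. The sketch below
illustrates a sign-of-momentum update in the spirit of a generalized SignSGD:
each coordinate moves by a bounded amount per step, so large gradients never
need explicit clipping. The function names, hyperparameters, and toy objective
are illustrative assumptions, not the dissertation's exact algorithm.

```python
import numpy as np

def generalized_signsgd(grad_fn, x0, lr=1e-3, beta=0.9, eps=1e-8, steps=1000):
    """Illustrative sign-of-momentum update in the spirit of generalized
    SignSGD (a sketch, not the dissertation's exact method)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)                 # momentum: EMA of stochastic gradients
    for _ in range(steps):
        g = grad_fn(x)                   # stochastic gradient oracle
        m = beta * m + (1.0 - beta) * g  # momentum update
        # Per-coordinate normalization acts like sign(m) when |m| >> eps,
        # keeping every step bounded even if the raw gradient blows up,
        # so no explicit gradient clipping is required.
        x = x - lr * m / (np.abs(m) + eps)
    return x

if __name__ == "__main__":
    # Toy usage: a smooth quadratic with additive gradient noise.
    rng = np.random.default_rng(0)
    grad = lambda x: 2.0 * x + 0.1 * rng.standard_normal(x.shape)
    print(generalized_signsgd(grad, np.ones(5), lr=1e-2, steps=2000))
```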
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Gradient-Variation Online Learning under Generalized Smoothness [56.38427425920781]
Gradient-variation online learning aims to achieve regret guarantees that scale with variations in the gradients of online functions.
Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms.
We provide applications to fast-rate convergence in games and to extended adversarial optimization.
arXiv Detail & Related papers (2024-08-17T02:22:08Z) - Random Scaling and Momentum for Non-smooth Non-convex Optimization [38.443430569753026]
Training neural networks requires a loss function that may be highly irregular, and in particular neither convex nor smooth.
Popular training algorithms are based on stochastic gradient descent with momentum (SGDM), for which classical analysis applies only if the loss is either convex or smooth.
arXiv Detail & Related papers (2024-05-16T00:52:03Z) - Robustness to Unbounded Smoothness of Generalized SignSGD [25.07411035728305]
We show that momentum plays a critical role in analyzing SignSGD-type and Adam-type algorithms.
We compare these algorithms on popular tasks, observing that they can match the performance of Adam while beating the others.
arXiv Detail & Related papers (2022-08-23T21:11:19Z) - Greedy versus Map-based Optimized Adaptive Algorithms for
random-telegraph-noise mitigation by spectator qubits [6.305016513788048]
In a scenario where data-storage qubits are kept in isolation as far as possible, noise mitigation can still be done using additional noise probes.
We construct a theoretical model assuming projective measurements on the qubits, and derive the performance of different measurement and control strategies.
We show, analytically and numerically, that MOAAAR outperforms the Greedy algorithm, especially in the regime of high noise sensitivity of the spectator qubit (SQ).
arXiv Detail & Related papers (2022-05-25T08:25:10Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic non-convex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper, we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for non-convex min-max problems.
Our experiments show that, in GAN training, adaptive gradient algorithms can empirically outperform their non-adaptive counterparts.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)