Non-convergence to global minimizers for Adam and stochastic gradient
descent optimization and constructions of local minimizers in the training of
artificial neural networks
- URL: http://arxiv.org/abs/2402.05155v1
- Date: Wed, 7 Feb 2024 16:14:04 GMT
- Title: Non-convergence to global minimizers for Adam and stochastic gradient
descent optimization and constructions of local minimizers in the training of
artificial neural networks
- Authors: Arnulf Jentzen, Adrian Riekert
- Abstract summary: It remains an open problem to rigorously explain why SGD methods seem to succeed in training ANNs.
We disprove that SGD methods can find a global minimizer with high probability.
Even stronger, we reveal in the training of such ANNs that SGD methods with high probability fail to converge to global minimizers.
- Score: 6.708125191843434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic gradient descent (SGD) optimization methods such as the plain
vanilla SGD method and the popular Adam optimizer are nowadays the method of
choice in the training of artificial neural networks (ANNs). Despite the
remarkable success of SGD methods in ANN training in numerical simulations,
it remains, in essentially all practically relevant scenarios, an open problem to
rigorously explain why SGD methods seem to succeed in training ANNs. In
particular, in most practically relevant supervised learning problems, it seems
that SGD methods with high probability do not converge to global minimizers in
the optimization landscape of the ANN training problem. Nevertheless, it
remains an open problem of research to disprove the convergence of SGD methods
to global minimizers. In this work we solve this research problem in the
situation of shallow ANNs with the rectified linear unit (ReLU) and related
activations with the standard mean square error loss by disproving in the
training of such ANNs that SGD methods (such as the plain vanilla SGD, the
momentum SGD, the AdaGrad, the RMSprop, and the Adam optimizers) can find a
global minimizer with high probability. Even stronger, we reveal in the
training of such ANNs that SGD methods with high probability fail to
converge to global minimizers in the optimization landscape. The findings of
this work do not, however, disprove that SGD methods succeed in training ANNs
since they do not exclude the possibility that SGD methods find good local
minimizers whose risk values are close to the risk values of the global
minimizers. In this context, another key contribution of this work is to
establish the existence of a hierarchical structure of local minimizers with
distinct risk values in the optimization landscape of ANN training problems
with ReLU and related activations.
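The kind of non-global critical point the abstract alludes to can be illustrated with the classic "dead ReLU" effect. The following is a toy numerical sketch, not the paper's construction; the network, data, and hyperparameters are all made up for illustration:

```python
import numpy as np

# Toy illustration: a single ReLU neuron y_hat = v * relu(w*x + b)
# trained with plain SGD on the mean square error loss. If (w, b) are
# initialized so that w*x + b < 0 on all training inputs, the ReLU is
# "dead": every (sub)gradient is zero and SGD is stuck at a non-global
# critical point with strictly positive risk.

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=64)    # all inputs positive
y = 2.0 * x                           # a global minimizer has risk 0

w, b, v = -1.0, -0.5, 1.0             # dead initialization: w*x + b < 0
lr = 0.1
for _ in range(1000):
    pre = w * x + b
    act = np.maximum(pre, 0.0)        # ReLU activation
    err = v * act - y
    # subgradients of the mean square error loss
    gv = np.mean(2 * err * act)
    gw = np.mean(2 * err * v * (pre > 0) * x)
    gb = np.mean(2 * err * v * (pre > 0))
    v -= lr * gv; w -= lr * gw; b -= lr * gb

risk = np.mean((v * np.maximum(w * x + b, 0.0) - y) ** 2)
print(risk)  # stays at the initial risk: the parameters never move
```

Because every gradient vanishes at the dead initialization, the iterates never leave the critical point, even though the risk there is far from the global minimum value of zero.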
Related papers
- Training Deep Learning Models with Norm-Constrained LMOs [56.00317694850397]
We study optimization methods that leverage the linear minimization oracle (LMO) over a norm-ball.
We propose a new family of algorithms that uses the LMO to adapt to the geometry of the problem and, perhaps surprisingly, show that they can be applied to unconstrained problems.
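For common norm balls the LMO admits a simple closed form; a minimal sketch (the radius parameter and helper names are illustrative, not taken from the paper):

```python
import numpy as np

# Sketch of a linear minimization oracle (LMO) over a norm ball:
#   lmo(g) = argmin_{||s|| <= r} <g, s>.
# For the l2 ball this is -r * g / ||g||; for the l-inf ball it is
# -r * sign(g). Frank-Wolfe-style methods build their update direction
# from this oracle instead of projecting onto the constraint set.

def lmo_l2(g, r=1.0):
    n = np.linalg.norm(g)
    return np.zeros_like(g) if n == 0 else -r * g / n

def lmo_linf(g, r=1.0):
    return -r * np.sign(g)

g = np.array([3.0, -4.0])
s = lmo_l2(g)
print(s, g @ s)  # direction of steepest linear decrease within the ball
```

The l2 oracle returns the ball's boundary point most anti-aligned with the gradient, so the inner product g @ s equals -r * ||g||.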
arXiv Detail & Related papers (2025-02-11T13:10:34Z)
- Stability and Generalization for Distributed SGDA [70.97400503482353]
We propose a stability-based generalization analysis framework for distributed SGDA.
We conduct a comprehensive analysis of stability error, generalization gap, and population risk across different metrics.
Our theoretical results reveal the trade-off between the generalization gap and optimization error.
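The single-node SGDA update underlying such analyses can be sketched on a toy minimax objective; the function, step size, and iteration count below are illustrative assumptions, not from the paper (its distributed variant additionally averages iterates across workers):

```python
import numpy as np

# Simultaneous gradient descent-ascent (GDA) for min_x max_y f(x, y)
# on the strongly-convex-strongly-concave toy objective
# f(x, y) = x^2 - y^2, whose saddle point is (0, 0).

def grad_x(x, y): return 2 * x    # descent direction in x
def grad_y(x, y): return -2 * y   # ascent direction in y

x, y, lr = 1.0, -1.0, 0.1
for _ in range(200):
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)
print(x, y)  # both coordinates approach the saddle point (0, 0)
```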
arXiv Detail & Related papers (2024-11-14T11:16:32Z)
- Non-convergence to global minimizers in data driven supervised deep learning: Adam and stochastic gradient descent optimization provably fail to converge to global minimizers in the training of deep neural networks with ReLU activation [3.6185342807265415]
It remains an open problem of research to explain the success and the limitations of SGD methods in rigorous theoretical terms.
In this work we prove, for a large class of SGD methods, that the considered optimization methods with high probability fail to converge to global minimizers of the optimization problem.
The general non-convergence results of this work apply not only to the plain vanilla standard SGD method but also to a large class of accelerated and adaptive SGD methods.
arXiv Detail & Related papers (2024-10-14T14:11:37Z)
- Non-convergence of Adam and other adaptive stochastic gradient descent optimization methods for non-vanishing learning rates [3.6185342807265415]
Deep learning algorithms are the key ingredients in many artificial intelligence (AI) systems.
Deep learning algorithms typically consist of deep neural networks trained by a stochastic gradient descent (SGD) optimization method.
arXiv Detail & Related papers (2024-07-11T00:10:35Z)
- The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication [37.210933391984014]
Local SGD is a popular optimization method in distributed learning, often outperforming other algorithms in practice.
We provide new lower bounds for local SGD under existing first-order data heterogeneity assumptions.
We also show the min-max optimality of accelerated mini-batch SGD for several problem classes.
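The local-update-then-average pattern of local SGD can be sketched on per-worker quadratics; the objectives, step counts, and learning rate below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Local SGD with intermittent communication: each of M workers runs K
# local gradient steps on its own (heterogeneous) quadratic objective
# f_m(w) = 0.5 * (w - c_m)^2, then the iterates are averaged. With
# identical curvatures, the fixed point is the mean of the c_m, i.e.
# the minimizer of the average loss.

c = np.array([0.0, 1.0, 2.0, 3.0])   # per-worker optima (heterogeneity)
M, K, R, lr = len(c), 10, 50, 0.1

w = 0.0
for _ in range(R):                    # communication rounds
    local = np.full(M, w)             # broadcast the shared iterate
    for _ in range(K):                # K local steps, no communication
        local -= lr * (local - c)
    w = local.mean()                  # averaging (communication) step
print(w)  # close to c.mean() = 1.5, the minimizer of the average loss
```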
arXiv Detail & Related papers (2024-05-19T20:20:03Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
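The residual-driven selection step of such an adaptive collocation scheme might look as follows; the residual function here is a hand-made stand-in for a trained PINN's error, and all names and constants are illustrative, not from the paper:

```python
import numpy as np

# Adaptive collocation sketch: score candidate points by the magnitude
# of the PDE/ODE residual of the current model and allocate new
# collocation points where the error is largest.

rng = np.random.default_rng(1)

def residual(x):
    # stand-in for |u_theta'(x) - u_theta(x)|: pretend the model's
    # error grows toward the boundary x = 1
    return x ** 2

candidates = rng.uniform(0.0, 1.0, size=1000)  # candidate pool
k = 32
scores = residual(candidates)
new_points = candidates[np.argsort(scores)[-k:]]  # top-k residuals
print(new_points.min())  # new points concentrate near the boundary
```

In a real PINN loop this selection would alternate with training steps, progressively shifting the collocation set toward high-error regions.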
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning [58.79085525115987]
Local methods are one of the promising approaches to reduce communication time.
We show that the communication complexity is better than that of non-local methods when the heterogeneity of the local datasets is smaller than the smoothness of the local loss.
arXiv Detail & Related papers (2022-02-12T15:12:17Z)
- Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions [1.7149364927872015]
Stochastic gradient descent (SGD) type optimization methods perform very effectively in the training of deep neural networks (DNNs).
In this work we study SGD type optimization methods in the training of fully-connected feedforward DNNs with rectified linear unit (ReLU) activation.
arXiv Detail & Related papers (2021-12-13T11:45:36Z)
- Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency [15.04034188283642]
Local SGD is a promising approach to overcome the communication overhead in distributed learning.
We show that local SGDA can provably optimize distributed minimax problems in both the homogeneous and heterogeneous data settings.
arXiv Detail & Related papers (2021-02-25T20:15:18Z)
- TaylorGAN: Neighbor-Augmented Policy Update for Sample-Efficient Natural Language Generation [79.4205462326301]
TaylorGAN is a novel approach to score function-based natural language generation.
It augments the gradient estimation with off-policy updates and a first-order Taylor expansion.
It enables us to train NLG models from scratch with a smaller batch size.
arXiv Detail & Related papers (2020-11-27T02:26:15Z)
- Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new distributed error feedback (DEF) algorithm, which shows better convergence than error feedback for nonconvex distributed problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better bounds than DEF.
arXiv Detail & Related papers (2020-04-11T03:50:59Z)
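The classical error-feedback mechanism that DEF modifies can be sketched with top-k sparsification and an error memory; the vector sizes, sparsity level, and round count below are illustrative assumptions:

```python
import numpy as np

# Error feedback with top-k sparsification: each round, only the k
# largest-magnitude entries of the error-corrected gradient are
# transmitted, and the untransmitted remainder ("error") is carried
# into the next round, so no gradient mass is ever lost.

def topk(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]  # indices of k largest magnitudes
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(2)
g = rng.normal(size=8)                # a fixed gradient, for simplicity
e = np.zeros(8)                       # error memory
sent = np.zeros(8)                    # total mass actually transmitted
for _ in range(5):
    corrected = g + e                 # add back what was never sent
    msg = topk(corrected, 2)          # sparse message actually sent
    e = corrected - msg               # remember the residual
    sent += msg
print(np.allclose(sent + e, 5 * g))   # invariant: nothing is lost
```

The invariant sent + e == (rounds) * g is what makes error feedback converge despite aggressive sparsification: compression error is delayed, never discarded.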
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.