Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with
Linear Convergence Rates
- URL: http://arxiv.org/abs/2205.09860v1
- Date: Thu, 19 May 2022 21:05:40 GMT
- Title: Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with
Linear Convergence Rates
- Authors: Jingwei Zhang and Xunpeng Huang
- Abstract summary: Mean-field regime is a theoretically attractive alternative to the NTK (lazy training) regime.
We establish a new linear convergence result for two-layer neural networks trained by continuous-time noisy gradient descent in the mean-field regime.
- Score: 7.094295642076582
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider optimizing two-layer neural networks in the mean-field regime
where the learning dynamics of network weights can be approximated by the
evolution in the space of probability measures over the weight parameters
associated with the neurons. The mean-field regime is a theoretically
attractive alternative to the NTK (lazy training) regime, which is confined to a local region of the so-called neural tangent kernel space around specialized initializations. Several prior works (Mei et al., 2018; Chizat and Bach, 2018) establish the asymptotic global optimality of the mean-field
regime, but it is still challenging to obtain a quantitative convergence rate
due to the complicated nonlinearity of the training dynamics. This work
establishes a new linear convergence result for two-layer neural networks
trained by continuous-time noisy gradient descent in the mean-field regime. Our
result relies on a novel logarithmic Sobolev inequality for two-layer neural
networks, and uniform upper bounds on the logarithmic Sobolev constants for a
family of measures determined by the evolving distribution of hidden neurons.
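To fix ideas, the display below sketches the standard mean-field Langevin formulation in which results of this type are usually stated; the notation (risk functional R over neuron distributions rho, entropy-regularization strength lambda, log-Sobolev constant C_LS) is assumed here for illustration, and the rate shown is the generic one implied by a uniform log-Sobolev bound, not the paper's exact theorem statement.
\begin{align*}
  F(\rho) &= R(\rho) + \lambda \int \rho(\theta)\,\log\rho(\theta)\,d\theta
    && \text{(entropy-regularized risk over neuron distributions)} \\
  \partial_t \rho_t &= \nabla_\theta \cdot \Big( \rho_t \, \nabla_\theta \tfrac{\delta R}{\delta \rho}[\rho_t] \Big) + \lambda \, \Delta_\theta \rho_t
    && \text{(continuous-time noisy gradient descent in the mean-field limit)} \\
  \operatorname{Ent}_{\mu}(f^2) &\le 2\, C_{\mathrm{LS}}\, \mathbb{E}_{\mu}\big[ \|\nabla f\|^2 \big]
    && \text{(log-Sobolev inequality with constant } C_{\mathrm{LS}}) \\
  F(\rho_t) - F(\rho^\ast) &\le e^{-2\lambda t / C_{\mathrm{LS}}}\, \big( F(\rho_0) - F(\rho^\ast) \big)
    && \text{(linear convergence when } C_{\mathrm{LS}} \text{ bounds the constants uniformly)}
\end{align*}
In this reading, the paper's uniform upper bounds on the log-Sobolev constants are what make the last inequality usable along the entire trajectory rather than only near the optimum.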
Related papers
- Stochastic Gradient Descent for Two-layer Neural Networks [2.0349026069285423]
This paper presents a study on the convergence rates of the stochastic gradient descent (SGD) algorithm when applied to overparameterized two-layer neural networks.
Our approach combines the Neural Tangent Kernel (NTK) approximation with a convergence analysis in the Reproducing Kernel Hilbert Space (RKHS) generated by the NTK (a minimal sketch of the empirical NTK is given after this list).
Our research framework enables us to explore the intricate interplay between kernel methods and optimization processes, shedding light on the dynamics and convergence properties of neural networks.
arXiv Detail & Related papers (2024-07-10T13:58:57Z)
- Speed Limits for Deep Learning [67.69149326107103]
Recent advancement in thermodynamics allows bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- A Convergence Analysis of Nesterov's Accelerated Gradient Method in Training Deep Linear Neural Networks [21.994004684742812]
Momentum methods are widely used in training neural networks for their fast convergence.
We show that Nesterov's accelerated gradient method converges to the global minimum at a rate governed by the condition number $\kappa$.
We extend our analysis to deep linear ResNets and derive a similar result.
arXiv Detail & Related papers (2022-04-18T13:24:12Z)
- Approximation bounds for norm constrained neural networks with applications to regression and GANs [9.645327615996914]
We prove upper and lower bounds on the approximation error of ReLU neural networks with a norm constraint on the weights.
We apply these approximation bounds to analyze the convergence of regression using norm-constrained neural networks and of distribution estimation by GANs.
arXiv Detail & Related papers (2022-01-24T02:19:05Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks via the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) is developed based on the proposed upper bound on the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics [50.83356836818667]
We introduce a new theoretical framework to analyze deep learning optimization with connection to its generalization error.
Existing frameworks for neural network optimization analysis, such as mean-field theory and neural tangent kernel theory, typically require taking the infinite-width limit of the network to show its global convergence.
arXiv Detail & Related papers (2020-07-11T18:19:50Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
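As referenced in the first related-paper entry above, the following minimal sketch makes the NTK object concrete for a two-layer ReLU network: it assembles the empirical neural tangent kernel (the Gram matrix of parameter gradients, which generates the RKHS used in NTK-style analyses) and takes one SGD step on a squared loss. The 1/sqrt(m) parameterization, the variable names, and the toy data are illustrative assumptions, not code from any of the listed papers.

import numpy as np

# Empirical NTK of a two-layer ReLU network f(x) = (1/sqrt(m)) * sum_j a_j * relu(w_j . x).
# Illustrative sketch only; dimensions and names are assumptions.
rng = np.random.default_rng(0)
d, m, n = 5, 512, 8                      # input dim, hidden width, batch size
W = rng.normal(size=(m, d))              # hidden-layer weights w_j
a = rng.choice([-1.0, 1.0], size=m)      # output weights a_j
X = rng.normal(size=(n, d))              # a small batch of inputs
y = rng.normal(size=n)                   # toy regression targets

def forward(X, W, a):
    """Two-layer ReLU network with 1/sqrt(m) (NTK) scaling."""
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

def param_grads(X, W, a):
    """Per-sample gradient of f(x) w.r.t. all parameters (a and W), flattened."""
    pre = X @ W.T                                    # (n, m) pre-activations
    act = np.maximum(pre, 0.0)                       # relu(w_j . x)
    dact = (pre > 0.0).astype(float)                 # relu'(w_j . x)
    grad_a = act / np.sqrt(m)                        # df/da_j, shape (n, m)
    grad_W = (a * dact)[:, :, None] * X[:, None, :] / np.sqrt(m)   # df/dw_j, shape (n, m, d)
    return np.concatenate([grad_a, grad_W.reshape(n, -1)], axis=1)

# The empirical NTK is the Gram matrix of these gradients; in the lazy-training
# regime, gradient descent behaves like kernel regression with this kernel.
G = param_grads(X, W, a)
ntk = G @ G.T
print("empirical NTK eigenvalues:", np.round(np.linalg.eigvalsh(ntk), 3))

# One SGD step on the output weights for the loss (1/2n) * ||f(X) - y||^2.
residual = forward(X, W, a) - y
grad_loss_a = (np.maximum(X @ W.T, 0.0) / np.sqrt(m)).T @ residual / n
a = a - 0.1 * grad_loss_a
print("loss before / after one step:",
      0.5 * np.mean(residual ** 2),
      0.5 * np.mean((forward(X, W, a) - y) ** 2))

This finite-width Gram matrix is the object whose infinite-width limit defines the RKHS in the entry above; by contrast, the mean-field analysis of the main paper tracks the full distribution of the hidden neurons rather than a fixed kernel at initialization.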