Magnitude and Angle Dynamics in Training Single ReLU Neurons
- URL: http://arxiv.org/abs/2209.13394v1
- Date: Tue, 27 Sep 2022 13:58:46 GMT
- Title: Magnitude and Angle Dynamics in Training Single ReLU Neurons
- Authors: Sangmin Lee, Byeongsu Sim, Jong Chul Ye
- Abstract summary: We decompose the gradient flow $w(t)$ into magnitude $\|w(t)\|$ and angle $\phi(t) := \pi - \theta(t)$ components.
We find that small-scale initialization induces slow convergence for deep single ReLU neurons.
- Score: 45.886537625951256
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To understand the learning dynamics of deep ReLU networks, we
investigate the dynamic system of gradient flow $w(t)$ by decomposing it into
magnitude $\|w(t)\|$ and angle $\phi(t) := \pi - \theta(t)$ components. In particular, for
multi-layer single ReLU neurons with spherically symmetric data distribution
and the square loss function, we provide upper and lower bounds for magnitude
and angle components to describe the dynamics of gradient flow. Using the
obtained bounds, we conclude that small-scale initialization induces slow
convergence for deep single ReLU neurons. Finally, by exploiting the
relation of gradient flow and gradient descent, we extend our results to the
gradient descent approach. All theoretical results are verified by experiments.
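As a rough illustration of this magnitude-and-angle decomposition, the minimal sketch below (our own, not the authors' code) trains a single ReLU neuron $x \mapsto \max(\langle w, x\rangle, 0)$ with gradient descent on the square loss over spherically symmetric Gaussian inputs labeled by a hypothetical unit-norm teacher $w_*$, and tracks $\|w(t)\|$ together with $\phi(t) = \pi - \theta(t)$, where $\theta(t)$ is taken here as the angle between $w(t)$ and $w_*$. The teacher, step size, sample size, and small-scale initialization are illustrative assumptions.

```python
# Minimal sketch (assumptions: unit-norm teacher w_star, Gaussian inputs,
# step size 1e-2, small-scale init 1e-2): track ||w(t)|| and phi(t) = pi - theta(t)
# while training a single ReLU neuron with gradient descent on the square loss.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 4096
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)          # hypothetical unit-norm teacher neuron

X = rng.normal(size=(n, d))               # spherically symmetric (Gaussian) data
y = np.maximum(X @ w_star, 0.0)           # teacher labels through the ReLU

w = 1e-2 * rng.normal(size=d)             # small-scale initialization (assumption)
lr, steps = 1e-2, 2001

for t in range(steps):
    pre = X @ w
    resid = np.maximum(pre, 0.0) - y
    # gradient of the (half) mean squared error (1/2n) * sum(resid^2) w.r.t. w
    grad = X.T @ (resid * (pre > 0)) / n
    w -= lr * grad

    if t % 500 == 0:
        mag = np.linalg.norm(w)                      # magnitude ||w(t)||
        cos = np.clip(w @ w_star / mag, -1.0, 1.0)   # w_star has unit norm
        theta = np.arccos(cos)                       # angle to the teacher
        phi = np.pi - theta                          # phi(t) := pi - theta(t)
        print(f"step {t:4d}  ||w|| = {mag:.4f}  theta = {theta:.4f}  phi = {phi:.4f}")
```

Printing these quantities gives the kind of magnitude and angle trajectories that the paper's bounds describe; note that the paper analyzes the multi-layer single-neuron case, which this single-layer sketch does not capture.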
Related papers
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - Leveraging Continuous Time to Understand Momentum When Training Diagonal Linear Networks [21.176224458126285]
We use a continuous-time approach in the analysis of momentum gradient descent with step size $\gamma$ and momentum parameter $\beta$.
We prove that small values of $\lambda$ help to recover sparse solutions.
arXiv Detail & Related papers (2024-03-08T13:21:07Z) - Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations.
For gradient flow, we leverage recent work on the implicit bias of homogeneous neural networks to show that, asymptotically, gradient flow produces a neural network with rank at most two.
For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training.
arXiv Detail & Related papers (2022-10-13T15:09:54Z) - Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs [19.401271427657395]
The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution.
This article presents the gradient flow dynamics of one-hidden-layer ReLU networks trained on the mean squared error at small initialisation.
arXiv Detail & Related papers (2022-06-02T09:01:25Z) - Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via stochastic gradient descent (SGD).
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z) - Continuous vs. Discrete Optimization of Deep Neural Networks [15.508460240818575]
We show that over deep neural networks with homogeneous activations, gradient flow trajectories enjoy favorable curvature.
This finding allows us to translate an analysis of gradient flow over deep linear neural networks into a guarantee that gradient descent efficiently converges to a global minimum.
We hypothesize that the theory of gradient flows will be central to unraveling mysteries behind deep learning.
arXiv Detail & Related papers (2021-07-14T10:59:57Z) - Channel-Directed Gradients for Optimization of Convolutional Neural Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z) - The Impact of the Mini-batch Size on the Variance of Gradients in Stochastic Gradient Descent [28.148743710421932]
The mini-batch stochastic gradient descent (SGD) algorithm is widely used in training machine learning models.
We study SGD dynamics under linear regression and two-layer linear networks, with an easy extension to deeper linear networks.
arXiv Detail & Related papers (2020-04-27T20:06:11Z) - On the Convex Behavior of Deep Neural Networks in Relation to the Layers' Width [99.24399270311069]
We observe that for wider networks, minimizing the loss with gradient descent maneuvers through surfaces of positive curvature at the start and end of training, and through surfaces of close to zero curvature in between.
In other words, it seems that during crucial parts of the training process, the Hessian in wide networks is dominated by the component G.
arXiv Detail & Related papers (2020-01-14T16:30:01Z)