How Does Gradient Descent Learn Features -- A Local Analysis for Regularized Two-Layer Neural Networks
- URL: http://arxiv.org/abs/2406.01766v1
- Date: Mon, 3 Jun 2024 20:15:28 GMT
- Title: How Does Gradient Descent Learn Features -- A Local Analysis for Regularized Two-Layer Neural Networks
- Authors: Mo Zhou, Rong Ge
- Abstract summary: The ability to learn useful features is one of the major advantages of neural networks.
Recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning.
- Score: 18.809547338077905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to learn useful features is one of the major advantages of neural networks. Although recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning, many works also demonstrate the potential for neural networks to go beyond the NTK regime and perform feature learning. Recently, a line of work highlighted the feature learning capabilities of the early stages of gradient-based training. In this paper we consider another mechanism for feature learning via gradient descent, through a local convergence analysis. We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions. Our results demonstrate that feature learning not only happens at the initial gradient steps, but can also occur towards the end of training.
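The setup the abstract describes can be illustrated with a minimal sketch: gradient descent on a two-layer ReLU network with an L2 (weight-decay) regularizer, fitting a teacher with a single ground-truth direction. This is an illustrative toy, not the paper's exact objective or analysis; the teacher direction `w_star`, the widths, and all hyperparameters below are assumptions chosen for the demo.

```python
import numpy as np

# Hypothetical toy setup (NOT the paper's exact construction):
# a two-layer ReLU student trained by gradient descent on a
# weight-decay regularized squared loss, with a single-ReLU teacher.
rng = np.random.default_rng(0)
d, m, n = 10, 32, 200                   # input dim, hidden width, samples
w_star = np.zeros(d); w_star[0] = 1.0   # assumed ground-truth direction
X = rng.standard_normal((n, d))
y = np.maximum(X @ w_star, 0.0)         # teacher labels: one ReLU feature

W = 0.1 * rng.standard_normal((m, d))   # first-layer weights (small init)
a = 0.1 * rng.standard_normal(m)        # second-layer weights
lam, lr = 1e-3, 0.05                    # regularization strength, step size

for step in range(2000):
    H = np.maximum(X @ W.T, 0.0)        # hidden activations, shape (n, m)
    err = H @ a - y
    loss = 0.5 * np.mean(err**2) + lam * (np.sum(W**2) + np.sum(a**2))
    # gradients of the regularized objective
    gH = np.outer(err, a) * (H > 0)     # backprop through ReLU, (n, m)
    gW = gH.T @ X / n + 2 * lam * W
    ga = H.T @ err / n + 2 * lam * a
    W -= lr * gW
    a -= lr * ga

# cosine alignment of the most influential neuron with the teacher direction
top = np.argmax(np.abs(a))
align = abs(W[top] @ w_star) / (np.linalg.norm(W[top]) * np.linalg.norm(w_star) + 1e-12)
print(f"final loss {loss:.4f}, top-neuron alignment {align:.3f}")
```

In runs of this sketch, the regularizer penalizes weight mass spent off the teacher direction, so neurons that matter tend to align with `w_star` as the loss drops, which is the qualitative phenomenon the abstract's local analysis formalizes.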
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Demystifying Lazy Training of Neural Networks from a Macroscopic Viewpoint [5.9954962391837885]
We study the gradient descent dynamics of neural networks through the lens of macroscopic limits.
Our study reveals that gradient descent can rapidly drive deep neural networks to zero training loss.
Our approach draws inspiration from the Neural Tangent Kernel (NTK) paradigm.
arXiv Detail & Related papers (2024-04-07T08:07:02Z) - Half-Space Feature Learning in Neural Networks [2.3249139042158853]
There currently exist two extreme viewpoints for neural network feature learning.
We argue that neither interpretation is likely to be correct, based on a novel viewpoint.
We use this alternate interpretation to motivate a model called the Deep Linearly Gated Network (DLGN).
arXiv Detail & Related papers (2024-04-05T12:03:19Z) - Provable Guarantees for Neural Networks via Gradient Feature Learning [15.413985018920018]
This work proposes a unified analysis framework for two-layer networks trained by gradient descent.
The framework is centered around the principle of feature learning from prototypical gradients, and its effectiveness is demonstrated by applications in several problems.
arXiv Detail & Related papers (2023-10-19T01:45:37Z) - A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks [43.281323350357404]
Feature learning is thought to be one of the fundamental reasons for the success of deep neural networks.
We show that with a learning rate that grows with the sample size, such training in fact introduces multiple rank-one components.
arXiv Detail & Related papers (2023-10-11T20:55:02Z) - Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? An analysis using stochastic processes [0.0]
We study a model for supervised learning in biological neural networks (BNNs).
We show that a gradient step occurs approximately when each learning opportunity is processed by many local updates.
This result suggests that gradient descent may indeed play a role in optimizing BNNs.
arXiv Detail & Related papers (2023-09-10T18:12:52Z) - Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.