The Influence of Learning Rule on Representation Dynamics in Wide Neural
Networks
- URL: http://arxiv.org/abs/2210.02157v2
- Date: Thu, 25 May 2023 19:28:03 GMT
- Title: The Influence of Learning Rule on Representation Dynamics in Wide Neural
Networks
- Authors: Blake Bordelon, Cengiz Pehlevan
- Abstract summary: We analyze infinite-width deep networks trained with gradient descent (GD) and biologically plausible alternatives, including feedback alignment (FA), direct feedback alignment (DFA), and error modulated Hebbian learning (Hebb).
We show that, for each of these learning rules, the evolution of the output function at infinite width is governed by a time-varying effective neural tangent kernel (eNTK).
In the lazy training limit, this eNTK is static and does not evolve, while in the rich mean-field regime this kernel's evolution can be determined self-consistently with dynamical mean field theory (DMFT).
- Score: 18.27510863075184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is unclear how changing the learning rule of a deep neural network alters
its learning dynamics and representations. To gain insight into the
relationship between learned features, function approximation, and the learning
rule, we analyze infinite-width deep networks trained with gradient descent
(GD) and biologically-plausible alternatives including feedback alignment (FA),
direct feedback alignment (DFA), and error modulated Hebbian learning (Hebb),
as well as gated linear networks (GLN). We show that, for each of these
learning rules, the evolution of the output function at infinite width is
governed by a time-varying effective neural tangent kernel (eNTK). In the lazy
training limit, this eNTK is static and does not evolve, while in the rich
mean-field regime this kernel's evolution can be determined self-consistently
with dynamical mean field theory (DMFT). This DMFT enables comparisons of the
feature and prediction dynamics induced by each of these learning rules. In the
lazy limit, we find that DFA and Hebb can only learn using the last layer
features, while full FA can utilize earlier layers with a scale determined by
the initial correlation between feedforward and feedback weight matrices. In
the rich regime, DFA and FA utilize a temporally evolving and depth-dependent
NTK. Counterintuitively, we find that FA networks trained in the rich regime
exhibit more feature learning if initialized with smaller correlation between
the forward and backward pass weights. GLNs admit a very simple formula for
their lazy limit kernel and preserve conditional Gaussianity of their
preactivations under gating functions. Error modulated Hebb rules show very
small task-relevant alignment of their kernels and perform most task-relevant
learning in the last layer.
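For orientation, the eNTK picture above can be stated compactly: for a squared-error loss, the infinite-width predictions evolve as df(x)/dt = -eta * sum over x' of K_eff(x, x'; t) (f(x') - y(x')), where K_eff pairs the output's parameter sensitivities with the update directions prescribed by the learning rule. The NumPy sketch below is not code from the paper; it is a minimal two-layer tanh illustration with an assumed parameterization f(x) = a . phi(W x) / sqrt(m) and an illustrative knob rho for the initial correlation between the forward readout a and the feedback vector b used by FA. At initialization it shows the hidden-layer contribution to the FA eNTK scaling with that correlation, in line with the lazy-limit statement above.

```python
# Minimal sketch (assumptions: two-layer tanh net, f(x) = a . phi(W x) / sqrt(m),
# illustrative width/dimension; rho is a hypothetical correlation knob).
import numpy as np

rng = np.random.default_rng(0)
m, d = 4096, 32                     # hidden width and input dimension (illustrative)

W = rng.normal(size=(m, d))         # first-layer weights (unit-variance entries)
a = rng.normal(size=m)              # forward readout weights
x = rng.normal(size=d);  x /= np.linalg.norm(x)    # two unit-norm test inputs
xp = rng.normal(size=d); xp /= np.linalg.norm(xp)

phi = np.tanh
dphi = lambda z: 1.0 - np.tanh(z) ** 2

def entk_terms(backward_vec):
    # The two contributions to the effective NTK entry K(x, x') when the hidden
    # layer is updated with `backward_vec` in place of the readout weights a
    # (gradient descent: a itself; feedback alignment: a fixed random b).
    hx, hxp = W @ x, W @ xp
    readout = phi(hx) @ phi(hxp) / m
    hidden = (x @ xp) * np.sum(a * backward_vec * dphi(hx) * dphi(hxp)) / m
    return readout, hidden

print("GD   readout/hidden NTK terms:", entk_terms(a))
for rho in (0.0, 0.5, 1.0):
    # feedback vector with a tunable initial correlation rho to the readout a
    b = rho * a + np.sqrt(1.0 - rho ** 2) * rng.normal(size=m)
    print(f"FA (rho={rho}) readout/hidden terms:", entk_terms(b))
# rho = 1 recovers the GD kernel exactly; as rho -> 0 the hidden-layer term is
# O(1/sqrt(m)) and vanishes at large width, so in the lazy limit an FA/DFA
# network with uncorrelated feedback can only move its outputs through
# last-layer features, matching the abstract's claim.
```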
Related papers
- Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning [26.07501953088188]
We study how unbalanced layer-specific initialization variances and learning rates determine the degree of feature learning.
Our analysis reveals that they conspire to influence the learning regime through a set of conserved quantities.
We provide evidence that this unbalanced rich regime drives feature learning in deep finite-width networks, promotes interpretability of early layers in CNNs, reduces the sample complexity of learning hierarchical data, and decreases the time to grokking in modular arithmetic.
arXiv Detail & Related papers (2024-06-10T10:42:37Z)
- Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks [47.73646927060476]
We analyze the dynamics of finite width effects in wide but finite feature learning neural networks.
Our results are non-perturbative in the strength of feature learning.
arXiv Detail & Related papers (2023-04-06T23:11:49Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- On Feature Learning in Neural Networks with Global Convergence Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF).
We show that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF.
We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart.
arXiv Detail & Related papers (2022-04-22T15:56:43Z)
- Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Renormalization Group [4.56877715768796]
We study the statistical mechanics of learning in Deep Linear Neural Networks (DLNNs) in which the input-output function of an individual unit is linear.
We solve exactly for the network properties following supervised learning, using an equilibrium Gibbs distribution in the weight space.
Our numerical simulations reveal that despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks with modest depth.
arXiv Detail & Related papers (2020-12-07T20:08:31Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitude of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks [43.860358308049044]
In this work, we show that these common perceptions can be completely false in the early phase of learning.
We argue that this surprising simplicity can persist in networks with more layers and with convolutional architectures.
arXiv Detail & Related papers (2020-06-25T17:42:49Z)
- Scalable Partial Explainability in Neural Networks via Flexible Activation Functions [13.71739091287644]
High-dimensional features and decisions given by deep neural networks (NN) require new algorithms and methods to expose their mechanisms.
Current state-of-the-art NN interpretation methods focus more on the direct relationship between NN outputs and inputs than on the NN structure and operations themselves.
In this paper, we achieve a partially explainable learning model by symbolically explaining the role of activation functions (AF) under a scalable topology.
arXiv Detail & Related papers (2020-06-10T20:30:15Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate the transition between the kernel and rich regimes empirically for more complex matrix factorization models and multilayer non-linear networks; a minimal numerical sketch of this lazy-to-rich dial appears after this list.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
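As noted in the last entry above, a simple way to see the kernel-to-rich transition numerically is the standard output-scaling construction: multiply a network's output by alpha and its gradient-descent step size by 1/alpha^2. Large alpha then keeps the weights, and hence the tangent kernel, essentially frozen (the lazy/kernel regime), while small alpha forces the weights to move and features to form (the rich regime). The sketch below is not taken from any of the papers listed here; the two-layer tanh network, the width, the alpha values, and the step counts are all illustrative assumptions.

```python
# Minimal sketch (assumptions: two-layer tanh net, tiny random regression task,
# illustrative width, alpha values, and step counts).
import numpy as np

rng = np.random.default_rng(1)
m, d, n = 2048, 16, 4                      # width, input dim, training set size
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = rng.normal(size=n)

phi = np.tanh
dphi = lambda z: 1.0 - np.tanh(z) ** 2

def train(alpha, steps=6000, eta0=0.02):
    # Output-scaled net f(x) = alpha * a . phi(W x) / sqrt(m); the step size is
    # rescaled by 1/alpha^2 so the initial function-space dynamics are
    # comparable across alpha (the usual lazy-training rescaling).
    W = rng.normal(size=(m, d))
    a = rng.normal(size=m)
    W0 = W.copy()
    lr = eta0 / alpha ** 2
    for _ in range(steps):
        H = X @ W.T                                    # (n, m) preactivations
        f = alpha * phi(H) @ a / np.sqrt(m)            # (n,) predictions
        err = f - y                                    # grad of 0.5 * sum of squared errors
        grad_a = alpha * phi(H).T @ err / np.sqrt(m)
        grad_W = alpha * ((dphi(H) * err[:, None]) * a).T @ X / np.sqrt(m)
        a -= lr * grad_a
        W -= lr * grad_W
    loss = 0.5 * np.sum((alpha * phi(X @ W.T) @ a / np.sqrt(m) - y) ** 2)
    w_move = np.linalg.norm(W - W0) / np.linalg.norm(W0)
    return loss, w_move

for alpha in (0.25, 1.0, 4.0):
    loss, w_move = train(alpha)
    print(f"alpha = {alpha:4.2f}   train loss = {loss:.2e}   relative change in W = {w_move:.3f}")
# Typically, large alpha leaves W (and hence the kernel) nearly frozen, the
# lazy / kernel regime, while small alpha produces substantial weight and
# feature movement, the rich regime; the exact numbers depend on the
# illustrative choices above.
```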