Principles for Initialization and Architecture Selection in Graph Neural
Networks with ReLU Activations
- URL: http://arxiv.org/abs/2306.11668v1
- Date: Tue, 20 Jun 2023 16:40:41 GMT
- Title: Principles for Initialization and Architecture Selection in Graph Neural
Networks with ReLU Activations
- Authors: Gage DeZoort, Boris Hanin
- Abstract summary: We show three principles for architecture selection in finite width graph neural networks (GNNs) with ReLU activations.
First, we theoretically derive what is essentially the unique generalization to ReLU GNNs of the well-known He-initialization.
Second, we prove in finite width vanilla ReLU GNNs that oversmoothing is unavoidable at large depth when using a fixed aggregation operator.
- Score: 17.51364577113718
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article derives and validates three principles for initialization and
architecture selection in finite width graph neural networks (GNNs) with ReLU
activations. First, we theoretically derive what is essentially the unique
generalization to ReLU GNNs of the well-known He-initialization. Our
initialization scheme guarantees that the average scale of network outputs and
gradients remains order one at initialization. Second, we prove in finite width
vanilla ReLU GNNs that oversmoothing is unavoidable at large depth when using
a fixed aggregation operator, regardless of initialization. We then prove that
using residual aggregation operators, obtained by interpolating a fixed
aggregation operator with the identity, provably alleviates oversmoothing at
initialization. Finally, we show that the common practice of using residual
connections with a fixup-type initialization provably avoids correlation
collapse in final layer features at initialization. Through ablation studies we
find that using the correct initialization, residual aggregation operators, and
residual connections in the forward pass significantly and reliably speeds up
early training dynamics in deep ReLU GNNs on a variety of tasks.
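The three ingredients translate naturally into code. The PyTorch-style sketch below is a minimal illustration under stated assumptions, not the authors' reference implementation: the plain 2/fan_in He scaling stands in for the paper's exact GNN variance (derived in the paper, and possibly dependent on the aggregation operator), `alpha` is an assumed interpolation weight for the residual aggregation operator (1 - alpha) * I + alpha * A, and `res_scale` is a fixup-type scalar that zeroes the residual branch at initialization.

```python
import torch
import torch.nn as nn

class ResidualGCNLayer(nn.Module):
    """One GCN-style layer combining the three ingredients (illustrative
    sketch; layer and variable names are our own):
      1. He-style initialization: W ~ N(0, 2 / fan_in), the standard ReLU
         correction used here as a stand-in for the paper's exact scaling.
      2. Residual aggregation: A_res = (1 - alpha) * I + alpha * A, an
         interpolation between the identity and a fixed aggregation operator A.
      3. Residual (skip) connection with a fixup-type init: the residual
         branch starts at zero, so the block is the identity at init.
    """
    def __init__(self, dim: int, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha
        self.weight = nn.Parameter(torch.empty(dim, dim))
        # He-style initialization (variance 2 / fan_in for ReLU).
        nn.init.normal_(self.weight, mean=0.0, std=(2.0 / dim) ** 0.5)
        # Fixup-type scalar on the residual branch, initialized to zero.
        self.res_scale = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # Residual aggregation operator: interpolate A with the identity.
        n = A.shape[0]
        A_res = (1.0 - self.alpha) * torch.eye(n, device=A.device) + self.alpha * A
        h = torch.relu(A_res @ x @ self.weight)
        # Skip connection; res_scale = 0 at init => block acts as the identity.
        return x + self.res_scale * h

# Example: a depth-16 stack; at init each block is the identity (res_scale = 0),
# so final-layer features neither oversmooth nor blow up even at large depth.
layers = [ResidualGCNLayer(dim=64, alpha=0.5) for _ in range(16)]
```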
Related papers
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
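The mechanism being analyzed is interpolation between iterates: take one or more optimizer steps, then move only part of the way toward the new point. Below is a generic lookahead-style sketch of that update in PyTorch; it is our illustration of the idea, not the paper's algorithm, and `lam` and `inner_steps` are assumed hyperparameters.

```python
import torch

def interpolated_step(params, inner_opt, closure, lam: float = 0.5, inner_steps: int = 1):
    """Generic lookahead-style update: run the inner optimizer for a few steps,
    then move each parameter only a fraction `lam` of the way toward the new
    iterate. Illustrative sketch of linear interpolation between iterates."""
    params = list(params)
    # Snapshot the reference point.
    ref = [p.detach().clone() for p in params]
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        loss = closure()          # forward pass returning the loss
        loss.backward()
        inner_opt.step()
    # Linear interpolation: p <- ref + lam * (p - ref).
    with torch.no_grad():
        for p, r in zip(params, ref):
            p.copy_(r + lam * (p - r))
```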
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK [86.45209429863858]
We study training one-hidden-layer ReLU networks in the neural tangent kernel (NTK) regime.
We show that the neural networks possess a different limiting kernel, which we call the bias-generalized NTK.
We also study various properties of the neural networks with this new kernel.
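For context on the NTK regime: at finite width, the empirical neural tangent kernel is Theta(x, x') = <grad_theta f(x), grad_theta f(x')>. The sketch below computes this generic kernel for a one-hidden-layer ReLU network with a large bias; it is not the paper's bias-generalized limiting kernel, and the bias value 2.0 is an arbitrary illustrative choice.

```python
import torch

def empirical_ntk(model, x1, x2):
    """Empirical NTK of a scalar-output model: inner product of parameter
    gradients at two inputs. Generic finite-width kernel, for illustration."""
    def grad_vec(x):
        out = model(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, list(model.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])
    return torch.dot(grad_vec(x1), grad_vec(x2))

# One-hidden-layer ReLU network with a large bias (scale chosen for illustration).
d, width = 8, 256
net = torch.nn.Sequential(torch.nn.Linear(d, width), torch.nn.ReLU(),
                          torch.nn.Linear(width, 1))
with torch.no_grad():
    net[0].bias.fill_(2.0)  # "large bias" regime, illustrative value
print(empirical_ntk(net, torch.randn(d), torch.randn(d)).item())
```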
arXiv Detail & Related papers (2023-01-01T02:11:39Z)
- Dynamical Isometry for Residual Networks [8.21292084298669]
We show that RISOTTO achieves perfect dynamical isometry for residual networks with ReLU activation functions even for finite depth and width.
In experiments, we demonstrate that our approach outperforms schemes proposed to make Batch Normalization obsolete, including Fixup and SkipInit.
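Dynamical isometry means the singular values of the network's input-output Jacobian concentrate near 1 at initialization. A generic numerical check is sketched below (our illustration, unrelated to the RISOTTO scheme itself).

```python
import torch

def jacobian_singular_values(model, x):
    """Singular values of the input-output Jacobian at input x.
    Dynamical isometry at init <=> these values are all close to 1."""
    J = torch.autograd.functional.jacobian(lambda inp: model(inp), x)
    return torch.linalg.svdvals(J.reshape(model(x).numel(), x.numel()))
```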
arXiv Detail & Related papers (2022-10-05T17:33:23Z)
- AutoInit: Automatic Initialization via Jacobian Tuning [7.9603223299524535]
We introduce a new and cheap algorithm that automatically finds a good initialization for general feed-forward DNNs.
We solve the dynamics of the algorithm for fully connected networks with ReLU and derive conditions for its convergence.
We apply our method to ResMLP and VGG architectures, where the automatic one-shot initialization found by our method shows good performance on vision tasks.
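As a rough illustration of what tuning an initialization against the forward signal looks like, one can rescale each layer so that a reference batch keeps a target scale as it propagates. This is our generic one-shot rescaling, not the AutoInit algorithm; `target` and the use of the activation standard deviation are assumptions.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def tune_init(layers, x, target: float = 1.0):
    """Rescale each Linear layer so the per-layer output scale stays near
    `target` for a reference batch x. Generic data-dependent rescaling,
    shown only to illustrate the idea of Jacobian/scale tuning."""
    h = x
    for layer in layers:
        h = layer(h)
        if isinstance(layer, nn.Linear):
            scale = h.std().clamp_min(1e-8)
            layer.weight.mul_(target / scale)
            if layer.bias is not None:
                layer.bias.mul_(target / scale)
            h = h * (target / scale)
    return layers
```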
arXiv Detail & Related papers (2022-06-27T18:14:51Z)
- On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks [1.0323063834827415]
We present a novel analysis of single-hidden-layer linear networks trained under gradient flow.
We show that the squared loss converges exponentially to its optimum.
We derive a novel non-asymptotic upper-bound on the distance between the trained network and the min-norm solution.
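For reference, the setting can be written as gradient flow on the squared loss of a single-hidden-layer linear network; the notation below is ours and may differ from the paper's.

```latex
% Single-hidden-layer linear network trained by gradient flow (notation ours):
\[
  L(W_1, W_2) = \tfrac{1}{2}\,\lVert W_2 W_1 X - Y \rVert_F^2,
  \qquad
  \dot{W}_i(t) = -\nabla_{W_i} L\bigl(W_1(t), W_2(t)\bigr), \quad i = 1, 2.
\]
% "Exponential convergence of the squared loss" is a statement of the form
\[
  L(t) - L^\ast \;\le\; e^{-ct}\,\bigl(L(0) - L^\ast\bigr)
  \quad \text{for some } c > 0 \text{ that depends on the initialization.}
\]
```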
arXiv Detail & Related papers (2021-05-13T15:13:51Z)
- Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization [79.42778415729475]
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and a better gradient propagation during the finetuning phase.
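The approach is to pretrain a linear autoencoder that memorizes input sequences and then reuse its weights to initialize the recurrent network. A minimal sketch of that transfer follows; the class, the 0.99 spectral scaling, and the exact mapping into `nn.RNN` are our assumptions for illustration, not the paper's recipe.

```python
import torch
import torch.nn as nn

class LinearSeqAutoencoder(nn.Module):
    """Linear autoencoder for sequences: h_t = A h_{t-1} + B x_t encodes the
    sequence; a linear decoder reconstructs the inputs from the final state."""
    def __init__(self, input_dim: int, hidden_dim: int, seq_len: int):
        super().__init__()
        self.A = nn.Parameter(torch.eye(hidden_dim) * 0.99)
        self.B = nn.Parameter(torch.randn(hidden_dim, input_dim) / hidden_dim ** 0.5)
        self.decoder = nn.Linear(hidden_dim, input_dim * seq_len)

    def forward(self, x):                      # x: (batch, seq_len, input_dim)
        h = x.new_zeros(x.shape[0], self.A.shape[0])
        for t in range(x.shape[1]):
            h = h @ self.A.T + x[:, t] @ self.B.T
        return self.decoder(h)

def init_rnn_from_autoencoder(rnn: nn.RNN, ae: LinearSeqAutoencoder):
    """Copy the pretrained autoencoder's state-update matrices into the RNN."""
    with torch.no_grad():
        rnn.weight_hh_l0.copy_(ae.A)
        rnn.weight_ih_l0.copy_(ae.B)
```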
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
- Fractional moment-preserving initialization schemes for training deep neural networks [1.14219428942199]
A traditional approach to initializing deep neural networks (DNNs) is to sample the network weights randomly so that the variance of the pre-activations is preserved.
In this paper, we show that weights and therefore pre-activations can be modeled with a heavy-tailed distribution.
We show through numerical experiments that our schemes can improve the training and test performance.
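As one concrete (generic) instance of heavy-tailed initialization, weights can be drawn from a symmetric alpha-stable distribution instead of a Gaussian; the stability index `alpha = 1.8` and the `fan_in ** (-1/alpha)` scale below are illustrative choices, not the paper's prescribed scheme.

```python
from scipy.stats import levy_stable

def heavy_tailed_init(fan_in, fan_out, alpha=1.8, seed=None):
    """Sample a weight matrix from a symmetric alpha-stable distribution.
    alpha < 2 gives heavy tails; alpha = 2 recovers the Gaussian. The
    fan_in ** (-1/alpha) scale is a heuristic, used only for illustration."""
    scale = fan_in ** (-1.0 / alpha)
    return levy_stable.rvs(alpha, beta=0.0, scale=scale,
                           size=(fan_out, fan_in), random_state=seed)
```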
arXiv Detail & Related papers (2020-05-25T01:10:01Z)
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
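Tracking a Hessian norm during training is commonly done with Hessian-vector products and power iteration; the sketch below is that generic diagnostic, not the paper's estimator.

```python
import torch

def hessian_spectral_norm(loss, params, iters: int = 20):
    """Estimate the spectral norm (largest |eigenvalue|) of the Hessian of
    `loss` w.r.t. `params` by power iteration on Hessian-vector products."""
    params = list(params)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Start from a random unit vector.
    v = [torch.randn_like(p) for p in params]
    v_norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
    v = [vi / v_norm for vi in v]
    for _ in range(iters):
        # Hessian-vector product: gradient of <grad(loss), v> w.r.t. params.
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / norm for h in hv]
    return norm.item()
```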
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with the nonconvexity of the underlying optimization problem renders learning sensitive to the choice of initialization.
We propose fusing neighboring layers of deeper networks that are initialized with random weights.
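Fusing two neighboring layers is easiest to see in the purely linear case, where consecutive weight matrices collapse into one affine map; the toy sketch below shows only that base case and ignores the nonlinearity that the paper's method has to handle.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_linear_pair(l1: nn.Linear, l2: nn.Linear) -> nn.Linear:
    """Fuse two consecutive Linear layers into one:
    y = W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2).
    Exact only when there is no nonlinearity between them."""
    fused = nn.Linear(l1.in_features, l2.out_features)
    fused.weight.copy_(l2.weight @ l1.weight)
    fused.bias.copy_(l2.weight @ l1.bias + l2.bias)
    return fused
```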
arXiv Detail & Related papers (2020-01-28T18:25:15Z)