AutoInit: Automatic Initialization via Jacobian Tuning
- URL: http://arxiv.org/abs/2206.13568v1
- Date: Mon, 27 Jun 2022 18:14:51 GMT
- Title: AutoInit: Automatic Initialization via Jacobian Tuning
- Authors: Tianyu He, Darshil Doshi and Andrey Gromov
- Abstract summary: We introduce a new and cheap algorithm that allows one to find a good initialization automatically for general feed-forward DNNs.
We solve the dynamics of the algorithm for fully connected networks with ReLU and derive conditions for its convergence.
We apply our method to ResMLP and VGG architectures, where the automatic one-shot initialization found by our method shows good performance on vision tasks.
- Score: 7.9603223299524535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Good initialization is essential for training Deep Neural Networks (DNNs).
Oftentimes such an initialization is found through a trial-and-error approach,
which has to be applied anew every time an architecture is substantially
modified, or is inherited from smaller networks, leading to sub-optimal
initialization. In this work we introduce a new and cheap algorithm that
allows one to find a good initialization automatically for general
feed-forward DNNs. The algorithm utilizes the Jacobian between adjacent network
blocks to tune the network hyperparameters to criticality. We solve the
dynamics of the algorithm for fully connected networks with ReLU and derive
conditions for its convergence. We then extend the discussion to more general
architectures with BatchNorm and residual connections. Finally, we apply our
method to ResMLP and VGG architectures, where the automatic one-shot
initialization found by our method shows good performance on vision tasks.
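To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the idea in the abstract: estimate the norm of the Jacobian between adjacent blocks with random vector-Jacobian products and rescale each block's weights until that norm is close to one (criticality). The layer widths, probe count, tolerance, and one-step rescaling rule are illustrative assumptions rather than the authors' exact algorithm; for plain fully connected ReLU blocks the procedure should settle near the familiar He-initialization scale.

```python
# Hypothetical sketch (not the authors' code): tune each block's weight scale so
# that the squared Frobenius norm of the block-to-block Jacobian, per output
# dimension, is ~1. For Linear+ReLU blocks the fixed point is He initialization
# (weight std ~ sqrt(2 / fan_in)).
import torch
import torch.nn as nn

torch.manual_seed(0)

def block_jacobian_norm(block, x, n_probes=16):
    """Estimate ||J||_F^2 / n_out (batch-averaged) for the Jacobian J of the
    block's output w.r.t. its input, using random vector-Jacobian products."""
    x = x.detach().clone().requires_grad_(True)
    y = block(x)
    est = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(y)
        (g,) = torch.autograd.grad(y, x, grad_outputs=v, retain_graph=True)
        est += (g.pow(2).sum() / v.pow(2).sum()).item()
    return est / n_probes

# Toy fully connected ReLU network, viewed as a stack of (Linear, ReLU) blocks.
widths = [512, 512, 512, 512]               # illustrative sizes
blocks = nn.ModuleList(
    nn.Sequential(nn.Linear(widths[i], widths[i + 1], bias=False), nn.ReLU())
    for i in range(len(widths) - 1)
)

x = torch.randn(256, widths[0])             # random probe inputs
for block in blocks:
    for _ in range(20):                     # a few tuning steps per block
        jnorm = block_jacobian_norm(block, x)
        if abs(jnorm - 1.0) < 1e-2:         # close enough to criticality
            break
        with torch.no_grad():               # rescale weights to push jnorm -> 1
            block[0].weight.mul_(jnorm ** -0.5)
    with torch.no_grad():
        x = block(x)                        # propagate inputs to the next block
    print(f"tuned weight std: {block[0].weight.std().item():.4f}")  # ~ sqrt(2/512)
```

For a plain Linear+ReLU block, multiplying the weights by s scales the block Jacobian by s, so a single rescaling by jnorm ** -0.5 already lands close to the critical point; the small inner loop only absorbs the noise of the stochastic estimate and accommodates more general blocks.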
Related papers
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can easily change once the networks are trained with better, architecture-aware hyperparameters.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- Principles for Initialization and Architecture Selection in Graph Neural Networks with ReLU Activations [17.51364577113718]
We show three principles for architecture selection in finite width graph neural networks (GNNs) with ReLU activations.
First, we theoretically derive what is essentially the unique generalization to ReLU GNNs of the well-known He-initialization.
Second, we prove that in finite-width vanilla ReLU GNNs, oversmoothing is unavoidable at large depth when using a fixed aggregation operator.
arXiv Detail & Related papers (2023-06-20T16:40:41Z)
- Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK [86.45209429863858]
We study training one-hidden-layer ReLU networks in the neural tangent kernel (NTK) regime.
We show that the neural networks possess a different limiting kernel, which we call the bias-generalized NTK.
We also study various properties of the neural networks with this new kernel.
arXiv Detail & Related papers (2023-01-01T02:11:39Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under a norm constraint.
Generalizing this sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm automatically finds a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- A Robust Initialization of Residual Blocks for Effective ResNet Training without Batch Normalization [0.9449650062296823]
Batch Normalization is an essential component of all state-of-the-art neural network architectures.
We show that weight initialization is key to training ResNet-like normalization-free networks.
We show that this modified architecture achieves competitive results on CIFAR-10 without further regularization or algorithmic modifications.
arXiv Detail & Related papers (2021-12-23T01:13:15Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
In theory, our method requires much fewer communication rounds while keeping the number of updates comparable.
Experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of stochastic gradient descent combined with the nonconvexity of the underlying optimization problem renders parameter learning susceptible to initialization.
We propose initializing shallower networks by fusing neighboring layers of deeper networks that are trained with random initialization.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)