Pretraining a Neural Network before Knowing Its Architecture
- URL: http://arxiv.org/abs/2207.10049v1
- Date: Wed, 20 Jul 2022 17:27:50 GMT
- Title: Pretraining a Neural Network before Knowing Its Architecture
- Authors: Boris Knyazev
- Abstract summary: Training large neural networks is possible by training a smaller hypernetwork that predicts parameters for the large ones.
A recently released Graph HyperNetwork (GHN) trained this way on one million smaller ImageNet architectures is able to predict parameters for large unseen networks such as ResNet-50.
While networks with predicted parameters lose performance on the source task, the predicted parameters have been found useful for fine-tuning on other tasks.
- Score: 2.170169149901781
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training large neural networks is possible by training a smaller hypernetwork
that predicts parameters for the large ones. A recently released Graph
HyperNetwork (GHN) trained this way on one million smaller ImageNet
architectures is able to predict parameters for large unseen networks such as
ResNet-50. While networks with predicted parameters lose performance on the
source task, the predicted parameters have been found useful for fine-tuning on
other tasks. We study if fine-tuning based on the same GHN is still useful on
novel strong architectures that were published after the GHN had been trained.
We found that for recent architectures such as ConvNeXt, GHN initialization
becomes less useful than for ResNet-50. One potential reason is the increased
distribution shift of novel architectures from those used to train the GHN. We
also found that the predicted parameters lack the diversity necessary to
successfully fine-tune parameters with gradient descent. We alleviate this
limitation by applying simple post-processing techniques to predicted
parameters before fine-tuning them on a target task and improve fine-tuning of
ResNet-50 and ConvNeXt.
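The sketch below illustrates the workflow described in the abstract under stated assumptions: predict parameters for a target architecture with a pretrained GHN, apply a simple post-processing step, and fine-tune. The `load_pretrained_ghn` / `ghn.predict` calls are hypothetical placeholders for a released GHN checkpoint, and the Gaussian-noise post-processing is only an illustrative example of a diversity-increasing technique, not necessarily the paper's exact method.

```python
# Minimal sketch of the workflow above: predict parameters with a pretrained
# GHN, lightly post-process them, then fine-tune on a target task.
# `load_pretrained_ghn` / `ghn.predict` are hypothetical stand-ins, and the
# noise injection is only an illustrative post-processing choice.
import torch
import torchvision

def postprocess(state_dict, noise_std=1e-3):
    """Illustrative post-processing: add small Gaussian noise to predicted
    weights to increase their diversity before gradient-based fine-tuning."""
    return {k: v + noise_std * torch.randn_like(v) if v.is_floating_point() else v
            for k, v in state_dict.items()}

model = torchvision.models.resnet50(weights=None)  # architecture only, no weights

# Hypothetical GHN API (placeholder for a released Graph HyperNetwork):
# ghn = load_pretrained_ghn("ghn2_imagenet")
# predicted = ghn.predict(model)  # parameters in a single forward pass
predicted = {k: 0.02 * torch.randn_like(v)          # stand-in so the sketch runs
             for k, v in model.state_dict().items() if v.is_floating_point()}

model.load_state_dict(postprocess(predicted), strict=False)

# Fine-tune on the target task as usual.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()
x, y = torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2,))  # dummy batch
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

The perturbation is meant to increase the diversity of the predicted parameters, which the abstract identifies as a prerequisite for successful gradient-based fine-tuning.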
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable via our training procedure, including the gradient-based optimizer and regularizers, which limits flexibility; a quick parameter-count comparison is sketched below.
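As a concrete anchor for the claim above, here is a minimal, hypothetical comparison of a model's trainable-parameter count against a training-set size (both choices are arbitrary placeholders).

```python
# Toy check of the premise: does the model have at least as many trainable
# parameters as there are training samples? Model and sample count are arbitrary.
import torchvision

model = torchvision.models.resnet18(weights=None)
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
num_samples = 50_000  # e.g., a CIFAR-10-sized training set

print(f"parameters: {num_params:,}  samples: {num_samples:,}")
print("over-parameterized" if num_params >= num_samples else "under-parameterized")
```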
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- LoGAH: Predicting 774-Million-Parameter Transformers using Graph HyperNetworks with 1/100 Parameters [31.55846326336193]
Graph HyperNetworks (GHNs) have recently shown strong performance in initializing large vision models.
LoGAH allows us to predict the parameters of 774-million-parameter neural networks in a memory-efficient manner.
arXiv Detail & Related papers (2024-05-25T15:56:15Z)
- GHN-Q: Parameter Prediction for Unseen Quantized Convolutional Architectures via Graph Hypernetworks [80.29667394618625]
We conduct the first-ever study exploring the use of graph hypernetworks for predicting parameters of unseen quantized CNN architectures.
We focus on a reduced CNN search space and find that GHN-Q can in fact predict quantization-robust parameters for various 8-bit quantized CNNs; a toy illustration of 8-bit weight quantization follows.
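GHN-Q's pipeline is not reproduced here; the snippet below is only a toy simulation of per-tensor 8-bit uniform weight quantization, the kind of perturbation that quantization-robust predicted parameters would have to tolerate. The model choice is an arbitrary placeholder.

```python
# Toy per-tensor 8-bit symmetric quantize-dequantize of a CNN's weights.
# This is a generic simulation of quantization error, not the GHN-Q method.
import torch
import torchvision

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmax = 2 ** (num_bits - 1) - 1                  # e.g. 127 for 8 bits
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

model = torchvision.models.resnet18(weights=None)   # stands in for predicted params
with torch.no_grad():
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            module.weight.copy_(fake_quantize(module.weight))
# The quantized model can now be evaluated to measure the accuracy drop.
```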
arXiv Detail & Related papers (2022-08-26T08:00:02Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods; a generic gradual-pruning sketch follows.
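CGP itself prunes graph neural networks; as a rough, generic analogue of pruning a network gradually over several stages, here is a sketch using `torch.nn.utils.prune` on a toy MLP (the schedule, model, and data are placeholders).

```python
# Generic gradual magnitude pruning with torch.nn.utils.prune on a toy MLP.
# This is a rough analogue of staged pruning, not the CGP algorithm itself.
import torch
import torch.nn.utils.prune as prune

model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for amount in [0.2, 0.25, 0.33]:      # iterative amounts compound to ~60% sparsity
    for layer in model:
        if isinstance(layer, torch.nn.Linear):
            prune.l1_unstructured(layer, name="weight", amount=amount)
    # a few training steps between pruning stages (single dummy step shown)
    x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for layer in model:                   # make the final masks permanent
    if isinstance(layer, torch.nn.Linear):
        prune.remove(layer, "weight")
```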
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification [36.651329027209634]
LilNetX is an end-to-end trainable technique for neural networks.
It enables learning models with a specified accuracy-rate-computation trade-off.
arXiv Detail & Related papers (2022-04-06T17:59:10Z)
- An Experimental Study of the Impact of Pre-training on the Pruning of a Convolutional Neural Network [0.0]
In recent years, deep neural networks have seen wide success in various application domains.
Deep neural networks usually involve a large number of parameters, which correspond to the weights of the network.
Pruning methods notably attempt to reduce the size of the parameter set by identifying and removing irrelevant weights.
arXiv Detail & Related papers (2021-12-15T16:02:15Z)
- RGP: Neural Network Pruning through Its Regular Graph Structure [6.0686251332936365]
We study the graph structure of the neural network and propose regular graph based pruning (RGP) to perform one-shot neural network pruning.
Experiments show that the average shortest path length of the graph is negatively correlated with the classification accuracy of the corresponding neural network; a sketch of this graph measure appears below.
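The graph measure in question can be computed with standard tools; the sketch below evaluates the average shortest path length on an example regular graph using networkx. How RGP maps a neural network onto its relational graph is specific to the paper and is not reproduced here.

```python
# Computing the average shortest path length that RGP correlates with accuracy.
# The mapping from a neural network to its relational graph is paper-specific;
# here the measure is simply evaluated on an example regular graph.
import networkx as nx

graph = nx.random_regular_graph(d=4, n=64, seed=0)   # 4-regular graph, 64 nodes
if nx.is_connected(graph):
    aspl = nx.average_shortest_path_length(graph)
    print(f"average shortest path length: {aspl:.3f}")
```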
arXiv Detail & Related papers (2021-10-28T15:08:32Z)
- Parameter Prediction for Unseen Deep Architectures [23.79630072083828]
We study if we can use deep learning to directly predict parameters by exploiting the past knowledge of training other networks.
We propose a hypernetwork that can predict performant parameters in a single forward pass taking a fraction of a second, even on a CPU.
The proposed model achieves surprisingly good performance on unseen and diverse networks.
arXiv Detail & Related papers (2021-10-25T16:52:33Z)
- GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value; a simplified sketch of this heuristic follows.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
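A greatly simplified sketch of that heuristic: try a few candidate rescalings of the initialization, take one SGD step for each, and keep the one with the lowest post-step loss. GradInit itself learns a separate scale factor per layer by optimization; the single global scale, toy model, and batch below are placeholders.

```python
# Greatly simplified version of the heuristic: pick the initialization rescaling
# whose single SGD step yields the lowest loss. GradInit learns one scale per
# layer via optimization; here a single global scale is grid-searched instead.
import copy
import torch

def loss_after_one_step(model, batch, lr=0.1):
    x, y = batch
    criterion = torch.nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    opt.zero_grad()
    criterion(model(x), y).backward()
    opt.step()
    with torch.no_grad():
        return criterion(model(x), y).item()

base = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                           torch.nn.Linear(64, 10))
batch = (torch.randn(16, 32), torch.randint(0, 10, (16,)))

best_scale, best_loss = None, float("inf")
for scale in [0.25, 0.5, 1.0, 2.0, 4.0]:
    candidate = copy.deepcopy(base)
    with torch.no_grad():
        for p in candidate.parameters():
            p.mul_(scale)                 # rescale the initialization
    loss = loss_after_one_step(candidate, batch)
    if loss < best_loss:
        best_scale, best_loss = scale, loss

print(f"chosen global init scale: {best_scale} (post-step loss {best_loss:.3f})")
```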
arXiv Detail & Related papers (2021-02-16T11:45:35Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors; a toy search-loop sketch follows.
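The snippet below shows only the shape of such a predictor-guided evolutionary loop, using an invented toy search space and a stub scoring function; it is not FBNetV3's search space or its trained accuracy predictor.

```python
# Toy evolutionary search over architecture-recipe pairs scored by a stub
# predictor. Search space, mutation rule, and predictor are invented
# placeholders; FBNetV3's predictor is a trained neural network.
import random

random.seed(0)
SPACE = {"depth": [2, 3, 4, 5], "width": [32, 64, 128], "lr": [0.05, 0.1, 0.2]}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(cfg):
    child = dict(cfg)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def predicted_accuracy(cfg):           # stub standing in for a trained predictor
    return 0.5 + 0.02 * cfg["depth"] + 5e-4 * cfg["width"] - 0.1 * abs(cfg["lr"] - 0.1)

population = [sample() for _ in range(8)]
for _ in range(20):                    # a few generations of mutate-and-select
    population += [mutate(random.choice(population)) for _ in range(8)]
    population = sorted(population, key=predicted_accuracy, reverse=True)[:8]

print("best architecture-recipe pair:", population[0])
```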
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.