Simultaneous Training of Partially Masked Neural Networks
- URL: http://arxiv.org/abs/2106.08895v1
- Date: Wed, 16 Jun 2021 15:57:51 GMT
- Title: Simultaneous Training of Partially Masked Neural Networks
- Authors: Amirkeivan Mohtashami, Martin Jaggi, Sebastian U. Stich
- Abstract summary: We show that it is possible to train neural networks in such a way that a predefined 'core' subnetwork can be split off from the trained full network with remarkably good performance.
We show that training a Transformer with a low-rank core gives a low-rank model with better performance than training the low-rank model alone.
- Score: 67.19481956584465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For deploying deep learning models to lower end devices, it is necessary to
train less resource-demanding variants of state-of-the-art architectures. This
does not eliminate the need for the more expensive models, as they achieve
higher performance. To avoid training two separate models, we show that it is
possible to train neural networks in such a way that a predefined 'core'
subnetwork can be split off from the trained full network with remarkably good
performance. We extend prior methods, which focused only on core networks of
smaller width, to support arbitrary core network architectures. Our proposed
training scheme alternates between optimizing only the core part of the
network and the full network. The accuracy of the full model remains
comparable, while the core network achieves better performance than when it is
trained in isolation. In particular, we show that training a Transformer with
a low-rank core yields a low-rank model with better performance than training
the low-rank model alone. We analyze
our training scheme theoretically, and show its convergence under assumptions
that are either standard or practically justified. Moreover, we show that the
developed theoretical framework allows analyzing many other partial training
schemes for neural networks.
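To make the alternating scheme concrete, here is a minimal sketch (not the authors' code) of how such partially masked training could look in PyTorch. It assumes the core of a linear layer is a low-rank product U @ V and that the full layer additionally carries a dense part R that is masked out during core-only steps; the names (CoreFullLinear, rank, the alternation period) are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of alternating core-only / full-network optimization.
# Assumption: core = low-rank product U @ V; full layer = U @ V + R.
import torch
import torch.nn as nn

class CoreFullLinear(nn.Module):
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5)
        self.R = nn.Parameter(torch.zeros(d_out, d_in))  # extra full-rank part
        self.core_only = False  # toggled by the training loop

    def forward(self, x):
        W = self.U @ self.V          # predefined low-rank core
        if not self.core_only:
            W = W + self.R           # full network adds the masked-out part
        return x @ W.t()

def train_step(model, batch, loss_fn, opt, step, period=2):
    # Alternate consecutively: even steps update only the core,
    # odd steps update the full network (core plus masked part).
    core_only = (step % period == 0)
    for m in model.modules():
        if isinstance(m, CoreFullLinear):
            m.core_only = core_only
    x, y = batch
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()  # in core-only steps R receives no gradient (it is unused)
    opt.step()
    return loss.item()
```

After training, the low-rank factors (U, V) could be split off and deployed on their own, mirroring the 'core subnetwork' described in the abstract; whether to alternate per step or per epoch is a choice this sketch leaves open.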
Related papers
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- ATOM: Asynchronous Training of Massive Models for Deep Learning in a Decentralized Environment [7.916080032572087]
ATOM is a resilient distributed training framework designed for asynchronous training of vast models in a decentralized setting.
ATOM aims to accommodate a complete LLM on one host (peer) through seamless model swapping, and concurrently trains multiple copies across various peers to optimize training throughput.
Our experiments using different GPT-3 model configurations reveal that, in scenarios with suboptimal network connections, ATOM can enhance training efficiency by up to $20\times$ compared with state-of-the-art decentralized pipeline parallelism approaches.
arXiv Detail & Related papers (2024-03-15T17:43:43Z)
- Subnetwork-to-go: Elastic Neural Network with Dynamic Training and Customizable Inference [16.564868336748503]
We propose a simple way to train a large network and flexibly extract a subnetwork from it given a model size or complexity constraint.
Experiment results on a music source separation model show that our proposed method can effectively improve the separation performance across different subnetwork sizes and complexities.
arXiv Detail & Related papers (2023-12-06T12:40:06Z)
- Accurate Neural Network Pruning Requires Rethinking Sparse Optimization [87.90654868505518]
We show the impact of high sparsity on model training using the standard computer vision and natural language processing sparsity benchmarks.
We provide new approaches for mitigating this issue for both sparse pre-training of vision models and sparse fine-tuning of language models.
arXiv Detail & Related papers (2023-08-03T21:49:14Z)
- Slimmable Networks for Contrastive Self-supervised Learning [69.9454691873866]
Self-supervised learning has made significant progress in pre-training large models, but struggles with small models.
We introduce another one-stage solution to obtain pre-trained small models without the need for extra teachers.
A slimmable network consists of a full network and several weight-sharing sub-networks, which can be pre-trained once to obtain various networks.
arXiv Detail & Related papers (2022-09-30T15:15:05Z)
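As a rough illustration of the weight-sharing idea behind slimmable networks (a sketch under assumptions, not the paper's implementation), each narrower sub-network can reuse the leading slice of one shared weight matrix; the class name and width multipliers below are hypothetical.

```python
# Illustrative slimmable linear layer: sub-networks of different widths share
# the leading rows of one weight matrix, so a single pre-training run yields
# several deployable networks. Names and the slicing rule are assumptions.
import torch
import torch.nn as nn

class SlimmableLinear(nn.Module):
    def __init__(self, d_in, d_out, width_mults=(0.25, 0.5, 1.0)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * d_in ** -0.5)
        self.bias = nn.Parameter(torch.zeros(d_out))
        self.width_mults = width_mults
        self.active_mult = 1.0  # which shared sub-network is currently active

    def forward(self, x):
        d_out = int(self.weight.shape[0] * self.active_mult)
        # A narrower sub-network is just the leading slice of the full layer;
        # a complete implementation would also slice the input dimension of
        # downstream layers.
        return x @ self.weight[:d_out].t() + self.bias[:d_out]
```

Pre-training would then typically sample or enumerate width multipliers at each step so that every weight-sharing sub-network receives gradient updates.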
arXiv Detail & Related papers (2022-09-30T15:15:05Z) - Decentralized Training of Foundation Models in Heterogeneous
Environments [77.47261769795992]
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive.
We present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network.
arXiv Detail & Related papers (2022-06-02T20:19:51Z)
- Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search [60.965024145243596]
One-shot weight sharing methods have recently drawn great attention in neural architecture search due to high efficiency and competitive performance.
To alleviate this problem, we present a simple yet effective architecture distillation method.
We introduce the concept of prioritized path, which refers to the architecture candidates exhibiting superior performance during training.
Since the prioritized paths are changed on the fly depending on their performance and complexity, the final obtained paths are the cream of the crop.
arXiv Detail & Related papers (2020-10-29T17:55:05Z)
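As a loose sketch of what maintaining such prioritized paths could look like (an assumption-laden illustration, not the paper's code), one can keep a small board of the top-scoring sampled architectures, replacing the worst entry on the fly as new candidates are evaluated.

```python
# Hypothetical "prioritized path board": keep the k best architecture
# candidates seen so far, ranked by validation score and, on ties, by
# lower complexity. All names here are illustrative assumptions.
import heapq
import itertools

class PathBoard:
    def __init__(self, k=10):
        self.k = k
        self._tiebreak = itertools.count()
        self._heap = []  # min-heap of (score, -complexity, tiebreak, path)

    def update(self, path, score, complexity):
        entry = (score, -complexity, next(self._tiebreak), path)
        if len(self._heap) < self.k:
            heapq.heappush(self._heap, entry)
        elif entry[:2] > self._heap[0][:2]:
            # Replace the currently worst prioritized path on the fly.
            heapq.heapreplace(self._heap, entry)

    def paths(self):
        # Best-scoring (and, on ties, least complex) paths first.
        return [entry[-1] for entry in sorted(self._heap, reverse=True)]
```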
- Deep Ensembles for Low-Data Transfer Learning [21.578470914935938]
We study different ways of creating ensembles from pre-trained models.
We show that the nature of pre-training itself is a performant source of diversity.
We propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset.
arXiv Detail & Related papers (2020-10-14T07:59:00Z)