Cuttlefish: Low-Rank Model Training without All the Tuning
- URL: http://arxiv.org/abs/2305.02538v2
- Date: Fri, 5 May 2023 16:18:28 GMT
- Title: Cuttlefish: Low-Rank Model Training without All the Tuning
- Authors: Hongyi Wang, Saurabh Agarwal, Pongsakorn U-chupala, Yoshiki Tanaka,
Eric P. Xing, Dimitris Papailiopoulos
- Abstract summary: We introduce Cuttlefish, an automated low-rank training approach.
Cuttlefish switches from full-rank to low-rank training once the stable ranks of all layers have converged.
Our results show that Cuttlefish generates models up to 5.6 times smaller than full-rank models, and attains up to a 1.2 times faster end-to-end training process.
- Score: 55.984294012024755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has shown that training low-rank neural networks can
effectively reduce the total number of trainable parameters without sacrificing
predictive accuracy, resulting in end-to-end speedups. However, low-rank model
training necessitates adjusting several additional factorization
hyperparameters, such as the rank of the factorization at each layer. In this
paper, we tackle this challenge by introducing Cuttlefish, an automated
low-rank training approach that eliminates the need for tuning factorization
hyperparameters. Cuttlefish leverages the observation that after a few epochs
of full-rank training, the stable rank (i.e., an approximation of the true
rank) of each layer stabilizes at a constant value. Cuttlefish switches from
full-rank to low-rank training once the stable ranks of all layers have
converged, setting the dimension of each factorization to its corresponding
stable rank. Our results show that Cuttlefish generates models up to 5.6 times
smaller than full-rank models, and attains up to a 1.2 times faster end-to-end
training process while preserving comparable accuracy. Moreover, Cuttlefish
outperforms state-of-the-art low-rank model training methods and other
prominent baselines. The source code for our implementation can be found at:
https://github.com/hwang595/Cuttlefish.
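To make the mechanism described in the abstract concrete, the following is a minimal NumPy sketch of the two ingredients it mentions: the stable rank of a weight matrix, computed as ||W||_F^2 / ||W||_2^2, and a truncated-SVD factorization whose width is set to that stable rank once it stops changing. This is an illustrative sketch only, not the Cuttlefish implementation (which is in the linked repository); the convergence window, tolerance, and function names are assumptions.

```python
import numpy as np

def stable_rank(weight: np.ndarray) -> float:
    # Stable rank ||W||_F^2 / ||W||_2^2: a smooth surrogate for the true rank.
    s = np.linalg.svd(weight, compute_uv=False)
    return float(np.sum(s ** 2) / (s[0] ** 2))

def has_plateaued(history, window=3, rel_tol=0.05) -> bool:
    # Assumed convergence test: the last `window` estimates differ by less than
    # rel_tol in relative terms (the paper's actual criterion may differ).
    if len(history) < window:
        return False
    recent = history[-window:]
    return (max(recent) - min(recent)) / max(recent) < rel_tol

def factorize_to_stable_rank(weight: np.ndarray):
    # Truncated SVD: W (m x n) ~= U (m x r) @ V (r x n), with r set to the
    # rounded stable rank of W.
    r = max(1, round(stable_rank(weight)))
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    scale = np.sqrt(s[:r])
    return u[:, :r] * scale, vt[:r, :] * scale[:, None]

# Example: a nearly rank-8 weight matrix gets factorized to a small width r.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 512)) + 0.01 * rng.normal(size=(256, 512))
u, v = factorize_to_stable_rank(w)
print(stable_rank(w), u.shape, v.shape)  # stable rank (at most 8 here) and factor shapes
```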
Related papers
- AdaRankGrad: Adaptive Gradient-Rank and Moments for Memory-Efficient LLMs Training and Fine-Tuning [9.51289606759621]
Training and fine-tuning large language models (LLMs) come with challenges related to memory and computational requirements.
Various techniques have been developed to tackle these challenges, such as low-rank adaptation (LoRA).
We introduce a new method inspired by a phenomenon we formally prove: as training progresses, the rank of the estimated gradient gradually decreases.
arXiv Detail & Related papers (2024-10-23T13:53:26Z)
- Full-Rank No More: Low-Rank Weight Training for Modern Speech Recognition Models [46.87216968390808]
This paper investigates the under-explored area of low-rank weight training for large-scale Conformer-based speech recognition models from scratch.
Applying a low-rank structure exclusively to the attention modules can unexpectedly enhance performance.
Feed-forward layers present greater challenges, as they begin to exhibit performance degradation with a moderate 50% rank reduction.
arXiv Detail & Related papers (2024-10-10T09:58:35Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization [5.914653351242832]
We propose two techniques for accelerating low-rank decomposed models without requiring small ranks for the decomposition.
These methods include rank optimization and sequential freezing of layers.
Experiments show that, when combined, these techniques can improve model throughput by up to 60% during training and 37% during inference.
arXiv Detail & Related papers (2023-09-07T16:33:42Z)
- InRank: Incremental Low-Rank Learning [85.6380047359139]
Gradient-based training implicitly regularizes neural networks towards low-rank solutions through a gradual increase of the rank during training. However, existing training algorithms do not exploit this low-rank property to improve computational efficiency.
We design a new training algorithm, Incremental Low-Rank Learning (InRank), which explicitly expresses cumulative weight updates as low-rank matrices.
arXiv Detail & Related papers (2023-06-20T03:03:04Z)
- Slimmable Networks for Contrastive Self-supervised Learning [69.9454691873866]
Self-supervised learning has made significant progress in pre-training large models but struggles with small models.
We introduce a one-stage solution for obtaining pre-trained small models without the need for extra teachers.
A slimmable network consists of a full network and several weight-sharing sub-networks, which can be pre-trained once to obtain various networks.
arXiv Detail & Related papers (2022-09-30T15:15:05Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks offer improved accuracy and a significant reduction in memory consumption. However, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Pufferfish: Communication-efficient Models At No Extra Cost [7.408148824204065]
Pufferfish is a communication- and computation-efficient distributed training framework.
It integrates gradient compression into the model training process by training low-rank, pre-factorized deep networks (a minimal sketch of such a pre-factorized layer appears after this list).
It achieves the same accuracy as state-of-the-art, off-the-shelf deep models.
arXiv Detail & Related papers (2021-03-05T20:46:39Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
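Several of the related papers above, Pufferfish and InRank in particular, train with pre-factorized layers rather than compressing a trained model afterwards. As referenced in the Pufferfish entry, below is a minimal PyTorch sketch of that common building block; the class name, layer sizes, and rank are illustrative assumptions, not any paper's actual API.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    # Replaces a dense (out_features x in_features) weight with two factors,
    # U (out_features x rank) and V (rank x in_features), so y = U(Vx) + b.
    def __init__(self, in_features: int, out_features: int, rank: int, bias: bool = True):
        super().__init__()
        self.v = nn.Linear(in_features, rank, bias=False)   # first factor V
        self.u = nn.Linear(rank, out_features, bias=bias)   # second factor U (carries the bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.u(self.v(x))

# Parameter count drops from in*out to rank*(in + out):
# e.g. 1024*1024 = 1,048,576 dense weights vs. 64*(1024 + 1024) = 131,072 factored ones.
layer = LowRankLinear(1024, 1024, rank=64)
y = layer(torch.randn(8, 1024))
print(y.shape)  # torch.Size([8, 1024])
```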