DIVISION: Memory Efficient Training via Dual Activation Precision
- URL: http://arxiv.org/abs/2208.04187v5
- Date: Mon, 22 May 2023 05:53:28 GMT
- Title: DIVISION: Memory Efficient Training via Dual Activation Precision
- Authors: Guanchu Wang and Zirui Liu and Zhimeng Jiang and Ninghao Liu and Na
Zou and Xia Hu
- Abstract summary: State-of-the-art work combines a search for quantization bit-widths with the training process, which makes the procedure complicated and less transparent.
We propose a simple and effective method to compress DNN training.
Experimental results show that DIVISION achieves better overall performance than state-of-the-art methods, including over 10x compression of activation maps and competitive training throughput, without loss of model accuracy.
- Score: 60.153754740511864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Activation compressed training provides a solution towards reducing the
memory cost of training deep neural networks (DNNs). However, state-of-the-art
work combines a search for quantization bit-widths with the training process,
which makes the procedure complicated and less transparent. To this end, we propose a
simple and effective method to compress DNN training. Our method is motivated
by an instructive observation: DNN backward propagation mainly utilizes the
low-frequency component (LFC) of the activation maps, while the majority of
memory is used to cache the high-frequency component (HFC) during training.
This indicates that the HFC of activation maps is highly redundant and compressible
during DNN training, which inspires our proposed Dual Activation Precision
(DIVISION). During the training, DIVISION preserves the high-precision copy of
LFC and compresses the HFC into a light-weight copy with low numerical
precision. This can significantly reduce the memory cost without negatively
affecting the precision of backward propagation, so that DIVISION maintains
competitive model accuracy. Experimental results show that DIVISION achieves
better overall performance than state-of-the-art methods, including over 10x
compression of activation maps and competitive training throughput, without
loss of model accuracy.
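The abstract does not spell out implementation details, but the core mechanism (cache a high-precision low-frequency copy plus a low-precision high-frequency residual) can be illustrated with a short PyTorch sketch. The choices below, block-wise average pooling as the low-pass filter and uniform 8-bit quantization of the residual, are assumptions for illustration only, not the paper's actual design.

```python
# Hypothetical sketch of dual-precision activation caching (not the official
# DIVISION implementation). Assumptions: average pooling as the low-pass
# filter, uniform 8-bit quantization of the high-frequency residual.
import torch
import torch.nn.functional as F

def compress_activation(x, block=4, bits=8):
    """Split an activation map into a high-precision LFC and a quantized HFC."""
    # Low-frequency component: block-wise average, full precision but
    # block * block fewer elements.
    lfc = F.avg_pool2d(x, kernel_size=block)
    # High-frequency component: residual after upsampling the LFC.
    hfc = x - F.interpolate(lfc, size=x.shape[-2:], mode="nearest")
    # Uniform quantizer: keep the residual at low numerical precision.
    qmax = 2 ** (bits - 1) - 1
    scale = hfc.abs().max() / qmax + 1e-12
    hfc_q = torch.clamp(torch.round(hfc / scale), -qmax - 1, qmax).to(torch.int8)
    return lfc, hfc_q, scale

def decompress_activation(lfc, hfc_q, scale, size):
    """Reconstruct the activation map when it is needed for backward propagation."""
    return F.interpolate(lfc, size=size, mode="nearest") + hfc_q.float() * scale

x = torch.randn(8, 64, 32, 32)                  # a cached activation map
lfc, hfc_q, scale = compress_activation(x)
x_hat = decompress_activation(lfc, hfc_q, scale, x.shape[-2:])
print((x - x_hat).abs().max())                  # small reconstruction error
```

With a 4x4 pooling block and 8-bit residuals this sketch stores roughly 10 bits per element instead of 32, about a 3x reduction; the paper's own filter and bit-width choices are what push the reported compression of activation maps above 10x.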
Related papers
- Towards Memory- and Time-Efficient Backpropagation for Training Spiking
Neural Networks [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient models for neuromorphic computing.
We propose the Spatial Learning Through Time (SLTT) method that can achieve high performance while greatly improving training efficiency.
Our method achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT.
arXiv Detail & Related papers (2023-02-28T05:01:01Z) - A Low-Complexity Approach to Rate-Distortion Optimized Variable Bit-Rate
Compression for Split DNN Computing [5.3221129103999125]
Split computing has emerged as a recent paradigm for implementing DNN-based AI workloads.
We present an approach that addresses the challenge of optimizing the rate-accuracy-complexity trade-off.
Our approach is remarkably lightweight during both training and inference, highly effective, and achieves excellent rate-distortion performance.
arXiv Detail & Related papers (2022-08-24T15:02:11Z) - Online Convolutional Re-parameterization [51.97831675242173]
We present Online Convolutional Re-parameterization (OREPA), a two-stage pipeline that aims to reduce the large training overhead by squeezing the complex training-time block into a single convolution.
Compared with state-of-the-art re-param models, OREPA reduces the training-time memory cost by about 70% and accelerates training by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
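As a rough illustration of why a complex training-time block can be squeezed into a single convolution, the sketch below folds two parallel 3x3 branches into one equivalent layer using the linearity of convolution. It is a generic re-parameterization example under that assumption, not OREPA's actual block design or its online squeezing procedure.

```python
# Generic structural re-parameterization sketch (not OREPA's actual blocks):
# two parallel 3x3 convolutions summed at the output are equivalent to one
# 3x3 convolution whose weights and biases are the element-wise sums.
import torch
import torch.nn as nn

conv_a = nn.Conv2d(16, 32, kernel_size=3, padding=1)
conv_b = nn.Conv2d(16, 32, kernel_size=3, padding=1)
merged = nn.Conv2d(16, 32, kernel_size=3, padding=1)

with torch.no_grad():
    merged.weight.copy_(conv_a.weight + conv_b.weight)
    merged.bias.copy_(conv_a.bias + conv_b.bias)

x = torch.randn(1, 16, 8, 8)
assert torch.allclose(conv_a(x) + conv_b(x), merged(x), atol=1e-5)
```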
arXiv Detail & Related papers (2022-04-02T09:50:19Z) - Mesa: A Memory-saving Training Framework for Transformers [58.78933015299703]
We present Mesa, a memory-saving training framework for Transformers.
Mesa uses exact activations during the forward pass while storing a low-precision version of the activations to reduce memory consumption during training.
Experiments on ImageNet, CIFAR-100 and ADE20K demonstrate that Mesa can cut the memory footprint during training roughly in half.
arXiv Detail & Related papers (2021-11-22T11:23:01Z) - COMET: A Novel Memory-Efficient Deep Learning Training Framework by
Using Error-Bounded Lossy Compression [8.080129426746288]
Training wide and deep neural networks (DNNs) requires large amounts of memory and other storage resources.
We propose a memory-efficient CNN training framework (called COMET) that leverages error-bounded lossy compression.
Our framework can significantly reduce the training memory consumption by up to 13.5X over the baseline training and 1.8X over another state-of-the-art compression-based framework.
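The defining property of an error-bounded lossy compressor is that every reconstructed value stays within a user-set bound of the original. The toy quantizer below illustrates only that guarantee; COMET's actual compressor is more sophisticated (real error-bounded compressors add prediction and entropy coding to reach high ratios), so treat this as a conceptual sketch.

```python
# Toy error-bounded quantizer (not COMET's actual compressor): the
# reconstruction error is guaranteed to stay within `eps`.
import torch

def compress_error_bounded(x, eps=1e-2):
    codes = torch.round(x / (2 * eps)).to(torch.int32)   # integer codes
    return codes, eps

def decompress_error_bounded(codes, eps):
    return codes.float() * (2 * eps)

x = torch.randn(64, 64)                                  # a cached activation
codes, eps = compress_error_bounded(x)
x_hat = decompress_error_bounded(codes, eps)
assert (x - x_hat).abs().max().item() <= eps + 1e-7      # bound holds
```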
arXiv Detail & Related papers (2021-11-18T07:43:45Z) - ActNN: Reducing Training Memory Footprint via 2-Bit Activation
Compressed Training [68.63354877166756]
ActNN is a memory-efficient training framework that stores randomly quantized activations for backward propagation.
ActNN reduces the memory footprint of activations by 12x and enables training with a 6.6x to 14x larger batch size.
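A minimal sketch of what "randomly quantized activations" can look like: per-tensor 2-bit quantization with unbiased stochastic rounding. ActNN's real quantizer is finer-grained and more elaborate, so this is illustrative only, and the 2-bit codes are stored in uint8 here for simplicity (packing four codes per byte is what yields the actual memory saving).

```python
# Illustrative 2-bit stochastic quantization of an activation tensor
# (not ActNN's actual quantizer).
import torch

def quantize_2bit(x):
    """Map values onto a 4-level grid with unbiased stochastic rounding."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 3 + 1e-12               # 2 bits -> 4 levels (0..3)
    normalized = (x - lo) / scale
    q = torch.floor(normalized + torch.rand_like(normalized)).clamp_(0, 3)
    return q.to(torch.uint8), lo, scale         # 2-bit codes, stored in uint8

def dequantize_2bit(q, lo, scale):
    return q.float() * scale + lo

x = torch.relu(torch.randn(128, 64))            # a cached activation
q, lo, scale = quantize_2bit(x)
x_hat = dequantize_2bit(q, lo, scale)           # stands in for x during backward
```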
arXiv Detail & Related papers (2021-04-29T05:50:54Z) - Improving Computational Efficiency in Visual Reinforcement Learning via
Stored Embeddings [89.63764845984076]
We present Stored Embeddings for Efficient Reinforcement Learning (SEER).
SEER is a simple modification of existing off-policy deep reinforcement learning methods.
We show that SEER does not degrade the performance of RL agents while significantly saving computation and memory.
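A rough sketch of the stored-embeddings idea: once the lower encoder layers are frozen, their compact outputs can be cached for replay instead of raw frames, saving both memory and repeated forward computation. The encoder shape and the plain-list buffer below are assumptions for illustration, not SEER's actual architecture or replay buffer.

```python
# Illustrative stored-embeddings sketch (not SEER's actual implementation):
# freeze the lower encoder layers and cache their outputs instead of raw frames.
import torch
import torch.nn as nn

encoder_lower = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=4, stride=2), nn.ReLU(),
).eval()
for p in encoder_lower.parameters():
    p.requires_grad_(False)                     # frozen after early training

replay_buffer = []                              # stands in for a real RL buffer

def store_transition(frame):
    with torch.no_grad():
        z = encoder_lower(frame)                # compact latent embedding
    replay_buffer.append(z)                     # cached once, reused for replay

store_transition(torch.randn(1, 3, 84, 84))
print(replay_buffer[0].shape)                   # torch.Size([1, 32, 9, 9])
```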
arXiv Detail & Related papers (2021-03-04T08:14:10Z) - A Novel Memory-Efficient Deep Learning Training Framework via
Error-Bounded Lossy Compression [6.069852296107781]
We propose a memory-driven, high-performance DNN training framework that leverages error-bounded lossy compression.
Our framework can significantly reduce training memory consumption, by up to 13.5x over baseline training and 1.8x over a state-of-the-art compression-based framework.
arXiv Detail & Related papers (2020-11-18T00:47:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.