Reducing the Computational Cost of Deep Generative Models with Binary
Neural Networks
- URL: http://arxiv.org/abs/2010.13476v2
- Date: Mon, 3 May 2021 19:33:38 GMT
- Title: Reducing the Computational Cost of Deep Generative Models with Binary
Neural Networks
- Authors: Thomas Bird, Friso H. Kingma, David Barber
- Abstract summary: We show for the first time that we can successfully train generative models which utilize binary neural networks.
This reduces the computational cost of the models massively.
We demonstrate that two state-of-the-art deep generative models, the ResNet VAE and Flow++ models, can be binarized effectively using these techniques.
- Score: 25.084146613277973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models provide a powerful set of tools to understand
real-world data. But as these models improve, they increase in size and
complexity, so their computational cost in memory and execution time grows.
Using binary weights in neural networks is one method which has shown promise
in reducing this cost. However, whether binary neural networks can be used in
generative models is an open problem. In this work we show, for the first time,
that we can successfully train generative models which utilize binary neural
networks. This reduces the computational cost of the models massively. We
develop a new class of binary weight normalization, and provide insights for
architecture designs of these binarized generative models. We demonstrate that
two state-of-the-art deep generative models, the ResNet VAE and Flow++ models,
can be binarized effectively using these techniques. We train binary models
that achieve loss values close to those of the regular models but are 90%-94%
smaller in size, and also allow significant speed-ups in execution time.
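Since the abstract only sketches the idea, here is a minimal NumPy illustration of how a linear layer with binarized weights might look: real-valued latent weights are kept for training, and the forward pass uses sign() plus a per-unit scale. The scaling shown is the generic XNOR-Net-style choice, used purely for illustration; it is not the paper's binary weight normalization, and the helper names are made up.

```python
import numpy as np

def binarize_weights(w):
    """Binarize a real-valued weight matrix to {-1, +1} and rescale.

    Generic XNOR-Net-style recipe (sign + per-output-unit scale); the
    paper's own "binary weight normalization" is a different construction.
    """
    w_bin = np.sign(w)
    w_bin[w_bin == 0] = 1.0                        # avoid zeros from sign()
    scale = np.abs(w).mean(axis=1, keepdims=True)  # one scale per output unit
    return scale * w_bin

def binary_linear(x, w_real, bias):
    """Forward pass of a linear layer using binarized weights.

    At inference the +/-1 weights can be packed into single bits plus a
    shared scale, which is where size reductions of the 90%-94% range
    quoted in the abstract come from.
    """
    return x @ binarize_weights(w_real).T + bias

# Tiny usage example with random data.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))    # batch of 4 inputs
w = rng.normal(size=(8, 16))    # real-valued latent weights
b = np.zeros(8)
print(binary_linear(x, w, b).shape)  # (4, 8)
```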
Related papers
- DeepWeightFlow: Re-Basined Flow Matching for Generating Neural Network Weights [10.97849774373198]
We present DeepWeightFlow, a Flow Matching model that operates directly in weight space to generate diverse and high-accuracy neural network weights.
The neural networks generated by DeepWeightFlow do not require fine-tuning to perform well and can scale to large networks.
arXiv Detail & Related papers (2026-01-08T15:56:28Z) - Generative Modeling of Weights: Generalization or Memorization? [5.365909921563036]
Generative models have been explored as a way to synthesize effective neural network weights.
In this work, we examine four methods on their ability to generate novel model weights.
We find that these methods synthesize weights largely by memorization.
arXiv Detail & Related papers (2025-06-09T17:58:36Z) - A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z) - Dr$^2$Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning [81.0108753452546]
We propose Dynamic Reversible Dual-Residual Networks, or Dr$^2$Net, to finetune a pretrained model with substantially reduced memory consumption.
Dr$^2$Net contains two types of residual connections: one maintains the residual structure of the pretrained model, and the other makes the network reversible.
We show that Dr$^2$Net can reach performance comparable to conventional finetuning but with significantly less memory usage.
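For readers unfamiliar with reversible networks, the following generic RevNet-style sketch (in NumPy, with made-up helper names) shows why reversibility saves memory: a block's inputs can be recomputed exactly from its outputs, so intermediate activations need not be stored during finetuning. Dr$^2$Net's actual dual-residual formulation differs in detail.

```python
import numpy as np

def f(x):  # stand-in for any sub-network branch
    return np.tanh(x)

def g(x):  # second stand-in branch
    return np.tanh(x)

def reversible_forward(x1, x2):
    """Generic reversible couple: outputs determine the inputs exactly."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    """Recover the inputs from the outputs, so activations need not be cached."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

x1, x2 = np.ones(4), np.full(4, 0.5)
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
print(np.allclose(r1, x1) and np.allclose(r2, x2))  # True
```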
arXiv Detail & Related papers (2024-01-08T18:59:31Z) - Optimizing Dense Feed-Forward Neural Networks [0.0]
We propose a novel method for constructing feed-forward neural networks based on pruning and transfer learning.
Our approach can reduce the number of parameters by more than 70%.
We also evaluate the benefit of transfer learning by comparing the refined model against the original network trained from scratch.
arXiv Detail & Related papers (2023-12-16T23:23:16Z) - NAR-Former: Neural Architecture Representation Learning towards Holistic
Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experimental results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z) - Dimensionality Reduction in Deep Learning via Kronecker Multi-layer
Architectures [4.836352379142503]
We propose a new deep learning architecture based on fast matrix multiplication of a Kronecker product decomposition.
We show that this architecture allows a neural network to be trained and implemented with a significant reduction in computational time and resources.
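As a rough, hypothetical illustration of the underlying trick (not the paper's architecture), the NumPy sketch below applies a weight matrix factored as a Kronecker product A ⊗ B to a vector without ever forming the full matrix, using the standard identity (A ⊗ B) vec(X) = vec(B X A^T); replacing one huge matmul with two small ones is where the savings in time and memory come from.

```python
import numpy as np

def kron_linear(x, a, b):
    """Compute y = (A kron B) @ x without materializing the Kronecker product.

    Uses the identity (A kron B) vec(X) = vec(B X A^T), where x = vec(X)
    is read in column-major (Fortran) order and X has shape (q, n).
    """
    m, n = a.shape                       # A: (m, n)
    p, q = b.shape                       # B: (p, q)
    X = x.reshape(q, n, order="F")       # un-vectorize the input
    Y = b @ X @ a.T                      # two small matmuls instead of one huge one
    return Y.reshape(m * p, order="F")   # re-vectorize the output

# Sanity check against the dense Kronecker product on a small example.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
b = rng.normal(size=(3, 6))
x = rng.normal(size=8 * 6)
print(np.allclose(kron_linear(x, a, b), np.kron(a, b) @ x))  # True
```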
arXiv Detail & Related papers (2022-04-08T19:54:52Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Training Deep Neural Networks with Constrained Learning Parameters [4.917317902787792]
A significant portion of deep learning tasks is expected to run on edge computing systems.
We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA).
CoNNTrA trains deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets.
Our results indicate that CoNNTrA models use 32x less memory and have errors on par with the Backpropagation models.
arXiv Detail & Related papers (2020-09-01T16:20:11Z) - Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we adapt the idea of group convolution to design efficient 1-bit Convolutional Neural Networks (CNNs).
Our objective is to arrive at a tiny yet efficient binary neural architecture by exploring the best group convolution candidates.
arXiv Detail & Related papers (2020-05-13T13:25:51Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.