Sparse Linear Networks with a Fixed Butterfly Structure: Theory and
Practice
- URL: http://arxiv.org/abs/2007.08864v2
- Date: Sun, 4 Jul 2021 11:12:29 GMT
- Title: Sparse Linear Networks with a Fixed Butterfly Structure: Theory and
Practice
- Authors: Nir Ailon, Omer Leibovich, Vineet Nair
- Abstract summary: We propose to replace a dense linear layer in any neural network by an architecture based on the butterfly network.
In a wide variety of experiments, including supervised prediction on both NLP and vision data, we show that this not only matches, and at times outperforms, existing well-known architectures, but also offers faster training and prediction in deployment.
- Score: 4.3400407844814985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A butterfly network consists of logarithmically many layers, each with a
linear number of non-zero weights (pre-specified). The fast
Johnson-Lindenstrauss transform (FJLT) can be represented as a butterfly
network followed by a projection onto a random subset of the coordinates.
Moreover, a random matrix based on the FJLT approximates, with high probability,
the action of any matrix on a vector. Motivated by these facts, we propose to
replace a dense linear layer in any neural network by an architecture based on
the butterfly network. The proposed architecture reduces the quadratic number
of weights required in a standard dense layer to nearly linear, with little
compromise in the expressibility of the resulting operator. In a wide variety
of experiments, including supervised prediction on both NLP and vision data, we
show that this not only produces results that match, and at times outperform,
existing well-known architectures, but also
offers faster training and prediction in deployment. To understand the
optimization problems posed by neural networks with a butterfly network, we
also study the optimization landscape of the encoder-decoder network, where the
encoder is replaced by a butterfly network followed by a dense linear layer in
smaller dimension. A theoretical result presented in the paper explains why the
training speed and outcome are not compromised by our proposed approach.
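
To make the parameter-count argument concrete, below is a minimal sketch of a butterfly-structured linear layer in PyTorch. This is not the authors' implementation: the class name `ButterflyLinear` and its design choices (learned 2x2 blocks per factor, power-of-two input dimension) are illustrative assumptions. For input dimension n = 2^k, the layer stacks k sparse factors, each holding 2n nonzero weights, for O(n log n) parameters in total versus the O(n^2) of a dense layer.

```python
import torch
import torch.nn as nn

class ButterflyLinear(nn.Module):
    """Illustrative butterfly-structured linear layer (not the paper's code).

    For input dimension n = 2^k, the layer applies k sparse factors.
    Factor j mixes coordinate pairs at stride 2^j through learned 2x2
    blocks, so each factor holds 2n nonzero weights and the whole layer
    has O(n log n) parameters instead of the O(n^2) of nn.Linear.
    """

    def __init__(self, n: int):
        super().__init__()
        assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
        self.n = n
        self.k = n.bit_length() - 1  # log2(n) butterfly factors
        # One set of n/2 learned 2x2 blocks per factor.
        self.blocks = nn.ParameterList(
            [nn.Parameter(torch.randn(n // 2, 2, 2) / 2 ** 0.5)
             for _ in range(self.k)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch = x.shape[0]
        for j, w in enumerate(self.blocks):
            stride = 1 << j
            groups = self.n // (2 * stride)
            # Pair coordinate i with coordinate i + stride inside each
            # contiguous group of 2 * stride coordinates.
            xv = x.view(batch, groups, 2, stride)
            wv = w.view(groups, stride, 2, 2)
            x = torch.einsum("gsah,bghs->bgas", wv, xv).reshape(batch, self.n)
        return x
```

Under these assumptions, `ButterflyLinear(256)` holds 2 * 256 * 8 = 4096 nonzero weights, versus 65,536 in a dense 256x256 layer. The abstract's FJLT view and the encoder-decoder setup it studies can be mimicked roughly as:

```python
layer = ButterflyLinear(256)
y = layer(torch.randn(32, 256))
z = y[:, torch.randperm(256)[:64]]  # FJLT-style: keep a random coordinate subset

# Encoder from the abstract: a butterfly network followed by a dense
# linear layer mapping into a smaller dimension.
encoder = nn.Sequential(ButterflyLinear(256), nn.Linear(256, 64))
```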