Add a SideNet to your MainNet
- URL: http://arxiv.org/abs/2007.13512v1
- Date: Tue, 14 Jul 2020 19:25:32 GMT
- Title: Add a SideNet to your MainNet
- Authors: Adrien Morisot
- Abstract summary: We develop a method for adaptive network complexity by attaching a small classification layer, which we call SideNet, to a large pretrained network, which we call MainNet.
Given an input, the SideNet returns a classification if its confidence level, obtained via softmax, surpasses a user-determined threshold, and only passes the input along to the large MainNet for further processing if its confidence is too low.
Experimental results show that simple single-hidden-layer perceptron SideNets added onto pretrained ResNet and BERT MainNets allow for substantial decreases in compute with minimal drops in performance on image and text classification tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the performance and popularity of deep neural networks have increased, so
too has their computational cost. There are many effective techniques for
reducing a network's computational footprint (quantisation, pruning, knowledge
distillation), but these lead to models whose computational cost is the same
regardless of their input. Our human reaction times vary with the complexity of
the tasks we perform: easier tasks (e.g. telling apart dogs from boats) are
executed much faster than harder ones (e.g. telling apart two similar looking
breeds of dogs). Driven by this observation, we develop a method for adaptive
network complexity by attaching a small classification layer, which we call
SideNet, to a large pretrained network, which we call MainNet. Given an input,
the SideNet returns a classification if its confidence level, obtained via
softmax, surpasses a user-determined threshold, and only passes it along to the
large MainNet for further processing if its confidence is too low. This allows
us to flexibly trade off the network's performance with its computational cost.
Experimental results show that simple single-hidden-layer perceptron SideNets
added onto pretrained ResNet and BERT MainNets allow for substantial decreases
in compute with minimal drops in performance on image and text classification
tasks. We also highlight three other desirable properties of our method: the
classifications obtained by SideNets are calibrated, the approach is
complementary to other compute-reduction techniques, and it enables easy
exploration of the compute-accuracy space.
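The early-exit rule described in the abstract is straightforward to express in code. Below is a minimal PyTorch-style sketch, assuming a single-hidden-layer perceptron SideNet and a MainNet split into the layers before and after the attachment point; the feature dimensions, the 0.9 threshold, and the batch-size-1 routing are illustrative assumptions, not the paper's exact configuration.
```python
import torch
import torch.nn as nn

class SideNet(nn.Module):
    """Single-hidden-layer perceptron that classifies from intermediate features."""
    def __init__(self, feat_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(feats)


@torch.no_grad()
def classify(x, early_layers, sidenet, late_layers, threshold=0.9):
    """Early-exit inference for a single example (batch size 1).

    early_layers : MainNet layers up to the SideNet's attachment point
    late_layers  : remaining (expensive) MainNet layers plus its classifier head
    threshold    : user-determined softmax confidence cut-off (illustrative value)
    """
    feats = early_layers(x)                                   # always computed
    probs = torch.softmax(sidenet(feats.flatten(1)), dim=-1)
    confidence, side_pred = probs.max(dim=-1)
    if confidence.item() >= threshold:
        return side_pred, True                                # SideNet answers; late layers skipped
    return late_layers(feats).argmax(dim=-1), False           # low confidence: defer to MainNet
```
Raising the threshold routes more inputs through the full MainNet (higher accuracy, more compute); lowering it lets the SideNet answer more often, which is the compute-accuracy trade-off the abstract refers to.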
Related papers
- A Generalization of Continuous Relaxation in Structured Pruning [0.3277163122167434]
Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks.
We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal.
The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations.
arXiv Detail & Related papers (2023-08-28T14:19:13Z) - Deep Learning without Shortcuts: Shaping the Kernel with Tailored
Rectifiers [83.74380713308605]
We develop a new type of transformation that is fully compatible with a variant of ReLUs -- Leaky ReLUs.
We show in experiments that our method, which introduces negligible extra computational cost, achieves validation accuracies with deep vanilla networks that are competitive with ResNets.
arXiv Detail & Related papers (2022-03-15T17:49:08Z) - Adder Neural Networks [75.54239599016535]
We present adder networks (AdderNets) to trade massive multiplications in deep neural networks for much cheaper additions.
In AdderNets, we take the $\ell_p$-norm distance between filters and input features as the output response.
We show that the proposed AdderNets can achieve 75.7% Top-1 accuracy and 92.3% Top-5 accuracy using ResNet-50 on the ImageNet dataset.
arXiv Detail & Related papers (2021-05-29T04:02:51Z) - Greedy Optimization Provably Wins the Lottery: Logarithmic Number of
Winning Tickets is Enough [19.19644194006565]
We show how much we can prune a neural network given a specified tolerance of accuracy drop.
The proposed method has the guarantee that the discrepancy between the pruned network and the original network decays at an exponentially fast rate.
Empirically, our method improves on prior art in pruning various network architectures, including ResNet and MobileNetV2/V3, on ImageNet.
arXiv Detail & Related papers (2020-10-29T22:06:31Z) - Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets [65.28292822614418]
The giant formula for simultaneously enlarging the resolution, depth and width provides us with a Rubik's cube for neural networks.
This paper aims to explore the twisting rules for obtaining deep neural networks with minimum model sizes and computational costs.
arXiv Detail & Related papers (2020-10-28T08:49:45Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Energy-efficient and Robust Cumulative Training with Net2Net
Transformation [2.4283778735260686]
We propose a cumulative training strategy that achieves training computational efficiency without incurring large accuracy loss.
We achieve this by first training a small network on a small subset of the original dataset, and then gradually expanding the network.
Experiments demonstrate that compared with training from scratch, cumulative training yields a 2x reduction in computational complexity.
arXiv Detail & Related papers (2020-03-02T21:44:47Z) - AdderNet: Do We Really Need Multiplications in Deep Learning? [159.174891462064]
We present adder networks (AdderNets) to trade massive multiplications in deep neural networks for much cheaper additions to reduce computation costs.
We develop a special back-propagation approach for AdderNets by investigating the full-precision gradient.
As a result, the proposed AdderNets can achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset. (A minimal sketch of the adder operation appears after this list.)
arXiv Detail & Related papers (2019-12-31T06:56:47Z)
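The "$\ell_p$-norm distance between filters and input features" in the two AdderNet entries above can be read as replacing the multiply-accumulate of an ordinary convolution with a negated distance. The sketch below is a didactic, unfold-based reading of that output-response definition only, assuming p = 1 and ignoring AdderNet's special full-precision backward pass and optimized kernels.
```python
import torch
import torch.nn.functional as F

def adder_conv2d(x, filters, stride=1, padding=0):
    """Adder-style layer: the output response is the negative L1 distance between
    each filter and each input patch, instead of a multiply-accumulate.
    x: (N, C, H, W), filters: (C_out, C, kH, kW). Didactic sketch only."""
    n, _, h, w = x.shape
    c_out, _, kh, kw = filters.shape
    # Sliding patches: (N, C*kH*kW, L), with L = number of output positions.
    patches = F.unfold(x, kernel_size=(kh, kw), stride=stride, padding=padding)
    flat_filters = filters.view(c_out, -1)                        # (C_out, C*kH*kW)
    # |patch - filter| summed over the patch dimension, then negated.
    diff = patches.unsqueeze(1) - flat_filters[None, :, :, None]  # (N, C_out, C*kH*kW, L)
    out = -diff.abs().sum(dim=2)                                  # (N, C_out, L)
    h_out = (h + 2 * padding - kh) // stride + 1
    w_out = (w + 2 * padding - kw) // stride + 1
    return out.view(n, c_out, h_out, w_out)
```
Because the response involves only subtractions, absolute values, and additions, it avoids the multiplications of a standard convolution, which is the compute saving those papers target.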
This list is automatically generated from the titles and abstracts of the papers on this site.