MUXConv: Information Multiplexing in Convolutional Neural Networks
- URL: http://arxiv.org/abs/2003.13880v2
- Date: Tue, 7 Apr 2020 17:27:20 GMT
- Title: MUXConv: Information Multiplexing in Convolutional Neural Networks
- Authors: Zhichao Lu and Kalyanmoy Deb and Vishnu Naresh Boddeti
- Abstract summary: MUXConv is designed to increase the flow of information by progressively multiplexing channel and spatial information in the network.
On ImageNet, the resulting models, dubbed MUXNets, match the performance (75.3% top-1 accuracy) and multiply-add operations (218M) of MobileNetV3.
MUXNet also performs well under transfer learning and when adapted to object detection.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks have witnessed remarkable improvements in
computational efficiency in recent years. A key driving force has been the idea
of trading-off model expressivity and efficiency through a combination of
$1\times 1$ and depth-wise separable convolutions in lieu of a standard
convolutional layer. The price of this efficiency, however, is sub-optimal
flow of information across space and channels in the network. To overcome this
limitation, we present MUXConv, a layer that is designed to increase the flow
of information by progressively multiplexing channel and spatial information in
the network, while mitigating computational complexity. Furthermore, to
demonstrate the effectiveness of MUXConv, we integrate it within an efficient
multi-objective evolutionary algorithm to search for the optimal model
hyper-parameters while simultaneously optimizing accuracy, compactness, and
computational efficiency. On ImageNet, the resulting models, dubbed MUXNets,
match the performance (75.3% top-1 accuracy) and multiply-add operations (218M)
of MobileNetV3 while being 1.6$\times$ more compact, and outperform other
mobile models in all three criteria. MUXNet also performs well under
transfer learning and when adapted to object detection. On the ChestX-Ray 14
benchmark, its accuracy is comparable to the state-of-the-art while being
$3.3\times$ more compact and $14\times$ more efficient. Similarly, detection on
PASCAL VOC 2007 is 1.2% more accurate, 28% faster and 6% more compact compared
to MobileNetV2. Code is available from
https://github.com/human-analysis/MUXConv
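The efficiency trade-off the abstract describes comes from replacing one standard convolution with a depth-wise convolution followed by a $1\times 1$ point-wise convolution. A minimal sketch of the parameter-count arithmetic (biases omitted; the layer sizes are illustrative, not taken from the paper):

```python
def standard_conv_params(c_in, c_out, k):
    # Each of the c_out filters spans all c_in channels with a k x k kernel.
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    # Depth-wise: one k x k kernel per input channel, then a 1x1
    # point-wise convolution to mix information across channels.
    depthwise = k * k * c_in
    pointwise = 1 * 1 * c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 64, 64, 3
    std = standard_conv_params(c_in, c_out, k)   # 36864
    sep = separable_conv_params(c_in, c_out, k)  # 4672
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

The roughly $8\times$ parameter saving in this example is exactly why the factorization restricts how information mixes across space and channels, which is the limitation MUXConv targets.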
Related papers
- TransXNet: Learning Both Global and Local Dynamics with a Dual Dynamic Token Mixer for Visual Recognition (arXiv, 2023-10-30)
  We propose a lightweight Dual Dynamic Token Mixer (D-Mixer) that aggregates global information and local details in an input-dependent way. We use D-Mixer as the basic building block to design TransXNet, a novel hybrid CNN-Transformer vision backbone network. On ImageNet-1K image classification, TransXNet-T surpasses Swin-T by 0.3% in top-1 accuracy while requiring less than half the computational cost.
- EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications (arXiv, 2022-06-21)
  We introduce a split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups. Our EdgeNeXt model achieves 71.2% top-1 accuracy on ImageNet-1K with 1.3M parameters, and 79.4% with 5.6M parameters.
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers (arXiv, 2021-09-21)
  We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs of diverse difficulty levels. We present the dynamic slimmable network (DS-Net) and dynamic slice-able network (DS-Net++), which input-dependently adjust the filter numbers of CNNs and multiple dimensions in both CNNs and transformers.
- EfficientNetV2: Smaller Models and Faster Training (arXiv, 2021-04-01)
  This paper introduces EfficientNetV2, a new family of convolutional networks with faster training speed and better parameter efficiency than previous models. We use a combination of training-aware neural architecture search and scaling to jointly optimize training speed and parameter efficiency. Our experiments show that EfficientNetV2 models train much faster than state-of-the-art models while being up to 6.8$\times$ smaller.
- FastSal: a Computationally Efficient Network for Visual Saliency Prediction (arXiv, 2020-08-25)
  We show that MobileNetV2 makes an excellent backbone for a visual saliency model and can be effective even without a complex decoder. We also show that knowledge transfer from a more computationally expensive model like DeepGaze II can be achieved via pseudo-labelling an unlabelled dataset.
- Fully Dynamic Inference with Deep Neural Networks (arXiv, 2020-07-29)
  Two compact networks, called Layer-Net (L-Net) and Channel-Net (C-Net), predict on a per-instance basis which layers or filters/channels are redundant and should therefore be skipped. On CIFAR-10, LC-Net yields up to 11.9$\times$ fewer floating-point operations (FLOPs) and up to 3.3% higher accuracy than other dynamic inference methods. On ImageNet, LC-Net achieves up to 1.4$\times$ fewer FLOPs and up to 4.6% higher top-1 accuracy than the other methods.
- DyNet: Dynamic Convolution for Accelerating Convolutional Neural Networks (arXiv, 2020-04-22)
  We propose a novel dynamic convolution method that adaptively generates convolution kernels based on image content. Built on the MobileNetV3-Small/Large architectures, DyNet achieves 70.3%/77.1% top-1 accuracy on ImageNet, an improvement of 2.9/1.9 percentage points.
- AANet: Adaptive Aggregation Network for Efficient Stereo Matching (arXiv, 2020-04-20)
  Current state-of-the-art stereo models are mostly based on costly 3D convolutions. We propose a sparse-points-based intra-scale cost aggregation method to alleviate the edge-fattening issue, and approximate the traditional cross-scale cost aggregation algorithm with neural network layers to handle large textureless regions.
- FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions (arXiv, 2020-04-12)
  Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. We propose a memory- and computation-efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS.
- XSepConv: Extremely Separated Convolution (arXiv, 2020-02-27)
  We propose a novel extremely separated convolutional block (XSepConv), which fuses spatially separable convolutions into depthwise convolution to reduce both the computational cost and parameter size of large kernels. XSepConv is designed as an efficient alternative to vanilla depthwise convolution with large kernel sizes.
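The spatial separability that XSepConv exploits has a simple algebraic basis: if a $k\times k$ kernel is the outer product of a $k\times 1$ column and a $1\times k$ row, convolving with it equals applying the two 1D convolutions in sequence, at $2k$ instead of $k^2$ weights per channel. A toy single-channel NumPy sketch of that identity (illustrative only, not the paper's implementation):

```python
import numpy as np

def correlate2d_valid(img, ker):
    # Plain "valid" cross-correlation of a 2D image with a 2D kernel.
    kh, kw = ker.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

rng = np.random.default_rng(0)
k = 5
v = rng.standard_normal((k, 1))   # k x 1 vertical kernel
h = rng.standard_normal((1, k))   # 1 x k horizontal kernel
full = v @ h                      # rank-1 k x k kernel (25 weights)
img = rng.standard_normal((12, 12))

once = correlate2d_valid(img, full)                       # one k x k pass
twice = correlate2d_valid(correlate2d_valid(img, v), h)   # two 1D passes
print(np.allclose(once, twice))  # the two results agree
```

The identity only holds exactly for rank-1 kernels, which is why such factorizations trade some expressivity for the large-kernel savings.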
This list is automatically generated from the titles and abstracts of the papers in this site.