XnODR and XnIDR: Two Accurate and Fast Fully Connected Layers For
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2111.10854v3
- Date: Wed, 20 Sep 2023 01:12:51 GMT
- Title: XnODR and XnIDR: Two Accurate and Fast Fully Connected Layers For
Convolutional Neural Networks
- Authors: Jian Sun, Ali Pourramezan Fard, and Mohammad H. Mahoor
- Abstract summary: Capsule Network is powerful at defining the positional relationship between features in deep neural networks for visual recognition tasks.
The bottleneck is in the computational complexity of the Dynamic Routing mechanism used between the capsules.
XnODR and XnIDR help networks achieve high accuracy with lower FLOPs and fewer parameters.
- Score: 43.85390451313721
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Capsule Network is powerful at defining the positional relationship between
features in deep neural networks for visual recognition tasks, but it is
computationally expensive and not suitable for running on mobile devices. The
bottleneck is in the computational complexity of the Dynamic Routing mechanism
used between the capsules. On the other hand, XNOR-Net is fast and
computationally efficient, though it suffers from low accuracy due to
information loss in the binarization process. To address the computational
burdens of the Dynamic Routing mechanism, this paper proposes new Fully
Connected (FC) layers by xnorizing the linear projection outside or inside the
Dynamic Routing within the CapsFC layer. Specifically, our proposed FC layers
have two versions, XnODR (Xnorize the Linear Projection Outside Dynamic
Routing) and XnIDR (Xnorize the Linear Projection Inside Dynamic Routing). To
test the generalization of both XnODR and XnIDR, we insert them into two
different networks, MobileNetV2 and ResNet-50. Our experiments on three
datasets, MNIST, CIFAR-10, and MultiMNIST, validate their effectiveness. The
results demonstrate that both XnODR and XnIDR help networks achieve high
accuracy with lower FLOPs and fewer parameters (e.g., 96.14% accuracy with
2.99M parameters and 311.74M FLOPs on CIFAR-10).
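The abstract describes xnorizing the linear projection that produces the capsule prediction vectors fed to Dynamic Routing in the CapsFC layer. Below is a minimal, illustrative PyTorch sketch of the XnODR idea as read from the abstract, not the authors' implementation: the names (xnor_project, XnODRLayer, squash, num_routing) are hypothetical, the scaling factors follow the general XNOR-Net recipe, and gradient handling (e.g., a straight-through estimator for the sign function) is omitted.

```python
import torch
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    # Capsule squashing non-linearity: shrinks short vectors toward 0
    # and long vectors toward unit length.
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


def xnor_project(u, W):
    # XNOR-style capsule projection: replace full-precision matrix-vector
    # products with sign(W) and sign(u), then rescale with mean absolute
    # values (XNOR-Net recipe). Forward pass only; training would need a
    # straight-through estimator for the sign() gradients.
    alpha = W.abs().mean()                       # scalar weight scale
    beta = u.abs().mean(dim=-1, keepdim=True)    # (B, in_caps, 1) input scale
    u_hat = torch.einsum('jied,bid->bjie', torch.sign(W), torch.sign(u))
    return u_hat * alpha * beta.unsqueeze(1)     # (B, out_caps, in_caps, out_dim)


class XnODRLayer(torch.nn.Module):
    # Fully connected capsule layer whose linear projection is xnorized
    # *outside* the Dynamic Routing loop (the XnODR variant); XnIDR would
    # instead binarize the projection applied inside each routing iteration.

    def __init__(self, in_caps, in_dim, out_caps, out_dim, num_routing=3):
        super().__init__()
        self.num_routing = num_routing
        # One (out_dim x in_dim) transform per (output capsule, input capsule) pair.
        self.W = torch.nn.Parameter(
            0.01 * torch.randn(out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):                                   # u: (B, in_caps, in_dim)
        u_hat = xnor_project(u, self.W)                     # binarized prediction vectors
        b = torch.zeros(u_hat.shape[:3], device=u.device)   # routing logits (B, out_caps, in_caps)
        for _ in range(self.num_routing):                   # standard Dynamic Routing
            c = F.softmax(b, dim=1)                         # coupling coefficients over output capsules
            s = (c.unsqueeze(-1) * u_hat).sum(dim=2)        # weighted sum -> (B, out_caps, out_dim)
            v = squash(s)                                    # output capsule vectors
            b = b + (u_hat * v.unsqueeze(2)).sum(dim=-1)    # agreement update
        return v


layer = XnODRLayer(in_caps=32, in_dim=8, out_caps=10, out_dim=16)
v = layer(torch.randn(4, 32, 8))    # -> torch.Size([4, 10, 16])
```

In this reading, the binarized projection replaces only the expensive linear transform, while the routing iterations themselves stay in full precision, which is where the FLOP and parameter savings reported above would come from.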
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- LipKernel: Lipschitz-Bounded Convolutional Neural Networks via Dissipative Layers [0.0468732641979009]
We propose a layer-wise parameterization for convolutional neural networks (CNNs) that includes built-in robustness guarantees.
Our method, LipKernel, directly parameterizes dissipative convolution kernels using a 2-D Roesser-type state-space model.
We show that the run-time of our method is orders of magnitude lower than that of state-of-the-art Lipschitz-bounded networks.
arXiv Detail & Related papers (2024-10-29T17:20:14Z)
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method achieve a 1.45-9.39x speedup over baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions [2.7086888205833968]
Field-Programmable Gate Array (FPGA) accelerators have proven successful in handling latency- and resource-critical deep neural network (DNN) inference tasks.
We propose relaxing the boundaries of neurons and mapping entire sub-networks to a single LUT.
We validate our proposed method on a known latency-critical task, jet substructure tagging, and on the classical computer vision task, digit classification using MNIST.
arXiv Detail & Related papers (2024-02-29T16:10:21Z)
- TransXNet: Learning Both Global and Local Dynamics with a Dual Dynamic Token Mixer for Visual Recognition [71.6546914957701]
We propose a lightweight Dual Dynamic Token Mixer (D-Mixer) that aggregates global information and local details in an input-dependent way.
We use D-Mixer as the basic building block to design TransXNet, a novel hybrid CNN-Transformer vision backbone network.
In the ImageNet-1K image classification task, TransXNet-T surpasses Swin-T by 0.3% in top-1 accuracy while requiring less than half of the computational cost.
arXiv Detail & Related papers (2023-10-30T09:35:56Z)
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers [105.74546828182834]
We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs with diverse difficulty levels.
We present the dynamic slimmable network (DS-Net) and the dynamic slice-able network (DS-Net++), which input-dependently adjust the number of filters in CNNs and multiple dimensions in both CNNs and transformers.
arXiv Detail & Related papers (2021-09-21T09:57:21Z)
- Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T) [17.13246260883765]
Deep neural networks (DNNs) have shown remarkable success in a variety of machine learning applications.
In recent years, there is an increasing interest in deploying DNNs to resource-constrained devices with limited energy, memory, and computational budget.
We propose Entropy-Constrained Trained Ternarization (EC2T), a general framework to create sparse and ternary neural networks.
arXiv Detail & Related papers (2020-04-02T15:38:00Z)
- DHP: Differentiable Meta Pruning via HyperNetworks [158.69345612783198]
This paper introduces a differentiable pruning method via hypernetworks for automatic network pruning.
Latent vectors control the output channels of the convolutional layers in the backbone network and act as a handle for the pruning of the layers.
Experiments are conducted on various networks for image classification, single image super-resolution, and denoising.
arXiv Detail & Related papers (2020-03-30T17:59:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.