ProgressiveSpinalNet architecture for FC layers
- URL: http://arxiv.org/abs/2103.11373v1
- Date: Sun, 21 Mar 2021 11:54:50 GMT
- Title: ProgressiveSpinalNet architecture for FC layers
- Authors: Praveen Chopra
- Abstract summary: In deep learning models, the FC layer plays the most important role in classifying the input based on the features learned by the previous layers.
This paper aims to reduce this large number of parameters significantly while improving performance.
The motivation is inspired by SpinalNet and other biologically inspired architectures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In deep learning models, the FC (fully connected) layer plays the most
important role in classifying the input based on the features learned by the
previous layers. The FC layers have the highest number of parameters, and
fine-tuning this large number of parameters consumes most of the computational
resources, so this paper aims to reduce the number of parameters significantly
while improving performance. The motivation is inspired by SpinalNet and other
biologically inspired architectures. The proposed architecture has a gradient
highway between the input and output layers, which solves the problem of
vanishing gradients in deep networks. Every layer receives the outputs of the
previous layers as well as the CNN layer output, so all layers contribute,
together with the last layer, to the decision. This approach improves
classification performance over the SpinalNet architecture and achieves SOTA
performance on many datasets such as Caltech101, KMNIST, QMNIST and EMNIST.
The source code is available at
https://github.com/praveenchopra/ProgressiveSpinalNet.
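As a reading aid, the connectivity described in the abstract can be sketched in a few lines of PyTorch: every FC layer sees the CNN feature vector together with the outputs of all earlier FC layers, and the final classifier sees all of them again, giving a direct gradient path from input to output. The class name, hidden width, and ReLU activation below are illustrative assumptions, not the authors' exact implementation; see the repository linked above for that.

    import torch
    import torch.nn as nn


    class ProgressiveFCHead(nn.Module):
        """Illustrative FC head: each hidden layer receives the CNN features
        plus every previous hidden output, and the classifier receives the
        features plus all hidden outputs (the "gradient highway")."""

        def __init__(self, in_features, hidden_dim, num_layers, num_classes):
            super().__init__()
            self.layers = nn.ModuleList()
            width = in_features
            for _ in range(num_layers):
                self.layers.append(nn.Sequential(nn.Linear(width, hidden_dim), nn.ReLU()))
                width += hidden_dim  # the next layer also sees this layer's output
            self.classifier = nn.Linear(width, num_classes)

        def forward(self, x):  # x: (batch, in_features) CNN feature vector
            feats = [x]
            for layer in self.layers:
                feats.append(layer(torch.cat(feats, dim=1)))
            return self.classifier(torch.cat(feats, dim=1))


    # Example: replace the FC head of a CNN that outputs 512-d features,
    # e.g. for the 101 classes of Caltech101 (sizes chosen for illustration).
    head = ProgressiveFCHead(in_features=512, hidden_dim=128, num_layers=3, num_classes=101)
    logits = head(torch.randn(4, 512))  # -> shape (4, 101)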
Related papers
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z) - WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure, jointly trained with gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
arXiv Detail & Related papers (2023-01-03T20:57:22Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - Towards Disentangling Information Paths with Coded ResNeXt [11.884259630414515]
We take a novel approach to enhance the transparency of the function of the whole network.
We propose a neural network architecture for classification, in which the information that is relevant to each class flows through specific paths.
arXiv Detail & Related papers (2022-02-10T21:45:49Z) - Effectiveness of Deep Networks in NLP using BiDAF as an example
architecture [0.0]
I explore the effectiveness of deep networks, focusing on the model encoder layer of BiDAF.
I believe the next great model in NLP will fold in a solid language modeling component like BERT within a composite architecture.
arXiv Detail & Related papers (2021-08-31T20:50:18Z) - Wise-SrNet: A Novel Architecture for Enhancing Image Classification by
Learning Spatial Resolution of Feature Maps [0.5892638927736115]
One of the main challenges since the advancement of convolutional neural networks is how to connect the extracted feature map to the final classification layer.
In this paper, we aim to tackle this problem by replacing the GAP layer with a new architecture called Wise-SrNet.
It is inspired by the depthwise convolutional idea and is designed for processing spatial resolution while not increasing computational cost.
arXiv Detail & Related papers (2021-04-26T00:37:11Z) - GradInit: Learning to Initialize Neural Networks for Stable and
Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
arXiv Detail & Related papers (2021-02-16T11:45:35Z) - Train your classifier first: Cascade Neural Networks Training from upper
layers to lower layers [54.47911829539919]
We develop a novel top-down training method which can be viewed as an algorithm for searching for high-quality classifiers.
We tested this method on automatic speech recognition (ASR) tasks and language modelling tasks.
The proposed method consistently improves recurrent neural network ASR models on Wall Street Journal, self-attention ASR models on Switchboard, and AWD-LSTM language models on WikiText-2.
arXiv Detail & Related papers (2021-02-09T08:19:49Z) - Do We Need Fully Connected Output Layers in Convolutional Networks? [40.84294968326573]
We show that the typical approach of having a fully connected final output layer is inefficient in terms of parameter count.
We are able to achieve comparable performance to a traditionally learned fully connected classification output layer on the ImageNet-1K, CIFAR-100, Stanford Cars-196, and Oxford Flowers-102 datasets.
arXiv Detail & Related papers (2020-04-28T15:21:44Z) - Convolutional Networks with Dense Connectivity [59.30634544498946]
We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers.
We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks.
arXiv Detail & Related papers (2020-01-08T06:54:53Z)
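For reference, the dense-connectivity pattern summarized in the last entry above (DenseNet) can be sketched in PyTorch as follows. This is a minimal illustrative dense block only: it omits the 1x1 bottleneck convolutions and the transition layers of the full DenseNet, and the sizes used are arbitrary.

    import torch
    import torch.nn as nn


    class DenseBlock(nn.Module):
        """Minimal dense block: every layer takes the concatenation of all
        preceding feature maps as input and adds `growth` new channels."""

        def __init__(self, in_channels, growth, num_layers):
            super().__init__()
            self.layers = nn.ModuleList()
            for i in range(num_layers):
                channels = in_channels + i * growth
                self.layers.append(nn.Sequential(
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, growth, kernel_size=3, padding=1, bias=False),
                ))

        def forward(self, x):
            features = [x]
            for layer in self.layers:
                features.append(layer(torch.cat(features, dim=1)))
            return torch.cat(features, dim=1)


    block = DenseBlock(in_channels=64, growth=32, num_layers=4)
    out = block(torch.randn(2, 64, 32, 32))  # -> shape (2, 64 + 4 * 32, 32, 32)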
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content and is not responsible for any consequences of its use.