Pooling Methods in Deep Neural Networks, a Review
- URL: http://arxiv.org/abs/2009.07485v1
- Date: Wed, 16 Sep 2020 06:11:40 GMT
- Title: Pooling Methods in Deep Neural Networks, a Review
- Authors: Hossein Gholamalinezhad and Hossein Khosravi
- Abstract summary: The pooling layer is an important layer that performs down-sampling on the feature maps coming from the previous layer.
In this paper, we review some of the well-known and useful pooling methods.
- Score: 6.1678491628787455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, Deep Neural Networks are among the main tools used in various
sciences. Convolutional Neural Network is a special type of DNN consisting of
several convolution layers, each followed by an activation function and a
pooling layer. The pooling layer is an important layer that performs
down-sampling on the feature maps coming from the previous layer and produces
new feature maps with a reduced resolution. This layer drastically reduces
the spatial dimension of the input. It serves two main purposes. The first is to
reduce the number of parameters or weights, thus lessening the computational
cost. The second is to control the overfitting of the network. An ideal pooling
method is expected to extract only useful information and discard irrelevant
details. There are many methods for implementing the pooling operation
in Deep Neural Networks. In this paper, we review some of the well-known and
useful pooling methods.
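As a concrete illustration of the down-sampling described above, the following is a minimal NumPy sketch of max and average pooling over a single 2-D feature map; the function name and the toy input are illustrative and not taken from the paper.

```python
import numpy as np

def pool2d(feature_map, k=2, stride=2, mode="max"):
    """Down-sample one 2-D feature map with a k x k window.

    Output size follows the usual formula: floor((H - k) / stride) + 1.
    """
    H, W = feature_map.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    out = np.empty((out_h, out_w), dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + k,
                                 j * stride:j * stride + k]
            # Keep the strongest response (max) or the local average (avg).
            out[i, j] = window.max() if mode == "max" else window.mean()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 feature map
print(pool2d(x, mode="max"))  # [[ 5.  7.] [13. 15.]]
print(pool2d(x, mode="avg"))  # [[ 2.5  4.5] [10.5 12.5]]
```

Note how the 4x4 input becomes a 2x2 output: each pooling step shrinks the spatial dimensions by the stride factor, which is what reduces the computation and the number of weights needed in the layers that follow.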
Related papers
- Half-Space Feature Learning in Neural Networks [2.3249139042158853]
There currently exist two extreme viewpoints for neural network feature learning.
We argue, based on a novel viewpoint, that neither interpretation is likely to be correct.
We use this alternate interpretation to motivate a model, called the Deep Linearly Gated Network (DLGN).
arXiv Detail & Related papers (2024-04-05T12:03:19Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Increasing Depth of Neural Networks for Life-long Learning [2.0305676256390934]
We propose a novel method for continual learning based on the increasing depth of neural networks.
This work explores whether extending neural network depth may be beneficial in a life-long learning setting.
arXiv Detail & Related papers (2022-02-22T11:21:41Z)
- Identifying Class Specific Filters with L1 Norm Frequency Histograms in Deep CNNs [1.1278903078792917]
We analyze the final and penultimate layers of Deep Convolutional Networks.
We identify subsets of features that contribute most towards the network's decision for a class.
arXiv Detail & Related papers (2021-12-14T19:40:55Z)
- Dilated Fully Convolutional Neural Network for Depth Estimation from a Single Image [1.0131895986034314]
We present an advanced Dilated Fully Convolutional Neural Network to address the deficiencies of traditional CNNs.
Taking advantage of the exponential expansion of the receptive field in dilated convolutions (stacking 3x3 convolutions with dilations 1, 2, and 4, for instance, already yields a 15x15 receptive field), our model can minimize the loss of resolution.
We show experimentally on the NYU Depth V2 dataset that the depth prediction obtained from our model is considerably closer to ground truth than that from traditional CNN techniques.
arXiv Detail & Related papers (2021-03-12T23:19:32Z)
- Refining activation downsampling with SoftPool [74.1840492087968]
Convolutional Neural Networks (CNNs) use pooling to decrease the size of activation maps.
We propose SoftPool: a fast and efficient method for exponentially weighted activation downsampling.
We show that SoftPool can retain more information in the reduced activation maps (a minimal sketch of this weighting appears after this list).
arXiv Detail & Related papers (2021-01-02T12:09:49Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Monocular Depth Estimation Using Multi Scale Neural Network And Feature Fusion [0.0]
Our network uses two different blocks: the first uses different filter sizes for convolution and merges all the individual feature maps.
The second block uses dilated convolutions in place of fully connected layers thus reducing computations and increasing the receptive field.
We train and test our network on the Make3D, NYU Depth V2, and KITTI datasets using standard evaluation metrics for depth estimation, namely RMSE and SILog losses.
arXiv Detail & Related papers (2020-09-11T18:08:52Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, mainly categorized into native solutions that directly conduct binarization and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
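The SoftPool entry above describes exponentially weighted activation downsampling; as referenced there, here is a minimal NumPy sketch of that weighting for a single pooling window, following the softmax-of-activations formulation from the SoftPool paper. The helper name and the toy window are ours, not from the paper.

```python
import numpy as np

def softpool_window(window):
    """Exponentially weighted pooling of one window (SoftPool-style).

    Each activation a_i receives weight exp(a_i) / sum_j exp(a_j), so strong
    activations dominate the output while weaker ones still contribute,
    unlike max pooling, which discards them entirely.
    """
    w = np.exp(window - window.max())  # subtract the max for numerical stability
    w /= w.sum()
    return float((w * window).sum())

window = np.array([[1.0, 2.0], [3.0, 4.0]])
print(softpool_window(window))  # ~3.49: between the mean (2.5) and the max (4.0)
```

Because every activation keeps a nonzero weight, the reduced map preserves more of the original signal than a hard max, which is the information-retention property the summary claims.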
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.