MorphPool: Efficient Non-linear Pooling & Unpooling in CNNs
- URL: http://arxiv.org/abs/2211.14037v1
- Date: Fri, 25 Nov 2022 11:25:20 GMT
- Title: MorphPool: Efficient Non-linear Pooling & Unpooling in CNNs
- Authors: Rick Groenendijk, Leo Dorst, and Theo Gevers
- Abstract summary: Pooling is essentially an operation from the field of Mathematical Morphology, with max pooling as a limited special case.
In addition to pooling operations, encoder-decoder networks used for pixel-level predictions also require unpooling.
Extensive experimentation on two tasks and three large-scale datasets shows that morphological pooling and unpooling lead to improved predictive performance at much reduced parameter counts.
- Score: 9.656707333320037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pooling is essentially an operation from the field of Mathematical
Morphology, with max pooling as a limited special case. The more general
setting of MorphPooling greatly extends the tool set for building neural
networks. In addition to pooling operations, encoder-decoder networks used for
pixel-level predictions also require unpooling. It is common to combine
unpooling with convolution or deconvolution for up-sampling. However, using its
morphological properties, unpooling can be generalised and improved. Extensive
experimentation on two tasks and three large-scale datasets shows that
morphological pooling and unpooling lead to improved predictive performance at
much reduced parameter counts.
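The abstract's claim that max pooling is a special case of a morphological operation can be made concrete: grayscale dilation takes the maximum of the input plus a structuring element over each window, and a flat (all-zero) structuring element recovers ordinary max pooling. Below is a minimal NumPy sketch of this idea; the function name, flat loops, and learnable-element interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def morph_pool(f, s, stride=2):
    """Morphological (dilation) pooling of a 2-D feature map.

    f : (H, W) feature map
    s : (k, k) structuring element; s == 0 everywhere reduces this
        to plain max pooling, the "limited special case".
    out[i, j] = max over the window of (f + s).
    """
    k = s.shape[0]
    out_h = (f.shape[0] - k) // stride + 1
    out_w = (f.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w), dtype=f.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = f[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.max(window + s)  # dilation: value plus offset, then max
    return out
```

In a trainable setting `s` would be a learned parameter per channel; the erosion (min-based) counterpart plays the corresponding role on the unpooling side.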
Related papers
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Hierarchical Spherical CNNs with Lifting-based Adaptive Wavelets for Pooling and Unpooling [101.72318949104627]
We propose a novel framework of hierarchical convolutional neural networks (HS-CNNs) with a lifting structure to learn adaptive spherical wavelets for pooling and unpooling.
LiftHS-CNN ensures a more efficient hierarchical feature learning for both image- and pixel-level tasks.
arXiv Detail & Related papers (2022-05-31T07:23:42Z)
- Pooling Revisited: Your Receptive Field is Suboptimal [35.11562214480459]
The size and shape of the receptive field determine how the network aggregates local information.
We propose a simple yet effective Dynamically Optimized Pooling operation, referred to as DynOPool.
Our experiments show that the models equipped with the proposed learnable resizing module outperform the baseline networks on multiple datasets in image classification and semantic segmentation.
arXiv Detail & Related papers (2022-05-30T17:03:40Z)
- AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling [82.08631594071656]
Pooling layers are essential building blocks of Convolutional Neural Networks (CNNs).
We propose an adaptive and exponentially weighted pooling method named adaPool.
We demonstrate how adaPool improves the preservation of detail through a range of tasks including image and video classification and object detection.
arXiv Detail & Related papers (2021-11-01T08:50:37Z)
- Ordinal Pooling [26.873004843826962]
Ordinal pooling rearranges elements of a pooling region in a sequence and assigns a different weight to each element based upon its order in the sequence.
Experiments suggest that it is advantageous for the networks to perform different types of pooling operations within a pooling layer.
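The rank-weighting scheme described above is easy to sketch: sort each window's values and take a weighted sum by rank, so that max and average pooling fall out as special weight vectors. This is a hedged NumPy illustration of the concept, not the authors' code.

```python
import numpy as np

def ordinal_pool(f, w, k=2, stride=2):
    """Ordinal pooling: sort each window, then weight values by rank.

    w : (k*k,) rank weights (largest value first).
        w = [1, 0, 0, 0]       -> max pooling
        w = [.25, .25, .25, .25] -> average pooling
    """
    out_h = (f.shape[0] - k) // stride + 1
    out_w = (f.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = f[i * stride:i * stride + k, j * stride:j * stride + k].ravel()
            out[i, j] = np.dot(np.sort(window)[::-1], w)  # descending order, rank-weighted
    return out
```

Learning `w` per layer lets the network interpolate between (and beyond) max and average pooling, which matches the observation that different pooling behaviours help within one layer.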
arXiv Detail & Related papers (2021-09-03T14:33:02Z)
- Refining activation downsampling with SoftPool [74.1840492087968]
Convolutional Neural Networks (CNNs) use pooling to decrease the size of activation maps.
We propose SoftPool: a fast and efficient method for exponentially weighted activation downsampling.
We show that SoftPool can retain more information in the reduced activation maps.
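The exponential weighting behind SoftPool can be sketched directly: each window value is weighted by its softmax within the window, so large activations dominate smoothly rather than winner-take-all. A minimal NumPy version of that idea (loop structure and naming are illustrative):

```python
import numpy as np

def soft_pool(f, k=2, stride=2):
    """SoftPool-style downsampling: softmax-weighted sum per window,
    giving an output between the window mean and the window max."""
    out_h = (f.shape[0] - k) // stride + 1
    out_w = (f.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            a = f[i * stride:i * stride + k, j * stride:j * stride + k].ravel()
            e = np.exp(a - a.max())            # numerically stabilised softmax
            out[i, j] = np.dot(e / e.sum(), a)  # exponentially weighted sum
    return out
```

Because every element contributes with a non-zero weight, gradients flow to all activations in the window, which is the mechanism behind the claimed information retention.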
arXiv Detail & Related papers (2021-01-02T12:09:49Z)
- Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time [70.15611146583068]
We study training of Convolutional Neural Networks (CNNs) with ReLU activations.
We introduce an exact convex optimization formulation with polynomial complexity with respect to the number of data samples, the number of neurons, and the data dimension.
arXiv Detail & Related papers (2020-06-26T04:47:20Z)
- Multi Layer Neural Networks as Replacement for Pooling Operations [13.481518628796692]
We show that one perceptron can already be used effectively as a pooling operation without increasing the complexity of the model.
We compare our approach to tensor convolution with strides as a pooling operation and show that our approach is both effective and reduces complexity.
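Using one perceptron as a pooling operation amounts to flattening each window and applying a single learnable affine map plus a non-linearity. The sketch below assumes a ReLU activation and per-window weight sharing; with uniform weights, zero bias, and non-negative inputs it reduces to average pooling.

```python
import numpy as np

def perceptron_pool(f, w, b, k=2, stride=2):
    """One perceptron as the pooling operation:
    out = relu(w . window + b) for each k x k window."""
    out_h = (f.shape[0] - k) // stride + 1
    out_w = (f.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = f[i * stride:i * stride + k, j * stride:j * stride + k].ravel()
            out[i, j] = np.maximum(0.0, np.dot(w, window) + b)  # single neuron
    return out
```

Only `k*k + 1` parameters are needed per channel, which is how the approach keeps model complexity low relative to strided tensor convolution.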
arXiv Detail & Related papers (2020-06-12T07:08:38Z)
- Strip Pooling: Rethinking Spatial Pooling for Scene Parsing [161.7521770950933]
We introduce strip pooling, which considers a long but narrow kernel, i.e., 1xN or Nx1.
We compare the performance of the proposed strip pooling and conventional spatial pooling techniques.
Both novel pooling-based designs are lightweight and can serve as an efficient plug-and-play module in existing scene parsing networks.
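The core of strip pooling, averaging along a long, narrow 1×N or N×1 kernel, is a one-liner in NumPy. The sketch below shows the two strip averages and the broadcast fusion; the fusion step in the actual module also involves 1-D convolutions, which are omitted here for brevity.

```python
import numpy as np

def strip_pool(f):
    """Strip pooling core: average over full rows (1 x W kernel) and
    full columns (H x 1 kernel), broadcast back to (H, W), and fuse."""
    row = f.mean(axis=1, keepdims=True)  # (H, 1): one value per row strip
    col = f.mean(axis=0, keepdims=True)  # (1, W): one value per column strip
    return row + col                     # broadcast-sum back to (H, W)
```

Each output position thus aggregates context along an entire row and an entire column, capturing the long-range, anisotropic dependencies that square kernels miss.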
arXiv Detail & Related papers (2020-03-30T10:40:11Z)
- RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference [24.351577383531616]
We introduce RNNPool, a novel pooling operator based on Recurrent Neural Networks (RNNs).
An RNNPool layer can effectively replace multiple blocks in a variety of architectures, such as MobileNets and DenseNet, when applied to standard vision tasks like image classification and face detection.
We use RNNPool with the standard S3FD architecture to construct a face detection method that achieves state-of-the-art MAP for tiny ARM Cortex-M4 class microcontrollers with under 256 KB of RAM.
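The idea of summarising a large patch with recurrent sweeps can be sketched as follows: one small RNN sweeps each row of the patch to a hidden state, and a second RNN sweeps those row summaries down to a single vector that replaces the patch. This is a heavily simplified, hedged sketch (plain tanh cells, unidirectional sweeps); the actual RNNPool operator uses bidirectional passes and a different cell.

```python
import numpy as np

def rnn_step(h, x, W, U):
    # one step of a tiny vanilla RNN cell: h' = tanh(W x + U h)
    return np.tanh(W @ x + U @ h)

def rnn_pool(patch, W, U, V, Z):
    """RNNPool-style summary of a (k, k, c) patch into one vector.

    W (h, c), U (h, h) : first RNN, sweeps each row left to right.
    V (g, h), Z (g, g) : second RNN, sweeps the row summaries top to bottom.
    """
    k = patch.shape[0]
    row_states = []
    for r in range(k):
        h = np.zeros(U.shape[0])
        for col in range(k):
            h = rnn_step(h, patch[r, col], W, U)  # consume one pixel vector
        row_states.append(h)
    g = np.zeros(Z.shape[0])
    for h in row_states:
        g = rnn_step(g, h, V, Z)  # aggregate row summaries
    return g
```

Because only one row of hidden states is live at a time, peak memory stays small, which is what makes this style of pooling attractive on microcontrollers with tight RAM budgets.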
arXiv Detail & Related papers (2020-02-27T05:22:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.