AdaPool: Exponential Adaptive Pooling for Information-Retaining
Downsampling
- URL: http://arxiv.org/abs/2111.00772v2
- Date: Tue, 2 Nov 2021 07:42:24 GMT
- Title: AdaPool: Exponential Adaptive Pooling for Information-Retaining
Downsampling
- Authors: Alexandros Stergiou and Ronald Poppe
- Abstract summary: Pooling layers are essential building blocks of Convolutional Neural Networks (CNNs).
We propose an adaptive and exponentially weighted pooling method named adaPool.
We demonstrate how adaPool improves the preservation of detail through a range of tasks including image and video classification and object detection.
- Score: 82.08631594071656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pooling layers are essential building blocks of Convolutional Neural Networks
(CNNs) that reduce computational overhead and increase the receptive fields of
subsequent convolutional operations. They aim to produce downsampled volumes
that closely resemble the input volume while, ideally, also being
computationally and memory efficient. It is a challenge to meet both
requirements jointly. To this end, we propose an adaptive and exponentially
weighted pooling method named adaPool. Our proposed method uses a parameterized
fusion of two sets of pooling kernels that are based on the exponent of the
Dice-Sorensen coefficient and the exponential maximum, respectively. A key
property of adaPool is its bidirectional nature. In contrast to common pooling
methods, weights can be used to upsample a downsampled activation map. We term
this method adaUnPool. We demonstrate how adaPool improves the preservation of
detail through a range of tasks including image and video classification and
object detection. We then evaluate adaUnPool on image and video frame
super-resolution and frame interpolation tasks. For benchmarking, we introduce
Inter4K, a novel high-quality, high frame-rate video dataset. Our combined
experiments demonstrate that adaPool systematically achieves better results
across tasks and backbone architectures, while introducing a minor additional
computational and memory overhead.
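To make the fusion concrete, the sketch below illustrates the idea for a single pooling window in NumPy: one kernel weights activations by a per-channel softmax over the window (the exponential maximum), the other weights window positions by the exponent of a soft Dice-Sorensen similarity to the window mean, and a parameter beta blends the two pooled outputs. The function name, the exact similarity formula, and the scalar treatment of beta are illustrative assumptions based on the abstract, not the authors' reference implementation.

```python
import numpy as np

def adapool_window(a, beta):
    """Pool one k x k window of channel vectors. a: (k*k, C), beta in [0, 1].

    Sketch of the fusion described in the abstract:
    - eM: exponential-maximum (softmax) weighting of activations.
    - eDSCW: weighting by the exponent of a soft Dice-Sorensen
      similarity between each activation vector and the window mean.
    Exact definitions may differ from the authors' implementation.
    """
    # eM kernel: per-channel softmax over the window positions
    e = np.exp(a - a.max(axis=0, keepdims=True))          # (k*k, C)
    w_em = e / e.sum(axis=0, keepdims=True)
    out_em = (w_em * a).sum(axis=0)                        # (C,)

    # eDSCW kernel: soft Dice-Sorensen similarity to the window mean
    mu = a.mean(axis=0)                                    # (C,)
    dsc = 2.0 * (a @ mu) / ((a ** 2).sum(axis=1) + (mu ** 2).sum() + 1e-8)
    w_dsc = np.exp(dsc - dsc.max())
    w_dsc /= w_dsc.sum()
    out_dsc = (w_dsc[:, None] * a).sum(axis=0)             # (C,)

    # Parameterized fusion of the two pooled outputs
    return beta * out_dsc + (1.0 - beta) * out_em

# Example: pool a single 2x2 window with 3 channels
window = np.random.randn(4, 3)
print(adapool_window(window, beta=0.5))
```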
Related papers
- MorphPool: Efficient Non-linear Pooling & Unpooling in CNNs [9.656707333320037]
Pooling is essentially an operation from the field of Mathematical Morphology, with max pooling as a limited special case.
In addition to pooling operations, encoder-decoder networks used for pixel-level predictions also require unpooling.
Extensive experimentation on two tasks and three large-scale datasets shows that morphological pooling and unpooling lead to improved predictive performance at much reduced parameter counts.
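As a rough illustration of the morphological view, the sketch below treats pooling of one window as a dilation, out = max_i(x_i + s_i), with a structuring element s; setting s = 0 collapses this to plain max pooling, which is the summary's point about max pooling being a limited special case. The parameterization and names are assumptions, not MorphPool's actual operator.

```python
import numpy as np

def morph_pool(region, structuring_element):
    """Dilation-style pooling of one flat window: max_i (x_i + s_i)."""
    return float(np.max(region + structuring_element))

region = np.array([0.2, 0.9, 0.4, 0.1])
print(morph_pool(region, np.zeros(4)))                 # s = 0: plain max pooling
print(morph_pool(region, np.array([0.0, -0.5, 0.3, 0.0])))  # learned offsets change the winner
```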
arXiv Detail & Related papers (2022-11-25T11:25:20Z)
- Hierarchical Spherical CNNs with Lifting-based Adaptive Wavelets for Pooling and Unpooling [101.72318949104627]
We propose a novel framework of hierarchical spherical convolutional neural networks (HS-CNNs) with a lifting structure to learn adaptive spherical wavelets for pooling and unpooling.
LiftHS-CNN ensures a more efficient hierarchical feature learning for both image- and pixel-level tasks.
arXiv Detail & Related papers (2022-05-31T07:23:42Z)
- Pooling Revisited: Your Receptive Field is Suboptimal [35.11562214480459]
The size and shape of the receptive field determine how the network aggregates local information.
We propose a simple yet effective Dynamically Optimized Pooling operation, referred to as DynOPool.
Our experiments show that the models equipped with the proposed learnable resizing module outperform the baseline networks on multiple datasets in image classification and semantic segmentation.
arXiv Detail & Related papers (2022-05-30T17:03:40Z)
- Revisiting Pooling through the Lens of Optimal Transport [25.309212446782684]
We develop a novel and solid algorithmic pooling framework through the lens of optimal transport.
We make the parameters of the UOT problem learnable, and accordingly, propose a generalized pooling layer called UOT-Pooling for neural networks.
We test our UOT-Pooling layers in two application scenarios, including multi-instance learning (MIL) and graph embedding.
arXiv Detail & Related papers (2022-01-23T06:20:39Z)
- Ordinal Pooling [26.873004843826962]
Ordinal pooling rearranges elements of a pooling region in a sequence and assigns a different weight to each element based upon its order in the sequence.
Experiments suggest that it is advantageous for the networks to perform different types of pooling operations within a pooling layer.
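A minimal sketch of the mechanism described above: sort the values of a pooling region and take a weighted sum in which each weight is tied to a rank rather than a spatial position. The descending order and the normalization are illustrative assumptions.

```python
import numpy as np

def ordinal_pool(region, rank_weights):
    """Pool one region by ordering its values and weighting them by rank."""
    ordered = np.sort(region)[::-1]          # descending order
    w = rank_weights / rank_weights.sum()    # normalize the per-rank weights
    return float(np.dot(w, ordered))

# Example: a 2x2 window with weights leaning toward the largest value
region = np.array([0.2, 0.9, 0.4, 0.1])
print(ordinal_pool(region, np.array([0.5, 0.3, 0.15, 0.05])))
```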
arXiv Detail & Related papers (2021-09-03T14:33:02Z)
- Refining activation downsampling with SoftPool [74.1840492087968]
Convolutional Neural Networks (CNNs) use pooling to decrease the size of activation maps.
We propose SoftPool: a fast and efficient method for exponentially weighted activation downsampling.
We show that SoftPool can retain more information in the reduced activation maps.
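The exponential weighting can be sketched for a single window as a softmax over the activations, so larger values dominate the output without the remaining values being discarded. This is a minimal one-window illustration, not the paper's strided 2D/3D implementation.

```python
import numpy as np

def softpool_window(a):
    """Exponentially weighted pooling of one flat window (SoftPool-style)."""
    w = np.exp(a - a.max())   # softmax weights over the window
    w /= w.sum()
    return float(np.dot(w, a))

print(softpool_window(np.array([0.1, 0.7, 0.3, 0.5])))
```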
arXiv Detail & Related papers (2021-01-02T12:09:49Z)
- Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z)
- Strip Pooling: Rethinking Spatial Pooling for Scene Parsing [161.7521770950933]
We introduce strip pooling, which considers a long but narrow kernel, i.e., 1xN or Nx1.
We compare the performance of the proposed strip pooling and conventional spatial pooling techniques.
Both novel pooling-based designs are lightweight and can serve as an efficient plug-and-play module in existing scene parsing networks.
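A minimal sketch of the kernel shape only: instead of a square window, a 1xN strip averages an entire row and an Nx1 strip averages an entire column. The step that expands and fuses these pooled strips back into the feature map is omitted here.

```python
import numpy as np

def strip_pool(x):
    """Pool a 2D map with long, narrow kernels instead of square ones."""
    row_pool = x.mean(axis=1)   # 1 x N strips: one value per row
    col_pool = x.mean(axis=0)   # N x 1 strips: one value per column
    return row_pool, col_pool

x = np.arange(12, dtype=float).reshape(3, 4)
print(strip_pool(x))
```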
arXiv Detail & Related papers (2020-03-30T10:40:11Z)
- Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning [73.82875010696849]
Machine learning algorithms are deployed at the network edge for training artificial intelligence (AI) models.
This paper focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation.
arXiv Detail & Related papers (2020-03-10T05:52:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.