Maximal function pooling with applications
- URL: http://arxiv.org/abs/2103.01292v1
- Date: Mon, 1 Mar 2021 20:30:04 GMT
- Title: Maximal function pooling with applications
- Authors: Wojciech Czaja, Weilin Li, Yiran Li, Mike Pekala
- Abstract summary: Maxfun pooling is inspired by the Hardy-Littlewood maximal function.
It is presented as a viable alternative to some of the most popular pooling functions, such as max pooling and average pooling.
We demonstrate the features of maxfun pooling with two applications: first in the context of convolutional sparse coding, and then for image classification.
- Score: 4.446564162927513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the Hardy-Littlewood maximal function, we propose a novel pooling
strategy called maxfun pooling. It is presented both as a viable
alternative to some of the most popular pooling functions, such as max pooling
and average pooling, and as a way of interpolating between these two
algorithms. We demonstrate the features of maxfun pooling with two
applications: first in the context of convolutional sparse coding, and then for
image classification.
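For a concrete picture, here is a minimal sketch of maxfun pooling on a single 2D pooling region. It assumes the pooled value is the maximum, over all square sub-windows of side at least `min_size`, of the average of absolute values over that sub-window (a discrete analogue of the Hardy-Littlewood maximal function); the paper's exact window family and normalization may differ. With `min_size = 1` this reduces to max pooling of magnitudes, and with `min_size` equal to the full window side it reduces to average pooling, which is the interpolation mentioned above.

```python
import numpy as np

def maxfun_pool(patch, min_size=1):
    """Maxfun pooling on one square 2D pooling region (a sketch).

    Returns the maximum, over all square sub-windows of side >= min_size,
    of the average of |patch| over that sub-window.  min_size = 1 recovers
    max pooling of magnitudes; min_size = patch side recovers average pooling.
    """
    n = patch.shape[0]
    a = np.abs(patch)
    best = -np.inf
    for k in range(min_size, n + 1):      # sub-window side length
        for i in range(n - k + 1):        # top-left row of sub-window
            for j in range(n - k + 1):    # top-left column of sub-window
                best = max(best, a[i:i + k, j:j + k].mean())
    return best
```

Applied over non-overlapping regions of a feature map, this slots in wherever max or average pooling would.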
Related papers
- Enumeration of max-pooling responses with generalized permutohedra [39.58317527488534]
Max-pooling layers are functions that downsample input arrays by taking the maximum over shifted windows of input coordinates.
We characterize the faces of the associated polytopes (generalized permutohedra) and obtain generating functions and closed formulas for the number of vertices and facets of a 1D max-pooling layer, depending on the size of the pooling window and the stride.
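As a reference point for the definition just given, a minimal 1D max-pooling routine (function and parameter names are ours):

```python
import numpy as np

def max_pool_1d(x, window, stride):
    """Downsample x by taking the maximum over shifted windows,
    exactly as described above."""
    return np.array([x[i:i + window].max()
                     for i in range(0, len(x) - window + 1, stride)])
```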
arXiv Detail & Related papers (2022-09-29T17:45:54Z)
- The Theoretical Expressiveness of Maxpooling [4.028503203417233]
We develop a theoretical framework analyzing ReLU based approximations to max pooling.
We find that max pooling cannot be efficiently replicated using ReLU activations.
We conclude that the main cause of the difference between max pooling and an optimal approximation can be overcome with other architectural decisions.
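Constructions of this kind rest on the exact identity max(a, b) = a + ReLU(b - a); the sketch below (our own illustration, not the paper's construction) shows that a window maximum can be computed with additions and ReLUs alone, the efficiency question being how much depth and width this costs.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def max_via_relu(values):
    """Exact maximum of a sequence using only addition and ReLU:
    max(a, b) = a + relu(b - a), folded pairwise into a running max."""
    m = values[0]
    for v in values[1:]:
        m = m + relu(v - m)
    return m
```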
arXiv Detail & Related papers (2022-03-02T10:45:53Z)
- AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling [82.08631594071656]
Pooling layers are essential building blocks of Convolutional Neural Networks (CNNs).
We propose an adaptive and exponentially weighted pooling method named adaPool.
We demonstrate how adaPool improves the preservation of detail through a range of tasks including image and video classification and object detection.
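adaPool's precise weighting combines two adaptive schemes; as an illustration of exponentially weighted pooling in general (not adaPool itself), here is a softmax-weighted average over one pooling region:

```python
import numpy as np

def exp_weighted_pool(patch):
    """Exponentially weighted pooling over one region: a softmax-
    weighted average, so larger activations dominate smoothly.
    Only the flavor of exponential weighting; adaPool's actual
    scheme is richer (see the paper)."""
    w = np.exp(patch - patch.max())   # subtract max for numerical stability
    w /= w.sum()
    return (w * patch).sum()
```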
arXiv Detail & Related papers (2021-11-01T08:50:37Z)
- Ordinal Pooling [26.873004843826962]
Ordinal pooling rearranges elements of a pooling region in a sequence and assigns a different weight to each element based upon its order in the sequence.
Experiments suggest that it is advantageous for the networks to perform different types of pooling operations within a pooling layer.
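A minimal sketch of the operation described above, with the rank weights passed in explicitly (in the paper they are learned):

```python
import numpy as np

def ordinal_pool(patch, weights):
    """Ordinal pooling over one region: sort activations in descending
    order and take a weighted sum by rank.  weights = [1, 0, ..., 0]
    gives max pooling; uniform weights give average pooling."""
    ranked = np.sort(patch.ravel())[::-1]
    return float(np.dot(weights, ranked))
```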
arXiv Detail & Related papers (2021-09-03T14:33:02Z)
- Submodular + Concave [53.208470310734825]
It is well established that first-order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a solvable convex body, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function.
This class of functions is a strict extension of both concave and continuous DR-submodular functions, for which no theoretical guarantee was previously known.
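As a generic first-order baseline for maximizing such an $F = G + C$ over a convex body (here the probability simplex as a stand-in; the paper's algorithms and guarantees are more refined), a projected-gradient-ascent sketch:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def pga_maximize(grad_F, x0, step, iters):
    """Projected gradient ascent on F = G + C over the simplex.
    grad_F: callable returning the gradient of F at x."""
    x = x0.copy()
    for _ in range(iters):
        x = project_simplex(x + step * grad_F(x))
    return x
```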
arXiv Detail & Related papers (2021-06-09T01:59:55Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
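For orientation, a sketch of the plain decentralized gradient descent baseline on a fixed network, where W is a doubly stochastic mixing matrix; the paper's optimal algorithms for time-varying networks are substantially more involved:

```python
import numpy as np

def decentralized_gd(grads, W, X0, step, iters):
    """Plain decentralized gradient descent: each node averages its
    neighbors' iterates through the mixing matrix W, then takes a
    local gradient step.  grads: list of per-node gradient callables;
    each row of X is one node's iterate."""
    X = X0.copy()
    for _ in range(iters):
        X = W @ X                                         # gossip/consensus step
        X -= step * np.stack([g(x) for g, x in zip(grads, X)])
    return X
```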
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Comparison of Methods Generalizing Max- and Average-Pooling [1.693200946453174]
Max- and average-pooling are the most popular methods for downsampling in convolutional neural networks.
In this paper, we compare different pooling methods that generalize both max- and average-pooling.
The results show that none of the more sophisticated methods performs significantly better on the considered classification task than standard max- or average-pooling.
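One standard member of this family (possibly, though not necessarily, among the compared methods) is power-mean (Lp) pooling, which interpolates between average (p = 1) and max (p → ∞) pooling on non-negative activations:

```python
import numpy as np

def lp_pool(patch, p):
    """Power-mean (Lp) pooling over one region: p = 1 gives the
    average of magnitudes, p -> inf tends to the maximum."""
    a = np.abs(patch)
    return float((a ** p).mean() ** (1.0 / p))
```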
arXiv Detail & Related papers (2021-03-02T14:26:51Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in the general case and obtain a convergence-rate estimate that does not accumulate errors as the method runs.
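A generic sketch of the mirror-descent template with the entropy mirror map on the probability simplex (the multiplicative-weights update), where policies naturally live; this is the textbook scheme the paper builds on, not its parallel stochastic variant:

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step, iters):
    """Mirror descent with the entropy mirror map on the probability
    simplex, i.e. the multiplicative-weights update.
    grad: callable returning a (sub)gradient of the objective at x."""
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))   # dual step under KL geometry
        x /= x.sum()                      # Bregman projection to the simplex
    return x
```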
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
- Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback [51.21673420940346]
Combinatorial bandits generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.
We focus on the pure-exploration problem of identifying the best arm with fixed confidence, as well as a more general setting, where the structure of the answer set differs from the one of the action set.
Based on a projection-free online learning algorithm for finite polytopes, it is the first computationally efficient algorithm which is asymptotically optimal and has competitive empirical performance.
arXiv Detail & Related papers (2021-01-21T10:35:09Z)
- Learning Aggregation Functions [78.47770735205134]
We introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality.
We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures.
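LAF's actual parametrization composes several learnable generalized means; as a toy stand-in for the idea of a learnable aggregator over sets of any cardinality, a single power mean with a learnable exponent r:

```python
import numpy as np

def learnable_power_mean(xs, r):
    """One learnable generalized (power) mean over a set of any size:
    r = 1 is the mean, r -> inf tends to max.  A toy illustration only;
    LAF's parametrization is richer (see the paper)."""
    xs = np.asarray(xs, dtype=float)
    return float((np.abs(xs) ** r).mean() ** (1.0 / r))
```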
arXiv Detail & Related papers (2020-12-15T18:28:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.