On the Shift Invariance of Max Pooling Feature Maps in Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2209.11740v2
- Date: Tue, 24 Oct 2023 12:17:47 GMT
- Authors: Hubert Leterme (UGA, LJK), Kévin Polisano (UGA, LJK), Valérie Perrier (Grenoble INP, LJK), Karteek Alahari (LJK)
- Abstract summary: Subsampled convolutions with Gabor-like filters are prone to aliasing, causing sensitivity to small input shifts.
We highlight the crucial role played by the filter's frequency and orientation in achieving stability.
We experimentally validate our theory by considering a deterministic feature extractor based on the dual-tree complex wavelet packet transform.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on improving the mathematical interpretability of
convolutional neural networks (CNNs) in the context of image classification.
Specifically, we tackle the instability issue arising in their first layer,
which tends to learn parameters that closely resemble oriented band-pass
filters when trained on datasets like ImageNet. Subsampled convolutions with
such Gabor-like filters are prone to aliasing, causing sensitivity to small
input shifts. In this context, we establish conditions under which the max
pooling operator approximates a complex modulus, which is nearly shift
invariant. We then derive a measure of shift invariance for subsampled
convolutions followed by max pooling. In particular, we highlight the crucial
role played by the filter's frequency and orientation in achieving stability.
We experimentally validate our theory by considering a deterministic feature
extractor based on the dual-tree complex wavelet packet transform, a particular
case of discrete Gabor-like decomposition.
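The abstract's central claim can be illustrated with a minimal 1D sketch (the filter length, frequency, stride, and pooling window below are illustrative choices, not values from the paper): the modulus of a subsampled complex Gabor-like convolution ("CMod") commutes with shifts that are multiples of the stride, and max pooling of the real part ("RMax") approximates that modulus when the pool window covers the filter's oscillation period.

```python
import numpy as np

def gabor(n, freq, sigma):
    """Complex Gabor-like band-pass filter: Gaussian envelope x complex tone."""
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2.0 * sigma**2)) * np.exp(1j * freq * t)
    return g / np.linalg.norm(g)

def cconv(x, h):
    """Circular convolution via FFT, so integer shifts commute with it exactly."""
    hp = np.zeros(len(x), dtype=complex)
    hp[:len(h)] = h
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(hp))

STRIDE = 4
h = gabor(31, freq=1.0, sigma=5.0)

def cmod(x):
    """Complex convolution + modulus + subsampling ("CMod")."""
    return np.abs(cconv(x, h))[::STRIDE]

def rmax(x, pool=8):
    """Real convolution + sliding max pooling ("RMax"). The pool window
    covers at least one period (2*pi/freq samples) of the filter's
    oscillation, so the max tracks the envelope, i.e. the complex modulus."""
    y = cconv(x, h.real).real
    idx = np.arange(0, len(y) - pool + 1, STRIDE)
    return np.array([y[i:i + pool].max() for i in idx])

def shift_sensitivity(f, x, s):
    """Relative change of the feature map under an input shift of s samples."""
    a, b = f(x), f(np.roll(x, s))
    m = min(len(a), len(b))
    return np.linalg.norm(a[:m] - b[:m]) / np.linalg.norm(a[:m])

rng = np.random.default_rng(0)
x = rng.standard_normal(512)

# A shift by a multiple of the stride only translates the CMod feature map:
assert np.allclose(cmod(np.roll(x, STRIDE)), np.roll(cmod(x), 1))

print("CMod sensitivity to a 1-sample shift:", shift_sensitivity(cmod, x, 1))
print("RMax sensitivity to a 1-sample shift:", shift_sensitivity(rmax, x, 1))
```

For fractional-stride shifts the modulus feature map is only approximately invariant (its stability degrades as the filter's frequency approaches Nyquist, the aliasing regime the paper analyzes), which is why the sensitivities are printed rather than asserted to be zero.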
Related papers
- On the Sample Complexity of One Hidden Layer Networks with Equivariance, Locality and Weight Sharing [12.845681770287005]
Weight sharing, equivariance, and local filters, as in convolutional neural networks, are believed to contribute to the sample efficiency of neural networks.
We obtain lower and upper sample complexity bounds for a class of single hidden layer networks.
We show that the bound depends only on the norm of the filters, which is tighter than using the spectral norm of the respective matrix.
arXiv Detail & Related papers (2024-11-21T16:36:01Z)
- Non Commutative Convolutional Signal Models in Neural Networks: Stability to Small Deformations [111.27636893711055]
We study the filtering and stability properties of non-commutative convolutional filters.
Our results have direct implications for group neural networks, multigraph neural networks and quaternion neural networks.
arXiv Detail & Related papers (2023-10-05T20:27:22Z)
- Instabilities in Convnets for Raw Audio [1.5060156580765574]
We present a theory of large deviations for the energy response of FIR filterbanks with random Gaussian weights.
We find that deviations worsen for large filters and locally periodic input signals.
Numerical simulations align with our theory and suggest that the condition number of a convolutional layer follows a logarithmic scaling law.
arXiv Detail & Related papers (2023-09-11T22:34:06Z)
- From CNNs to Shift-Invariant Twin Models Based on Complex Wavelets [7.812210699650151]
We replace the first-layer combination "real-valued convolutions + max pooling" (RMax) with "complex-valued convolutions + modulus" (CMod).
We claim that CMod and RMax produce comparable outputs when the convolution kernel is band-pass and oriented.
Our approach achieves superior accuracy on ImageNet and CIFAR-10 classification tasks.
arXiv Detail & Related papers (2022-12-01T09:42:55Z)
- Understanding the Covariance Structure of Convolutional Filters [86.0964031294896]
Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions with notable structure.
We first observe that such learned filters have highly-structured covariance matrices, and we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks.
arXiv Detail & Related papers (2022-10-07T15:59:13Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Convolutional Filtering in Simplicial Complexes [13.604803091781926]
This paper proposes convolutional filtering for data whose structure can be modeled by a simplicial complex (SC).
SCs are mathematical tools that not only capture pairwise relationships, as graphs do, but also account for higher-order network structures.
arXiv Detail & Related papers (2022-01-29T13:13:57Z)
- Fourier Series Expansion Based Filter Parametrization for Equivariant Convolutions [73.33133942934018]
The 2D filter parametrization technique plays an important role when designing equivariant convolutions.
We propose a new equivariant convolution method based on this filter parametrization, named F-Conv.
F-Conv clearly outperforms previous filter-parametrization-based methods in the image super-resolution task.
arXiv Detail & Related papers (2021-07-30T10:01:52Z)
- Learnable Gabor modulated complex-valued networks for orientation robustness [4.024850952459758]
Learnable Gabor Convolutional Networks (LGCNs) are parameter-efficient and offer increased model complexity.
We investigate the robustness of complex valued convolutional weights with learned Gabor filters to enable orientation transformations.
arXiv Detail & Related papers (2020-11-23T21:22:27Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.