The Quest for Universal Master Key Filters in DS-CNNs
- URL: http://arxiv.org/abs/2509.11711v1
- Date: Mon, 15 Sep 2025 09:10:13 GMT
- Title: The Quest for Universal Master Key Filters in DS-CNNs
- Authors: Zahra Babaiee, Peyman M. Kiassari, Daniela Rus, Radu Grosu
- Abstract summary: We find 8 universal filters that depthwise separable convolutional networks inherently converge to. Our analysis reveals these filters are predominantly linear shifts (ax+b) of our discovered universal set. Remarkably, networks with these 8 unique frozen filters achieve over 80% ImageNet accuracy.
- Score: 52.091987605762135
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A recent study has proposed the "Master Key Filters Hypothesis" for convolutional neural network filters. This paper extends this hypothesis by radically constraining its scope to a single set of just 8 universal filters that depthwise separable convolutional networks inherently converge to. While conventional DS-CNNs employ thousands of distinct trained filters, our analysis reveals these filters are predominantly linear shifts (ax+b) of our discovered universal set. Through systematic unsupervised search, we extracted these fundamental patterns across different architectures and datasets. Remarkably, networks initialized with these 8 unique frozen filters achieve over 80% ImageNet accuracy, and even outperform models with thousands of trainable parameters when applied to smaller datasets. The identified master key filters closely match Difference of Gaussians (DoGs), Gaussians, and their derivatives, structures that are not only fundamental to classical image processing but also strikingly similar to receptive fields in mammalian visual systems. Our findings provide compelling evidence that depthwise convolutional layers naturally gravitate toward this fundamental set of spatial operators regardless of task or architecture. This work offers new insights for understanding generalization and transfer learning through the universal language of these master key filters.
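The two central claims above, that the master key filters resemble Gaussians and Differences of Gaussians (DoGs), and that trained depthwise filters are well approximated as linear shifts (ax+b) of them, can be illustrated with a short NumPy sketch. This is an illustrative reconstruction, not the paper's code; the kernel sizes, sigmas, and function names are my own choices:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.0):
    """2D Gaussian kernel, one of the structures the master key filters resemble."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def dog_kernel(size=7, sigma1=1.0, sigma2=2.0):
    """Difference of Gaussians (DoG), a classic center-surround operator."""
    return gaussian_kernel(size, sigma1) - gaussian_kernel(size, sigma2)

def fit_linear_shift(master, learned):
    """Least-squares fit of learned ~ a * master + b; returns (a, b)."""
    m = master.ravel()
    A = np.stack([m, np.ones_like(m)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, learned.ravel(), rcond=None)
    return a, b

# A "learned" filter that is exactly a scaled-and-shifted DoG
# fits the master filter with near-zero residual.
master = dog_kernel()
learned = 3.0 * master + 0.1
a, b = fit_linear_shift(master, learned)
```

In the paper's setting, `fit_linear_shift` would be run between each trained depthwise kernel and each of the 8 master filters; a small residual for some pair indicates the trained filter lies in the linear span described by the hypothesis.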
Related papers
- Modelling and analysis of the 8 filters from the "master key filters hypothesis" for depthwise-separable deep networks in relation to idealized receptive fields based on scale-space theory [7.990816079551592]
We first compute spatial spread measures in terms of weighted mean values and weighted variances of the absolute values of the learned filters. We then model the clustered "master key filters" in terms of difference operators applied to a spatial smoothing operation.
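The spread measures described above can be sketched as weighted first and second moments of the filter coordinates, with the normalized absolute filter values as weights. This is my own formulation of that description, not the paper's exact code:

```python
import numpy as np

def spatial_spread(filt):
    """Weighted mean and variance of spatial coordinates,
    weighted by the normalized absolute filter values."""
    w = np.abs(filt)
    w = w / w.sum()
    size = filt.shape[0]
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    mean_x = (w * xx).sum()
    mean_y = (w * yy).sum()
    var_x = (w * (xx - mean_x) ** 2).sum()
    var_y = (w * (yy - mean_y) ** 2).sum()
    return (mean_x, mean_y), (var_x, var_y)

# A delta filter concentrated at the center has zero mean offset and zero spread.
delta = np.zeros((5, 5))
delta[2, 2] = 1.0
(mx, my), (vx, vy) = spatial_spread(delta)
```

A broad Gaussian-like filter would instead yield variances close to its sigma squared, which is what makes these moments useful for relating learned filters to scale-space operators.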
arXiv Detail & Related papers (2025-09-16T07:04:45Z) - The Master Key Filters Hypothesis: Deep Filters Are General [51.900488744931785]
Convolutional neural network (CNN) filters become increasingly specialized in deeper layers. Recent observations of clusterable repeating patterns in depthwise separable CNNs (DS-CNNs) trained on ImageNet motivated this paper. Our analysis of DS-CNNs reveals that deep filters maintain generality, contradicting the expected transition to class-specific filters.
arXiv Detail & Related papers (2024-12-21T20:04:23Z) - GrassNet: State Space Model Meets Graph Neural Network [57.62885438406724]
Graph State Space Network (GrassNet) is a novel graph neural network with theoretical support that provides a simple yet effective scheme for designing arbitrary graph spectral filters.
To the best of our knowledge, our work is the first to employ SSMs for the design of GNN spectral filters.
Extensive experiments on nine public benchmarks reveal that GrassNet achieves superior performance in real-world graph modeling tasks.
arXiv Detail & Related papers (2024-08-16T07:33:58Z) - Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z) - Understanding the Covariance Structure of Convolutional Filters [86.0964031294896]
Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions with notable structure.
We first observe that such learned filters have highly-structured covariance matrices, and we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks.
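The covariance-based initialization described above can be sketched as follows: flatten the trained filters, fit a Gaussian to them, and sample new filters from it. The filter bank here is synthetic stand-in data (in practice the filters would come from a trained ConvMixer- or ConvNeXt-style model), and the shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for trained 7x7 depthwise filters from a small network;
# the column-wise scaling gives them a non-trivial covariance structure.
trained = rng.standard_normal((64, 7, 7)) * np.linspace(0.1, 1.0, 49).reshape(7, 7)

flat = trained.reshape(len(trained), -1)   # (num_filters, 49)
mean = flat.mean(axis=0)
cov = np.cov(flat, rowvar=False)           # (49, 49) covariance across filters

# Initialize a larger filter bank by sampling from the fitted Gaussian.
new_filters = rng.multivariate_normal(mean, cov, size=256).reshape(256, 7, 7)
```

The appeal of this scheme is that the covariance is cheap to estimate from a small trained network yet captures the dominant spatial structure, so the samples start much closer to trained filters than i.i.d. random initialization would.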
arXiv Detail & Related papers (2022-10-07T15:59:13Z) - Learning Versatile Convolution Filters for Efficient Visual Recognition [125.34595948003745]
This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct theoretical analysis on network complexity and an efficient convolution scheme is introduced.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters achieve accuracy comparable to that of the original filters.
arXiv Detail & Related papers (2021-09-20T06:07:14Z) - Sparsistent filtering of comovement networks from high-dimensional data [0.0]
We introduce a new technique to filter large-dimensional networks out of the dynamical behavior of the constituent nodes.
As opposed to the well known network filters that rely on preserving key topological properties of the realized network, our method treats the spectrum as the fundamental object and preserves spectral properties.
arXiv Detail & Related papers (2021-01-22T15:44:41Z) - Self-grouping Convolutional Neural Networks [30.732298624941738]
We propose a novel method of designing self-grouping convolutional neural networks, called SG-CNN.
For each filter, we first evaluate the importance of its input channels to obtain importance vectors.
Using the resulting data-dependent centroids, we prune the less important connections, which implicitly minimizes the accuracy loss of the pruning.
arXiv Detail & Related papers (2020-09-29T06:24:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.