Enhancing Generalization in Convolutional Neural Networks through Regularization with Edge and Line Features
- URL: http://arxiv.org/abs/2410.16897v1
- Date: Tue, 22 Oct 2024 11:02:32 GMT
- Title: Enhancing Generalization in Convolutional Neural Networks through Regularization with Edge and Line Features
- Authors: Christoph Linse, Beatrice Brückner, Thomas Martinetz
- Abstract summary: This paper proposes a novel regularization approach that biases Convolutional Neural Networks (CNNs) toward edge and line features in their hidden layers.
Rather than learning arbitrary kernels, we constrain the convolution layers to edge and line detection kernels.
Test accuracies improve by margins of 5-11 percentage points across four challenging fine-grained classification datasets.
- Abstract: This paper proposes a novel regularization approach to bias Convolutional Neural Networks (CNNs) toward utilizing edge and line features in their hidden layers. Rather than learning arbitrary kernels, we constrain the convolution layers to edge and line detection kernels. This intentional bias regularizes the models, improving generalization performance, especially on small datasets. As a result, test accuracies improve by margins of 5-11 percentage points across four challenging fine-grained classification datasets with limited training data and an identical number of trainable parameters. Instead of traditional convolutional layers, we use Pre-defined Filter Modules, which convolve input data using a fixed set of 3x3 pre-defined edge and line filters. A subsequent ReLU erases information that did not trigger any positive response. Next, a 1x1 convolutional layer generates linear combinations. Notably, the pre-defined filters are a fixed component of the architecture, remaining unchanged during the training phase. Our findings reveal that the number of dimensions spanned by the set of pre-defined filters has a low impact on recognition performance. However, the size of the set of filters matters, with nine or more filters providing optimal results.
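The Pre-defined Filter Module described in the abstract (a fixed 3x3 edge/line convolution, then a ReLU, then a learned 1x1 linear combination) can be sketched in NumPy. The nine kernels below are illustrative Sobel-style edge and line detectors; the paper does not list its exact filter set here, so treat them as placeholders:

```python
import numpy as np

# Nine fixed 3x3 edge/line kernels (illustrative, not the paper's exact set).
KERNELS = np.array([
    [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],       # vertical edge (Sobel x)
    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],       # horizontal edge (Sobel y)
    [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]],       # diagonal edge
    [[2, 1, 0], [1, 0, -1], [0, -1, -2]],       # anti-diagonal edge
    [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]],    # vertical line
    [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]],    # horizontal line
    [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]],    # diagonal line
    [[-1, -1, 2], [-1, 2, -1], [2, -1, -1]],    # anti-diagonal line
    [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]],  # Laplacian-style spot
], dtype=np.float32)

def predefined_filter_module(x, weights_1x1):
    """x: (H, W) single-channel image; weights_1x1: (out_ch, 9) learned mix.
    Fixed 3x3 conv -> ReLU -> learned 1x1 linear combination."""
    H, W = x.shape
    k = KERNELS.shape[0]
    # 'valid' cross-correlation with each fixed (non-trainable) kernel
    feat = np.zeros((k, H - 2, W - 2), dtype=np.float32)
    for f in range(k):
        for i in range(H - 2):
            for j in range(W - 2):
                feat[f, i, j] = np.sum(x[i:i+3, j:j+3] * KERNELS[f])
    feat = np.maximum(feat, 0.0)   # ReLU erases non-positive responses
    # 1x1 convolution = per-pixel linear combination over the k channels;
    # only these mixing weights would be trained.
    return np.tensordot(weights_1x1, feat, axes=([1], [0]))

# Usage: a vertical step edge should fire the vertical-edge kernel only.
img = np.zeros((6, 6), dtype=np.float32)
img[:, 3:] = 1.0
w = np.eye(9, dtype=np.float32)    # identity mix, so channels stay inspectable
out = predefined_filter_module(img, w)
print(out.shape)  # (9, 4, 4)
```

With the identity mix, channel 0 (vertical edge) responds positively to the step edge while channel 1 (horizontal edge) stays at zero, which is the behavior the module relies on: responses that do not match any pre-defined feature are erased by the ReLU before the 1x1 layer recombines what remains.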
Related papers
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler(KDFS) with Masked Filter Modeling(MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z) - The Power of Linear Combinations: Learning with Random Convolutions [2.0305676256390934]
Modern CNNs can achieve high test accuracies without ever updating their randomly initialized (spatial) convolution filters.
These combinations of random filters can implicitly regularize the resulting operations.
Although we only observe relatively small gains from learning $3\times 3$ convolutions, the learning gains increase proportionally with kernel size.
arXiv Detail & Related papers (2023-01-26T19:17:10Z) - Perturb Initial Features: Generalization of Neural Networks Under Sparse Features for Semi-supervised Node Classification [1.3190581566723918]
We propose a novel data augmentation strategy for graph neural networks (GNNs)
By flipping both the initial features and hyperplane, we create additional space for training, which leads to more precise updates of the learnable parameters.
Experiments on real-world datasets show that our proposed technique increases node classification accuracy by up to 46.5% in relative terms.
arXiv Detail & Related papers (2022-11-28T05:54:24Z) - Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z) - Resolution learning in deep convolutional networks using scale-space theory [31.275270391367425]
Resolution in deep convolutional neural networks (CNNs) is typically bounded by the receptive field size through filter sizes, and subsampling layers or strided convolutions on feature maps.
We propose to do away with hard-coded resolution hyperparameters and aim to learn the appropriate resolution from data.
We use scale-space theory to obtain a self-similar parametrization of filters and make use of the N-Jet: a truncated Taylor series to approximate a filter by a learned combination of Gaussian derivative filters.
arXiv Detail & Related papers (2021-06-07T08:23:02Z) - Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
arXiv Detail & Related papers (2021-06-02T19:15:34Z) - Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework AdaGNN with a well-smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z) - Filter Pruning using Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks [3.5636461829966093]
We propose a filter pruning method using the hierarchical group sparse regularization.
It can remove more than 50% of the parameters of ResNet for CIFAR-10 with only a 0.3% decrease in test accuracy.
Likewise, 34% of the parameters of ResNet are removed for TinyImageNet-200 with higher accuracy than the baseline network.
arXiv Detail & Related papers (2020-11-04T16:29:41Z) - Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z) - Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
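The claim in "The Power of Linear Combinations" above, that learned mixing of fixed random filters can stand in for learned spatial filters, follows from the linearity of convolution: mixing filter responses with a 1x1 layer is equivalent to convolving with the mixed kernel. A minimal NumPy sketch (not code from that paper) checks this: nine random 3x3 kernels span the 9-dimensional kernel space almost surely, so a least-squares-fitted combination reproduces a target Sobel kernel exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
# Nine fixed random 3x3 kernels (never trained), flattened into a 9x9 basis
# whose columns span R^9 almost surely.
random_kernels = rng.standard_normal((9, 3, 3))
basis = random_kernels.reshape(9, 9).T

# Target: a Sobel-x edge kernel the network "wants" to express.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)

# A learned 1x1 layer mixes the random filters' outputs; by linearity of
# convolution, mixing responses equals convolving with the mixed kernel,
# so it suffices to solve for the mixing coefficients in kernel space.
coeffs, *_ = np.linalg.lstsq(basis, sobel_x.ravel(), rcond=None)
reconstructed = (coeffs @ random_kernels.reshape(9, 9)).reshape(3, 3)

print(np.allclose(reconstructed, sobel_x))  # expected: True (up to float error)
```

This is why only the 1x1 combinations need training: with at least nine fixed 3x3 filters of any full-rank kind, random or pre-defined, the mixing layer can realize any 3x3 spatial filter.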
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.