On the Performance of Convolutional Neural Networks under High and Low
Frequency Information
- URL: http://arxiv.org/abs/2011.06496v1
- Date: Fri, 30 Oct 2020 17:54:45 GMT
- Title: On the Performance of Convolutional Neural Networks under High and Low
Frequency Information
- Authors: Roshan Reddy Yedla and Shiv Ram Dubey
- Abstract summary: We study the performance of CNN models over the high and low frequency information of the images.
We propose a stochastic filtering based data augmentation during training.
A satisfactory performance improvement has been observed in terms of robustness and high and low frequency generalization.
- Score: 13.778851745408133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) have shown very promising performance in
recent years for different problems, including object recognition, face
recognition, medical image analysis, etc. However, the trained CNN models are
generally tested on a test set that is very similar to the training set.
Generalizability and robustness are very important aspects for making CNN
models work on unseen data. In this letter, we study the
performance of CNN models over the high and low frequency information of the
images. We observe that the trained CNN fails to generalize over the high and
low frequency images. In order to make the CNN robust against high and low
frequency images, we propose the stochastic filtering based data augmentation
during training. A satisfactory performance improvement has been observed in
terms of the high and low frequency generalization and robustness with the
proposed stochastic filtering based data augmentation approach. The
experiments are performed using a ResNet50 model on the CIFAR-10 dataset
and a ResNet101 model on the Tiny-ImageNet dataset.
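The abstract does not spell out implementation details, but the core idea of stochastic filtering based data augmentation can be sketched roughly as follows: with some probability, a training image is replaced by a low-pass (Gaussian-blurred) or high-pass (image minus its blur) version with a randomly drawn cutoff. The probability, sigma range, and the choice of a Gaussian kernel below are illustrative assumptions, not the authors' exact recipe.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stochastic_frequency_filter(image, p_skip=0.5, sigma_range=(0.5, 3.0), rng=None):
    """Randomly low-pass or high-pass filter an image (H, W, C) in [0, 1].

    With probability p_skip the image is returned unchanged; otherwise a
    Gaussian low-pass or a high-pass (original minus low-pass) version is
    returned, each with probability 0.5. Parameter values are illustrative.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < p_skip:
        return image
    sigma = rng.uniform(*sigma_range)
    # Blur each channel independently (sigma 0 along the channel axis).
    low = gaussian_filter(image, sigma=(sigma, sigma, 0))
    if rng.random() < 0.5:
        return low                            # low-frequency view
    high = image - low                        # high-frequency residual
    return np.clip(high + 0.5, 0.0, 1.0)      # re-center into [0, 1]
```

In a training pipeline, such a transform would sit alongside the usual CIFAR-10 / Tiny-ImageNet crops and flips, before normalization.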
Related papers
- Modeling & Evaluating the Performance of Convolutional Neural Networks for Classifying Steel Surface Defects [0.0]
Recently, outstanding identification rates in image classification tasks have been achieved by convolutional neural networks (CNNs).
DenseNet201 achieved the highest detection rate on the NEU dataset, at 98.37 percent.
arXiv Detail & Related papers (2024-06-19T08:14:50Z)
- InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions [95.94629864981091]
This work presents a new large-scale CNN-based foundation model, termed InternImage, which can obtain the gain from increasing parameters and training data like ViTs.
The proposed InternImage reduces the strict inductive bias of traditional CNNs and makes it possible to learn stronger and more robust patterns with large-scale parameters from massive data like ViTs.
arXiv Detail & Related papers (2022-11-10T18:59:04Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
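The Decoupled-Mixup entry above only describes the idea at a high level. A very rough sketch of region-wise mixing is given below; the saliency-threshold split into "discriminative" and "noise-prone" regions is an assumed proxy for illustration, not the paper's actual decoupling procedure, and the label handling is left out because the summary does not specify it.

```python
import torch

def decoupled_mix(x_a, x_b, saliency_a, lam=0.7, q=0.5):
    """Rough sketch: keep the (assumed) discriminative region of x_a intact and
    mix only its noise-prone region with a second image x_b.

    x_a, x_b:    image tensors of shape (C, H, W)
    saliency_a:  (H, W) saliency map for x_a (e.g. gradient magnitude); how the
                 real method decouples regions is not stated in the summary.
    lam:         mixup coefficient used inside the noise-prone region
    q:           quantile separating discriminative from noise-prone pixels
    """
    thresh = torch.quantile(saliency_a, q)
    discriminative = (saliency_a >= thresh).unsqueeze(0)   # (1, H, W) mask
    mixed = lam * x_a + (1.0 - lam) * x_b                  # plain mixup values
    # Discriminative pixels keep x_a; noise-prone pixels take the mixed values.
    return torch.where(discriminative, x_a, mixed)
```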
- Lost Vibration Test Data Recovery Using Convolutional Neural Network: A Case Study [0.0]
This paper proposes a CNN-based algorithm for recovering lost vibration data of the Alamosa Canyon Bridge, a real structure.
Three different CNN models were considered to predict the data of one and two malfunctioning sensors.
The accuracy of the model was increased by adding a convolutional layer.
arXiv Detail & Related papers (2022-04-11T23:24:03Z)
- New SAR target recognition based on YOLO and very deep multi-canonical correlation analysis [0.1503974529275767]
This paper proposes a robust feature extraction method for SAR image target classification by adaptively fusing effective features from different CNN layers.
Experiments on the MSTAR dataset demonstrate that the proposed method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T18:10:26Z)
- Receptive Field Regularization Techniques for Audio Classification and Tagging with Deep Convolutional Neural Networks [7.9495796547433395]
We show that tuning the Receptive Field (RF) of CNNs is crucial to their generalization.
We propose several systematic approaches to control the RF of CNNs and systematically test the resulting architectures.
arXiv Detail & Related papers (2021-05-26T08:36:29Z)
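The receptive field (RF) regularization entry above argues that the RF of a CNN is crucial to generalization. As background only, the RF size of a stack of convolution/pooling layers can be computed with the standard recurrence below; the three-layer example is made up for illustration and is not an architecture from that paper.

```python
def receptive_field(layers):
    """Compute the receptive field of a stack of conv/pool layers.

    layers: list of (kernel_size, stride, dilation) tuples, input to output.
    Uses the standard recurrence  r <- r + (k - 1) * d * j,  j <- j * s,
    where j is the cumulative stride ("jump") of the current feature map.
    """
    r, j = 1, 1
    for k, s, d in layers:
        r += (k - 1) * d * j
        j *= s
    return r

# Hypothetical example: two 3x3 stride-1 convs followed by a 3x3 stride-2 conv.
print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 2, 1)]))  # -> 7
```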
- Examining and Mitigating Kernel Saturation in Convolutional Neural Networks using Negative Images [0.8594140167290097]
We analyze the effect of convolutional kernel saturation in CNNs.
We propose a simple data augmentation technique to mitigate saturation and increase classification accuracy, by supplementing negative images to the training dataset.
Our results show that CNNs are indeed susceptible to convolutional kernel saturation and that supplementing negative images to the training dataset can offer a statistically significant increase in classification accuracies.
arXiv Detail & Related papers (2021-05-10T06:06:49Z)
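The kernel-saturation entry above proposes supplementing the training set with negative images. For 8-bit images the negative is simply the pixel-wise complement, as in this minimal sketch (the helper name and array layout are illustrative assumptions).

```python
import numpy as np

def add_negative_images(images, labels):
    """Append pixel-wise negatives (255 - x) of uint8 images to a training set.

    images: uint8 array of shape (N, H, W, C); labels: array of shape (N,).
    The negatives keep the original labels, following the augmentation idea
    described in the summary above.
    """
    negatives = 255 - images
    return (np.concatenate([images, negatives], axis=0),
            np.concatenate([labels, labels], axis=0))
```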
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
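The BreakingBED entry above studies white-box and black-box attacks but does not name them here. As a generic illustration of a white-box attack (not necessarily one used in that paper), the classic fast gradient sign method (FGSM) can be sketched as follows; the epsilon value is an arbitrary example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: a single-step white-box attack.

    Perturbs input x (assumed to lie in [0, 1]) in the direction of the sign
    of the loss gradient. Shown purely as a generic white-box example.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```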
- Deep learning for gravitational-wave data analysis: A resampling white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited for noise reduction than for generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
- Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement [53.47564132861866]
We find that a hybrid architecture, namely CNN-TT, is capable of maintaining a good quality performance with a reduced model parameter size.
CNN-TT is composed of several convolutional layers at the bottom for feature extraction to improve speech quality.
arXiv Detail & Related papers (2020-07-25T22:21:05Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
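The Curriculum By Smoothing entry describes low-pass filtering of CNN feature maps with a filter that is relaxed as training progresses. A rough PyTorch sketch of that idea is below; the Gaussian-kernel construction, kernel size, and linear sigma decay schedule are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel2d(sigma, size=5):
    """Build a (1, 1, size, size) Gaussian kernel for depthwise smoothing."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = g.unsqueeze(1) * g.unsqueeze(0)
    return (k / k.sum()).view(1, 1, size, size)

def smooth_feature_map(feat, sigma):
    """Low-pass filter a feature map (N, C, H, W) channel-wise (depthwise conv)."""
    c = feat.shape[1]
    kernel = gaussian_kernel2d(sigma).to(feat).repeat(c, 1, 1, 1)
    return F.conv2d(feat, kernel, padding=2, groups=c)

def sigma_schedule(epoch, total_epochs, sigma_max=1.0, sigma_min=1e-3):
    """Assumed curriculum: sigma decays linearly, so less and less information
    is filtered out of the feature maps as training progresses."""
    t = epoch / max(total_epochs - 1, 1)
    return sigma_max + t * (sigma_min - sigma_max)
```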
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.