Effects of Degradations on Deep Neural Network Architectures
- URL: http://arxiv.org/abs/1807.10108v5
- Date: Wed, 29 Mar 2023 16:48:45 GMT
- Title: Effects of Degradations on Deep Neural Network Architectures
- Authors: Prasun Roy, Subhankar Ghosh, Saumik Bhattacharya, Umapada Pal
- Abstract summary: Deep convolutional neural networks (CNNs) have influenced recent advances in large-scale image classification.
The behavior of such networks in the presence of a degrading signal (noise) is mostly unexplored.
This paper presents an extensive performance analysis of six deep architectures for image classification on the six most common image degradation models.
- Score: 18.79337509555511
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep convolutional neural networks (CNNs) have massively influenced recent
advances in large-scale image classification. More recently, a dynamic routing
algorithm with capsules (groups of neurons) has shown state-of-the-art
recognition performance. However, the behavior of such networks in the presence
of a degrading signal (noise) is mostly unexplored. An analytical study on
different network architectures toward noise robustness is essential for
selecting the appropriate model in a specific application scenario. This paper
presents an extensive performance analysis of six deep architectures for image
classification on the six most common image degradation models. In this study, we
have compared VGG-16, VGG-19, ResNet-50, Inception-v3, MobileNet and CapsuleNet
architectures on Gaussian white, Gaussian color, salt-and-pepper, Gaussian
blur, motion blur and JPEG compression noise models.
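The evaluation protocol is straightforward to emulate: corrupt a clean test set with each degradation model at increasing severity, then re-run each classifier. Below is a minimal sketch of the six degradation models named in the abstract; the parameter values, and the reading of "white" as a shared-across-channels noise field versus "color" as per-channel noise, are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch of the six degradation models, assuming float RGB images
# in [0, 1] with shape (H, W, 3). All parameter values are illustrative.
import numpy as np
import cv2

def gaussian_white(img, sigma=0.1):
    # One noise field shared by all channels (one common reading of "white").
    noise = np.random.normal(0.0, sigma, img.shape[:2])[..., None]
    return np.clip(img + noise, 0.0, 1.0)

def gaussian_color(img, sigma=0.1):
    # Independent noise per channel (the "color" variant).
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def salt_and_pepper(img, density=0.05):
    out, mask = img.copy(), np.random.rand(*img.shape[:2])
    out[mask < density / 2] = 0.0          # pepper
    out[mask > 1.0 - density / 2] = 1.0    # salt
    return out

def gaussian_blur(img, ksize=7, sigma=2.0):
    # ksize must be odd for OpenCV.
    return cv2.GaussianBlur(img.astype(np.float32), (ksize, ksize), sigma)

def motion_blur(img, length=9):
    k = np.zeros((length, length), dtype=np.float32)
    k[length // 2, :] = 1.0 / length       # horizontal streak (assumed direction)
    return cv2.filter2D(img.astype(np.float32), -1, k)

def jpeg_compress(img, quality=20):
    u8 = (img * 255).astype(np.uint8)
    _, buf = cv2.imencode(".jpg", u8, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR).astype(np.float32) / 255.0
```

Sweeping sigma, density, kernel size, and JPEG quality over a fixed test set and plotting top-1 accuracy against each severity level reproduces the paper's protocol in outline.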
Related papers
- Efficient and Accurate Hyperspectral Image Demosaicing with Neural Network Architectures [3.386560551295746]
This study investigates the effectiveness of neural network architectures in hyperspectral image demosaicing.
We introduce a range of network models and modifications, and compare them with classical methods and existing reference network approaches.
Results indicate that our networks outperform or match reference models on both datasets, demonstrating exceptional performance.
arXiv Detail & Related papers (2023-12-21T08:02:49Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Impact of Scaled Image on Robustness of Deep Neural Networks [0.0]
Scaling raw images creates out-of-distribution data, which makes scaling a possible adversarial attack for fooling networks.
In this work, we propose a scaling-distortion dataset, ImageNet-CS, built by scaling a subset of the ImageNet Challenge dataset by different multiples (a minimal scaling sketch follows this entry).
arXiv Detail & Related papers (2022-09-02T08:06:58Z)
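A minimal sketch of the scaling distortion this entry describes, using Pillow; the scale factors and the resampling filter are assumptions, not the published dataset settings.

```python
# Hypothetical sketch of building scaled, "ImageNet-CS"-style test variants.
from PIL import Image

SCALES = [2, 3, 4, 5]  # assumed multiples; the paper's factors may differ

def scaled_variants(path):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    for s in SCALES:
        # Upscaling resamples the image and shifts its statistics
        # out of distribution relative to the training data.
        yield s, img.resize((w * s, h * s), Image.BILINEAR)
```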
- SmoothNets: Optimizing CNN architecture design for differentially private deep learning [69.10072367807095]
DP-SGD requires clipping and noising of per-sample gradients.
This reduces model utility compared to non-private training.
We distill a new model architecture, termed SmoothNet, which is characterized by increased robustness to the challenges of DP-SGD training (a per-sample clipping sketch follows this entry).
arXiv Detail & Related papers (2022-05-09T07:51:54Z)
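A minimal sketch of the DP-SGD step the entry refers to, per-sample gradient clipping followed by Gaussian noising, written with a plain microbatching loop for clarity; `clip_norm` and `noise_multiplier` are illustrative values, and libraries such as Opacus vectorize this in practice.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, optimizer,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update: clip each per-sample gradient to clip_norm,
    sum, add Gaussian noise, then average."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                     # microbatch of size 1
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for acc, g in zip(grad_sum, grads):
            acc += g * scale                     # clipped per-sample gradient
    for p, acc in zip(params, grad_sum):
        noise = torch.randn_like(p) * noise_multiplier * clip_norm
        p.grad = (acc + noise) / len(xs)         # noised average gradient
    optimizer.step()
```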
- Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis [148.16279746287452]
We propose a swin-conv block to incorporate the local modeling ability of the residual convolutional layer and the non-local modeling ability of the Swin Transformer block.
For the training data synthesis, we design a practical noise degradation model which takes into consideration different kinds of noise.
Experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-24T18:11:31Z)
- Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional architectures (a noise-injection sketch follows this entry).
arXiv Detail & Related papers (2021-10-26T03:28:36Z)
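A hedged sketch of the noise-at-multiple-stages idea: wrap existing stages of a network so Gaussian noise is injected after each during training. Placement and noise scale are assumptions; the SDNN paper's exact mechanism may differ.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class NoisyStage(nn.Module):
    """Wraps a sub-module and adds Gaussian noise to its output at train time."""
    def __init__(self, stage, sigma=0.1):    # sigma is an assumed value
        super().__init__()
        self.stage, self.sigma = stage, sigma

    def forward(self, x):
        x = self.stage(x)
        if self.training:                     # inject noise only during training
            x = x + torch.randn_like(x) * self.sigma
        return x

# Example: inject noise after every residual stage of a torchvision ResNet-18.
net = resnet18()
for name in ["layer1", "layer2", "layer3", "layer4"]:
    setattr(net, name, NoisyStage(getattr(net, name)))
```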
- Dynamic Proximal Unrolling Network for Compressive Sensing Imaging [29.00266254916676]
We present a dynamic proximal unrolling network (dubbed DPUNet), which can handle a variety of measurement matrices with a single model and without retraining.
Specifically, DPUNet exploits both the embedded physical model, through gradient descent steps, and an image prior, imposed through a learned dynamic proximal mapping (a one-iteration sketch follows this entry).
Experimental results demonstrate that the proposed DPUNet can effectively handle multiple CSI modalities under varying sampling ratios and noise levels with only one model.
arXiv Detail & Related papers (2021-07-23T03:04:44Z)
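A one-iteration sketch of the proximal unrolling the entry describes: a gradient step on the data-fidelity term followed by a learned proximal mapping. `ProxNet` is a hypothetical stand-in; DPUNet's dynamic, parameter-generating proximal network is more elaborate.

```python
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Tiny residual CNN standing in for the learned proximal mapping."""
    def __init__(self, ch=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

def unrolled_recon(y, A, At, prox, steps=8, alpha=0.5):
    # y: measurements; A / At: forward operator and its adjoint (callables).
    x = At(y)                               # crude initialization
    for _ in range(steps):
        x = x - alpha * At(A(x) - y)        # gradient step on ||Ax - y||^2 / 2
        x = prox(x)                         # learned proximal mapping
    return x
```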
- Perceptually Optimizing Deep Image Compression [53.705543593594285]
Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks.
We propose a different proxy approach to optimize image analysis networks against quantitative perceptual models.
arXiv Detail & Related papers (2020-07-03T14:33:28Z)
- Impact of ImageNet Model Selection on Domain Adaptation [26.016647703500883]
We investigate how different ImageNet models affect transfer accuracy on domain adaptation problems.
A higher-accuracy ImageNet model produces better features and leads to higher accuracy on domain adaptation problems.
We also examine the architecture of each neural network to find the best layer for feature extraction.
arXiv Detail & Related papers (2020-02-06T23:58:23Z)
- Inferring Convolutional Neural Networks' accuracies from their architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that the attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict, before training, whether a network will exceed a threshold accuracy (a toy sketch follows this entry).
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
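A toy sketch of the last entry's idea: predict, before training, whether an architecture will clear a threshold accuracy from hand-crafted attributes. The feature set, labels, and choice of a random forest are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed features per architecture: [depth, params (M), conv layers,
# dense layers, max width]; labels mark whether the trained network
# exceeded the accuracy threshold (toy values, not the paper's data).
X = np.array([[16, 138.0, 13, 3,  512],
              [50,  25.6, 49, 1, 2048],
              [28,   3.5, 27, 1, 1024]])
y = np.array([1, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[19, 143.0, 16, 3, 512]]))  # query an unseen architecture
```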
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.