Sparsifying and Down-scaling Networks to Increase Robustness to
Distortions
- URL: http://arxiv.org/abs/2006.11389v1
- Date: Mon, 8 Jun 2020 03:58:27 GMT
- Title: Sparsifying and Down-scaling Networks to Increase Robustness to
Distortions
- Authors: Sergey Tarasenko
- Abstract summary: Streaming Network (STNet) is a novel architecture capable of robust classification of distorted images.
Recent results show that STNet is robust to 20 types of noise and distortions.
New STNets exhibit higher or equal accuracy in comparison with original networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has been shown that perfectly trained networks exhibit a drastic
reduction in performance when presented with distorted images. Streaming Network
(STNet) is a novel architecture capable of robust classification of distorted
images while being trained on undistorted images. The distortion robustness is
enabled by means of sparse input and isolated parallel streams with decoupled
weights. Recent results show that STNet is robust to 20 types of noise and
distortions. STNet exhibits state-of-the-art performance for classification of
low-light images, while being much smaller than other networks. In this
paper, we construct STNets by using scaled versions (the number of filters in
each layer is reduced by a factor of n) of popular networks like VGG16, ResNet50
and MobileNetV2 as parallel streams. These new STNets are tested on several
datasets. Our results indicate that the more efficient (fewer FLOPs) new STNets
exhibit higher or equal accuracy in comparison with the original networks.
Considering the diversity of datasets and networks used for the tests, we
conclude that this new type of STNet is an efficient tool for robust
classification of distorted images.
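The construction described in the abstract can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `ToyStream`, `ToySTNet`, and the linear stand-in for a down-scaled CNN backbone are all illustrative names and simplifications. It shows the three stated ingredients: filter counts reduced by a factor of n, the input split into sparse disjoint slices, and isolated parallel streams with decoupled weights whose logits are combined.

```python
import random

def scale_filters(base_filters, n):
    """Reduce each layer's filter count by a factor of n (keep at least 1)."""
    return [max(1, f // n) for f in base_filters]

def sparsify(image, num_streams, seed=0):
    """Randomly partition pixels into num_streams disjoint sparse inputs."""
    rng = random.Random(seed)
    streams = [[0.0] * len(image) for _ in range(num_streams)]
    for i, px in enumerate(image):
        streams[rng.randrange(num_streams)][i] = px
    return streams

class ToyStream:
    """Stand-in for one down-scaled backbone; weights are decoupled
    (each stream draws its own, independent parameters)."""
    def __init__(self, width, num_classes, seed):
        rng = random.Random(seed)
        self.w = [[rng.uniform(-1.0, 1.0) for _ in range(width)]
                  for _ in range(num_classes)]

    def __call__(self, x):
        # One linear layer stands in for a full scaled CNN stream.
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w]

class ToySTNet:
    """Isolated parallel streams over sparse input slices; logits summed."""
    def __init__(self, width, num_classes, num_streams):
        self.streams = [ToyStream(width, num_classes, seed=s)
                        for s in range(num_streams)]

    def __call__(self, image):
        parts = sparsify(image, len(self.streams))
        logits = [stream(x) for stream, x in zip(self.streams, parts)]
        return [sum(per_class) for per_class in zip(*logits)]
```

For example, `scale_filters([64, 128, 256], 4)` yields `[16, 32, 64]`, mirroring the paper's down-scaling of VGG16/ResNet50/MobileNetV2 filter counts before the scaled copies are used as parallel streams.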
Related papers
- ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object [78.58860252442045]
We introduce a generative model as a data source for hard images that benchmark deep models' robustness.
We are able to generate images with more diversified backgrounds, textures, and materials than any prior work, where we term this benchmark as ImageNet-D.
Our work suggests that diffusion models can be an effective source to test vision models.
arXiv Detail & Related papers (2024-03-27T17:23:39Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - Inverse Image Frequency for Long-tailed Image Recognition [59.40098825416675]
We propose a novel de-biasing method named Inverse Image Frequency (IIF)
IIF is a multiplicative margin adjustment transformation of the logits in the classification layer of a convolutional neural network.
Our experiments show that IIF surpasses the state of the art on many long-tailed benchmarks.
arXiv Detail & Related papers (2022-09-11T13:31:43Z) - Impact of Scaled Image on Robustness of Deep Neural Networks [0.0]
Scaling the raw images creates out-of-distribution data, which makes it a possible adversarial attack to fool the networks.
In this work, we propose a Scaling-distortion dataset ImageNet-CS by Scaling a subset of the ImageNet Challenge dataset by different multiples.
arXiv Detail & Related papers (2022-09-02T08:06:58Z) - Connection Reduction Is All You Need [0.10878040851637998]
Empirical research shows that simply stacking convolutional layers does not make the network train better.
We propose two new algorithms to connect layers.
ShortNet1 has a 5% lower test error rate and 25% faster inference time than the baseline.
arXiv Detail & Related papers (2022-08-02T13:00:35Z) - Frequency Disentangled Residual Network [11.388328269522006]
Residual networks (ResNets) have been utilized for various computer vision and image processing applications.
A residual block consists of a few convolutional layers with trainable parameters, which can lead to overfitting.
A frequency disentangled residual network (FDResNet) is proposed to tackle these issues.
arXiv Detail & Related papers (2021-09-26T10:52:18Z) - DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and
Transformers [105.74546828182834]
We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs with diverse difficulty levels.
We present dynamic slimmable network (DS-Net) and dynamic slice-able network (DS-Net++) by input-dependently adjusting filter numbers of CNNs and multiple dimensions in both CNNs and transformers.
arXiv Detail & Related papers (2021-09-21T09:57:21Z) - Understanding Robustness of Transformers for Image Classification [34.51672491103555]
Vision Transformer (ViT) has surpassed ResNets for image classification.
Details of the Transformer architecture lead one to wonder whether these networks are as robust.
We find that ViT models are at least as robust as the ResNet counterparts on a broad range of perturbations.
arXiv Detail & Related papers (2021-03-26T16:47:55Z) - Scalable Visual Transformers with Hierarchical Pooling [61.05787583247392]
We propose a Hierarchical Visual Transformer (HVT) which progressively pools visual tokens to shrink the sequence length.
This brings a great benefit by allowing the dimensions of depth/width/resolution/patch size to be scaled without introducing extra computational complexity.
Our HVT outperforms the competitive baselines on ImageNet and CIFAR-100 datasets.
arXiv Detail & Related papers (2021-03-19T03:55:58Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Applications of the Streaming Networks [0.2538209532048866]
Streaming Networks (STnets) have been introduced as a mechanism for robust classification of noise-corrupted images.
In this paper, we demonstrate that STnets are capable of high accuracy classification of images corrupted with noise.
We also introduce a new type of STnets called Hybrid STnets.
arXiv Detail & Related papers (2020-03-27T08:13:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.