Streaming Networks: Increase Noise Robustness and Filter Diversity via
Hard-wired and Input-induced Sparsity
- URL: http://arxiv.org/abs/2004.03334v2
- Date: Thu, 9 Apr 2020 03:50:07 GMT
- Title: Streaming Networks: Increase Noise Robustness and Filter Diversity via
Hard-wired and Input-induced Sparsity
- Authors: Sergey Tarasenko and Fumihiko Takahashi
- Abstract summary: Recent studies show that a CNN's recognition accuracy drops drastically if images are noise-corrupted.
We introduce a novel network architecture called Streaming Networks.
Results indicate that only the presence of both hard-wired and input-induced sparsity enables robust recognition of noisy images.
- Score: 0.2538209532048866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CNNs have achieved state-of-the-art performance in many applications.
Recent studies illustrate that a CNN's recognition accuracy drops drastically
if images are noise-corrupted. We focus on the problem of robust recognition
of noise-corrupted images. We introduce a novel network architecture called
Streaming Networks. Each stream takes a certain intensity slice of the
original image as its input, and the parameters of each stream are trained
independently. We use network capacity, hard-wired sparsity, and
input-induced sparsity as the dimensions of our experiments. The results
indicate that only the presence of both hard-wired and input-induced sparsity
enables robust recognition of noisy images. Streaming Nets is the only
architecture that has both types of sparsity and exhibits higher robustness
to noise. Finally, to illustrate the increase in filter diversity, we show
that the distribution of filter weights in the first convolutional layer
gradually approaches a uniform distribution as the degree of hard-wired and
input-induced sparsity and the network capacity increase.
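The following is a minimal PyTorch sketch of the streaming idea described in the abstract: the input is split into non-overlapping intensity slices and each slice is fed to its own small CNN stream with unshared parameters. The module names, layer sizes, slice boundaries, and number of streams are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Streaming Networks idea: intensity slices of the
# input image are processed by independent CNN streams (no weight sharing),
# and stream features are concatenated for classification.
import torch
import torch.nn as nn


def intensity_slices(x: torch.Tensor, n_slices: int):
    """Split an image batch (values in [0, 1]) into intensity bands.

    Zeroing pixels outside a band is one simple way to realise the
    'input-induced sparsity' mentioned in the abstract.
    """
    bands = []
    edges = torch.linspace(0.0, 1.0, n_slices + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        bands.append(x * mask)
    return bands


class Stream(nn.Module):
    """One independent CNN stream with its own, unshared parameters."""

    def __init__(self, in_ch: int, feat_dim: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.proj(self.body(x).flatten(1))


class StreamingNet(nn.Module):
    def __init__(self, n_slices: int = 3, in_ch: int = 3, n_classes: int = 10):
        super().__init__()
        self.n_slices = n_slices
        self.streams = nn.ModuleList(Stream(in_ch) for _ in range(n_slices))
        self.head = nn.Linear(64 * n_slices, n_classes)

    def forward(self, x):
        feats = [s(b) for s, b in zip(self.streams,
                                      intensity_slices(x, self.n_slices))]
        return self.head(torch.cat(feats, dim=1))


logits = StreamingNet()(torch.rand(8, 3, 32, 32))  # -> shape (8, 10)
```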
Related papers
- TMFNet: Two-Stream Multi-Channels Fusion Networks for Color Image Operation Chain Detection [9.346492393908322]
We propose a novel two-stream multi-channel fusion network for color image operation chain detection.
The proposed method achieves state-of-the-art generalization ability while maintaining robustness to JPEG compression.
arXiv Detail & Related papers (2024-09-12T02:04:26Z)
- Feature Attention Network (FA-Net): A Deep-Learning Based Approach for Underwater Single Image Enhancement [0.8694819854201992]
We propose a deep-learning, feature-attention-based end-to-end network (FA-Net) for underwater single image enhancement.
In particular, we propose a Residual Feature Attention Block (RFAB) containing channel attention, pixel attention, and a residual learning mechanism with long and short skip connections.
RFAB allows the network to focus on learning high-frequency information while skipping low-frequency information via multi-hop connections.
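As a rough illustration, here is a minimal PyTorch sketch of a residual block that combines channel attention and pixel (spatial) attention behind a short skip connection; the reduction ratio, channel counts, and layer layout are assumptions rather than the FA-Net authors' exact design.

```python
# Sketch of a residual block with channel and pixel attention plus a
# short skip connection, in the spirit of the RFAB described above.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(x)          # per-channel reweighting


class PixelAttention(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.conv(x)         # per-pixel reweighting


class RFAB(nn.Module):
    """Residual feature attention block: conv -> attention -> short skip."""

    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
            ChannelAttention(ch),
            PixelAttention(ch),
        )

    def forward(self, x):
        return x + self.body(x)         # short skip (residual learning)


y = RFAB(64)(torch.rand(1, 64, 48, 48))  # same shape as the input
```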
arXiv Detail & Related papers (2023-08-30T08:56:36Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and underwater images have evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising via automatically mining accurate structure information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN), consisting of three stages: a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
arXiv Detail & Related papers (2022-09-26T03:28:23Z)
- Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior [6.297103076360578]
Gabor-like filters have been observed in the early layers of CNN classifiers and throughout low-level image processing networks.
In this work, we take this observation to the extreme and explicitly constrain the filters of a natural-image denoising CNN to be learned 2D real Gabor filters.
We find that the proposed network (GDLNet) can achieve near state-of-the-art denoising performance amongst popular fully convolutional neural networks.
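The idea of constraining convolutional kernels to learned Gabor functions can be sketched as follows; this is a hedged PyTorch illustration in which the parameterization (orientation, wavelength, envelope width, phase) and initial values are assumptions, not the GDLNet implementation.

```python
# Sketch of a conv layer whose kernels are 2D real Gabor functions with
# learnable parameters, in the spirit of the Gabor-constrained denoiser.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaborConv2d(nn.Module):
    def __init__(self, out_ch: int, ksize: int = 7):
        super().__init__()
        self.ksize = ksize
        self.theta = nn.Parameter(torch.rand(out_ch) * math.pi)        # orientation
        self.sigma = nn.Parameter(torch.full((out_ch,), ksize / 4.0))  # envelope width
        self.lam = nn.Parameter(torch.full((out_ch,), ksize / 2.0))    # wavelength
        self.psi = nn.Parameter(torch.zeros(out_ch))                   # phase

    def kernels(self) -> torch.Tensor:
        half = self.ksize // 2
        y, x = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32),
            torch.arange(-half, half + 1, dtype=torch.float32),
            indexing="ij",
        )
        x, y = x[None], y[None]               # broadcast over filters
        cos_t = torch.cos(self.theta)[:, None, None]
        sin_t = torch.sin(self.theta)[:, None, None]
        x_r = x * cos_t + y * sin_t           # rotated coordinates
        y_r = -x * sin_t + y * cos_t
        sigma = self.sigma[:, None, None]
        lam = self.lam[:, None, None]
        psi = self.psi[:, None, None]
        envelope = torch.exp(-(x_r ** 2 + y_r ** 2) / (2 * sigma ** 2))
        carrier = torch.cos(2 * math.pi * x_r / lam + psi)
        return (envelope * carrier)[:, None]  # (out_ch, 1, k, k), grayscale input

    def forward(self, x):
        return F.conv2d(x, self.kernels(), padding=self.ksize // 2)


feat = GaborConv2d(out_ch=16)(torch.rand(1, 1, 64, 64))  # -> (1, 16, 64, 64)
```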
arXiv Detail & Related papers (2022-04-23T22:21:54Z)
- Convolutional Neural Network with Convolutional Block Attention Module for Finger Vein Recognition [4.035753155957698]
We propose a lightweight convolutional neural network with a convolutional block attention module (CBAM) for finger vein recognition.
The experiments are carried out on two publicly available databases and the results demonstrate that the proposed method achieves a stable, highly accurate, and robust performance in multimodal finger recognition.
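For reference, a minimal PyTorch sketch of a CBAM block (channel attention followed by spatial attention) that such a lightweight CNN could insert after its convolutional layers is shown below; the reduction ratio and spatial kernel size follow common defaults and are assumptions, not the authors' exact configuration.

```python
# Sketch of a CBAM block: channel attention, then spatial attention.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    def __init__(self, ch: int, reduction: int = 16, spatial_k: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(              # shared MLP for channel attention
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_k, padding=spatial_k // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: avg- and max-pooled descriptors through a shared MLP.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: channel-wise mean and max, fused by a conv.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))


out = CBAM(ch=32)(torch.rand(2, 32, 56, 56))  # same shape as the input
```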
arXiv Detail & Related papers (2022-02-14T12:59:23Z)
- Dynamic Slimmable Denoising Network [64.77565006158895]
Dynamic slimmable denoising network (DDS-Net) is a general method to achieve good denoising quality with less computational complexity.
DDS-Net is empowered with the ability of dynamic inference by a dynamic gate.
Our experiments demonstrate that DDS-Net consistently outperforms state-of-the-art individually trained static denoising networks.
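A rough sketch of the dynamic-gate idea is given below: a tiny gate network predicts, per input, which channel width of a slimmable convolution to use, so easier inputs take a cheaper path. The widths, gate design, and hard argmax decision are illustrative assumptions and not the DDS-Net implementation.

```python
# Sketch of a dynamic gate choosing the width of a slimmable conv layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlimmableConv(nn.Module):
    """A conv layer that can run with only the first `k` output channels."""

    def __init__(self, in_ch: int, max_out: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, in_ch, 3, 3) * 0.05)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, k: int):
        return F.conv2d(x, self.weight[:k], self.bias[:k], padding=1)


class GatedDenoiser(nn.Module):
    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        self.widths = widths
        self.gate = nn.Sequential(               # tiny gate on a pooled input
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(3 * 4 * 4, len(widths)),
        )
        self.conv1 = SlimmableConv(3, max(widths))
        self.out = nn.Conv2d(max(widths), 3, 3, padding=1)

    def forward(self, x):
        k = self.widths[int(self.gate(x).argmax(dim=1)[0])]   # width for this batch
        h = F.relu(self.conv1(x, k))
        h = F.pad(h, (0, 0, 0, 0, 0, max(self.widths) - k))   # zero-fill unused channels
        return self.out(h)


denoised = GatedDenoiser()(torch.rand(1, 3, 64, 64))  # -> (1, 3, 64, 64)
```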
arXiv Detail & Related papers (2021-10-17T22:45:33Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
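The meta-learning idea can be sketched as a small hypernetwork that emits convolutional weights conditioned on the input scale; the layer sizes and scale encoding below are assumptions, not the paper's architecture.

```python
# Sketch of a meta learner (hypernetwork) generating conv weights per input scale.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleConditionedConv(nn.Module):
    """A conv layer whose kernel is produced by a small MLP from the scale."""

    def __init__(self, in_ch=3, out_ch=16, k=3):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        self.meta = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, out_ch * in_ch * k * k),
        )

    def forward(self, x, scale: float):
        w = self.meta(torch.tensor([[scale]])).view(self.shape)
        return F.conv2d(x, w, padding=self.shape[-1] // 2)


# The same module serves different input resolutions with scale-specific weights.
m = ScaleConditionedConv()
y1 = m(torch.rand(1, 3, 224, 224), scale=1.0)
y2 = m(torch.rand(1, 3, 160, 160), scale=160 / 224)
```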
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
- Identity Enhanced Residual Image Denoising [61.75610647978973]
We learn a fully-convolutional network model for image denoising that consists of a Chain of Identity Mapping Modules and a residual-on-the-residual architecture.
The proposed network produces remarkably higher numerical accuracy and better visual image quality than classical state-of-the-art and CNN algorithms.
arXiv Detail & Related papers (2020-04-26T04:52:22Z)
- A "Network Pruning Network" Approach to Deep Model Compression [62.68120664998911]
We present a filter pruning approach for deep model compression using a multitask network.
Our approach is based on learning a pruner network to prune a pre-trained target network.
The compressed model produced by our approach is generic and does not need any special hardware/software support.
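As a loose illustration of the pruner-network idea, the sketch below scores the filters of a pre-trained convolutional layer with a small network and zeroes out low-scoring filters; the scoring MLP, threshold, and hard mask are assumptions rather than the paper's method.

```python
# Sketch of a pruner network scoring and masking filters of a target conv layer.
import torch
import torch.nn as nn


class FilterPruner(nn.Module):
    def __init__(self, k: int = 3, in_ch: int = 16):
        super().__init__()
        # Scores each filter from its flattened weights.
        self.score = nn.Sequential(
            nn.Linear(in_ch * k * k, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, conv: nn.Conv2d, threshold: float = 0.5) -> torch.Tensor:
        flat = conv.weight.detach().flatten(1)             # (out_ch, in_ch*k*k)
        keep = (self.score(flat).squeeze(1) > threshold).float()
        return conv.weight * keep[:, None, None, None]     # pruned filter bank


target = nn.Conv2d(16, 32, 3, padding=1)
pruned_weight = FilterPruner()(target)                     # zeroed-out filters
```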
arXiv Detail & Related papers (2020-01-15T20:38:23Z)
- ReluDiff: Differential Verification of Deep Neural Networks [8.601847909798165]
We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
arXiv Detail & Related papers (2020-01-10T20:47:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.