The Neural Tangent Link Between CNN Denoisers and Non-Local Filters
- URL: http://arxiv.org/abs/2006.02379v4
- Date: Mon, 16 Nov 2020 23:06:53 GMT
- Title: The Neural Tangent Link Between CNN Denoisers and Non-Local Filters
- Authors: Julián Tachella and Junqi Tang and Mike Davies
- Abstract summary: Convolutional Neural Networks (CNNs) are now a well-established tool for solving computational imaging problems.
We introduce a formal link between such networks, through their neural tangent kernel (NTK), and well-known non-local filtering techniques.
We evaluate our findings via extensive image denoising experiments.
- Score: 4.254099382808598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) are now a well-established tool for
solving computational imaging problems. Modern CNN-based algorithms obtain
state-of-the-art performance in diverse image restoration problems.
Furthermore, it has been recently shown that, despite being highly
overparameterized, networks trained with a single corrupted image can still
perform as well as fully trained networks. We introduce a formal link between
such networks through their neural tangent kernel (NTK), and well-known
non-local filtering techniques, such as non-local means or BM3D. The filtering
function associated with a given network architecture can be obtained in closed
form without the need to train the network, being fully characterized by the random
initialization of the network weights. While the NTK theory accurately predicts
the filter associated with networks trained using standard gradient descent,
our analysis shows that it falls short of explaining the behaviour of networks
trained using the popular Adam optimizer. The latter achieves a larger change
of weights in hidden layers, adapting the non-local filtering function during
training. We evaluate our findings via extensive image denoising experiments.
Related papers
- Training Convolutional Neural Networks with the Forward-Forward algorithm [1.74440662023704]
The Forward-Forward (FF) algorithm has so far only been used in fully connected networks.
We show how the FF paradigm can be extended to CNNs.
Our FF-trained CNN, featuring a novel spatially-extended labeling technique, achieves a classification accuracy of 99.16% on the MNIST hand-written digits dataset.
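As an illustration of the FF paradigm, here is a hedged sketch (not the paper's code; the spatially-extended labeling technique is not reproduced): each layer is trained locally so that its "goodness" is high on positive inputs and low on negative ones.

```python
import torch

layer = torch.nn.Conv2d(1, 8, 3, padding=1)
opt = torch.optim.SGD(layer.parameters(), lr=0.01)
theta = 2.0                                    # goodness threshold

def goodness(x):
    # Mean squared activation of the layer ("goodness").
    return layer(x).relu().pow(2).mean(dim=(1, 2, 3))

x_pos = torch.randn(16, 1, 28, 28)             # stand-ins for real data
x_neg = torch.randn(16, 1, 28, 28)

for _ in range(100):                           # local, layer-wise training
    # Push goodness above theta for positives, below theta for negatives.
    loss = (torch.nn.functional.softplus(theta - goodness(x_pos)) +
            torch.nn.functional.softplus(goodness(x_neg) - theta)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```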
arXiv Detail & Related papers (2023-12-22T18:56:35Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization [1.3999481573773072]
We introduce and evaluate the batch-entropy, which quantifies the flow of information through each layer of a neural network.
We show that we can train a "vanilla" fully connected network and convolutional neural network with 500 layers by simply adding the batch-entropy regularization term to the loss function.
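A hedged sketch of the idea, using a simple Gaussian entropy proxy (the paper's exact batch-entropy estimator may differ): measure how much each layer's activations vary across a batch, and penalize layers that collapse toward a near-constant, zero-information mapping.

```python
import torch

def batch_entropy(h, eps=1e-6):
    # Gaussian proxy: mean log standard deviation across the batch.
    return torch.log(h.std(dim=0) + eps).mean()

layers = torch.nn.ModuleList(
    [torch.nn.Linear(32, 32) for _ in range(500)])  # very deep "vanilla" net

def forward_with_regularizer(x, alpha=0.01):
    reg = 0.0
    for layer in layers:
        x = torch.tanh(layer(x))
        reg = reg - alpha * batch_entropy(x)   # penalize vanishing flow
    return x, reg

x = torch.randn(64, 32)
out, reg = forward_with_regularizer(x)
loss = out.pow(2).mean() + reg                 # task loss + regularizer
loss.backward()
```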
arXiv Detail & Related papers (2022-08-01T20:31:58Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
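One way to picture the overcomplete design (our illustration, not the paper's architecture): upsampling in place of downsampling keeps the receptive field small, biasing the network toward low-level speckle statistics.

```python
import torch

class OvercompleteBranch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = torch.nn.Conv2d(16, 16, 3, padding=1)
        self.out = torch.nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        # Go to a *higher* resolution, so each conv covers less of the image.
        h = torch.nn.functional.interpolate(x, scale_factor=2)
        h = self.conv1(h).relu()
        h = self.conv2(h).relu()
        h = torch.nn.functional.interpolate(h, scale_factor=0.5)
        return x - self.out(h)        # residual: estimate the clean image

noisy = torch.rand(1, 1, 64, 64)
despeckled = OvercompleteBranch()(noisy)
```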
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior [6.297103076360578]
Gabor-like filters have been observed in the early layers of CNN classifiers and throughout low-level image processing networks.
In this work, we take this observation to the extreme and explicitly constrain the filters of a natural-image denoising CNN to be learned 2D real Gabor filters.
We find that the proposed network (GDLNet) can achieve near state-of-the-art denoising performance amongst popular fully convolutional neural networks.
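A minimal sketch of such a constraint (GDLNet's exact parameterization may differ; the isotropic envelope here is a simplification): the learnable quantities are the Gabor orientation, frequency, phase and envelope width, not the raw filter taps.

```python
import math
import torch

def gabor_kernel(theta, freq, psi, sigma, size=7):
    r = torch.arange(size, dtype=torch.float32) - size // 2
    yy, xx = torch.meshgrid(r, r, indexing="ij")
    xr = xx * torch.cos(theta) + yy * torch.sin(theta)       # rotated axis
    envelope = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))  # Gaussian window
    carrier = torch.cos(2 * math.pi * freq * xr + psi)       # sinusoid
    return envelope * carrier

# Learnable Gabor parameters for an 8-filter bank.
theta = torch.nn.Parameter(torch.rand(8) * math.pi)
freq = torch.nn.Parameter(torch.rand(8) * 0.3 + 0.1)
psi = torch.nn.Parameter(torch.zeros(8))
sigma = torch.nn.Parameter(torch.ones(8) * 2.0)

def gabor_conv(x):
    weight = torch.stack([gabor_kernel(theta[i], freq[i], psi[i], sigma[i])
                          for i in range(8)]).unsqueeze(1)   # (8, 1, 7, 7)
    return torch.nn.functional.conv2d(x, weight, padding=3)

y = gabor_conv(torch.randn(1, 1, 32, 32))  # gradients flow to Gabor params
```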
arXiv Detail & Related papers (2022-04-23T22:21:54Z)
- Background Invariant Classification on Infrared Imagery by Data Efficient Training and Reducing Bias in CNNs [1.2891210250935146]
Convolutional neural networks can classify objects in images very accurately.
It is well known that the attention of the network may not always be on the semantically important regions of the scene.
We propose a new two-step training procedure, called split training, to reduce this bias in CNNs on both infrared imagery and RGB data.
arXiv Detail & Related papers (2022-01-22T23:29:42Z)
- Implementing a foveal-pit inspired filter in a Spiking Convolutional Neural Network: a preliminary study [0.0]
We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding.
The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library.
The network has achieved up to 90% accuracy, where loss is calculated using the cross-entropy function.
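The Difference-of-Gaussian (DoG) stage can be sketched as follows (an illustration; the paper's foveal-pit parameters and rank-order encoding are not reproduced). A DoG kernel is the difference of two Gaussians of different widths, mimicking retinal center-surround cells.

```python
import torch

def gaussian_kernel(sigma, size=7):
    r = torch.arange(size, dtype=torch.float32) - size // 2
    yy, xx = torch.meshgrid(r, r, indexing="ij")
    g = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def dog_kernel(sigma_center, sigma_surround, size=7):
    # Narrow (center) minus wide (surround) Gaussian.
    return gaussian_kernel(sigma_center, size) - gaussian_kernel(sigma_surround, size)

k = dog_kernel(1.0, 2.0).view(1, 1, 7, 7)
img = torch.rand(1, 1, 28, 28)
response = torch.nn.functional.conv2d(img, k, padding=3)  # center-surround map
```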
arXiv Detail & Related papers (2021-05-29T15:28:30Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
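As a concrete stand-in for the white-box setting, here is the standard FGSM attack of Goodfellow et al. (one common attack; not necessarily among those benchmarked in the paper): the input is perturbed along the sign of the loss gradient.

```python
import torch

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step of size eps in the direction that increases the loss.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)   # adversarial examples for evaluation
```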
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
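A hedged sketch of our reading of the scheme (the paper's objectives and architecture differ in detail): a small critic network estimates the final loss from an intermediate activation, so the early layer group can be updated without waiting for the full backward pass.

```python
import torch

group1 = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU())
group2 = torch.nn.Sequential(torch.nn.Linear(64, 10))
critic = torch.nn.Sequential(torch.nn.Linear(64, 10))   # cheap surrogate head

opt1 = torch.optim.SGD(group1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(group2.parameters(), lr=0.1)
optc = torch.optim.SGD(critic.parameters(), lr=0.1)

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
h = group1(x)

# Update group 1 through the critic's loss estimate (decoupled update).
loss1 = torch.nn.functional.cross_entropy(critic(h), y)
opt1.zero_grad()
loss1.backward(retain_graph=True)
opt1.step()

# Update group 2 on the true loss, cut off from group 1 by detach().
loss2 = torch.nn.functional.cross_entropy(group2(h.detach()), y)
opt2.zero_grad()
loss2.backward()
opt2.step()

# Train the critic so its loss estimate tracks the true loss.
lossc = (torch.nn.functional.cross_entropy(critic(h.detach()), y)
         - loss2.detach()).pow(2)
optc.zero_grad()
lossc.backward()
optc.step()
```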
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
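A hedged sketch of our reading of the curriculum (not the paper's code): feature maps are blurred with a Gaussian low-pass filter whose strength is annealed toward zero, so the network sees progressively more high-frequency information as training proceeds.

```python
import torch

def gaussian_blur(h, sigma, size=5):
    r = torch.arange(size, dtype=torch.float32) - size // 2
    yy, xx = torch.meshgrid(r, r, indexing="ij")
    k = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    k = (k / k.sum()).expand(h.shape[1], 1, size, size).contiguous()
    return torch.nn.functional.conv2d(h, k, padding=size // 2,
                                      groups=h.shape[1])   # per-channel blur

conv = torch.nn.Conv2d(3, 16, 3, padding=1)
x = torch.rand(2, 3, 32, 32)
for epoch in range(10):
    sigma = max(1e-3, 2.0 * (1 - epoch / 10))    # anneal blur strength to ~0
    features = gaussian_blur(conv(x), sigma)     # smoothed feature embedding
    # ... rest of the forward pass and training step would go here ...
```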
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
- Computational optimization of convolutional neural networks using separated filters architecture [69.73393478582027]
Use of convolutional neural networks (CNNs) is the standard approach to image recognition, despite the fact that they can be too computationally demanding.
We consider a convolutional neural network transformation that reduces computational complexity and thus speeds up neural network processing.
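A sketch of generic rank-1 filter separation (not necessarily the paper's exact scheme): a k x k convolution is replaced by a k x 1 followed by a 1 x k convolution, reducing both the weight count and the multiply-adds per output.

```python
import torch

cin, cout, k = 16, 32, 5
full = torch.nn.Conv2d(cin, cout, k, padding=2)   # k*k*cin*cout weights
separated = torch.nn.Sequential(                  # k*cout*(cin+cout) weights
    torch.nn.Conv2d(cin, cout, (k, 1), padding=(2, 0)),
    torch.nn.Conv2d(cout, cout, (1, k), padding=(0, 2)),
)

x = torch.rand(1, cin, 64, 64)
assert full(x).shape == separated(x).shape        # same output size
```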
arXiv Detail & Related papers (2020-02-18T17:42:13Z)