Image denoising by Super Neurons: Why go deep?
- URL: http://arxiv.org/abs/2111.14948v1
- Date: Mon, 29 Nov 2021 20:52:10 GMT
- Title: Image denoising by Super Neurons: Why go deep?
- Authors: Junaid Malik, Serkan Kiranyaz, Moncef Gabbouj
- Abstract summary: We investigate the use of super neurons for both synthetic and real-world image denoising.
Our results demonstrate that with the same width and depth, Self-ONNs with super neurons provide a significant boost in denoising performance.
- Score: 31.087153520130112
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Classical image denoising methods utilize the non-local self-similarity
principle to effectively recover image content from noisy images. Current
state-of-the-art methods use deep convolutional neural networks (CNNs) to
effectively learn the mapping from noisy to clean images. Deep denoising CNNs
manifest a high learning capacity and integrate non-local information owing to
the large receptive field yielded by their cascade of numerous hidden layers.
However, deep networks are also computationally complex and require large
amounts of data for training. To address these issues, this study focuses on the
Self-organized Operational Neural Networks (Self-ONNs) empowered by a novel
neuron model that can achieve a similar or better denoising performance with a
compact and shallow model. Recently, the concept of super neurons has been
introduced: super neurons augment the non-linear transformations of generative
neurons by utilizing non-localized kernel locations for an enhanced receptive
field size. This is the key property that obviates the need for a deep network
configuration. As the integration of non-local information is known to benefit
denoising, in this work we investigate the use of super neurons for both
synthetic and real-world image denoising. We also discuss the practical issues
in implementing the super neuron model on GPUs and propose a trade-off between
the heterogeneity of non-localized operations and computational complexity. Our
results demonstrate that, with the same width and depth, Self-ONNs with super
neurons provide a significant boost in denoising performance over networks with
generative and convolutional neurons on both denoising tasks. Moreover, the
results demonstrate that Self-ONNs with super neurons achieve competitive
performance against well-known deep CNN denoisers on synthetic denoising and
superior performance on real-world denoising.
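To make the neuron model concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a super-neuron-style layer in PyTorch. It assumes the Self-ONN formulation in which a generative neuron replaces the fixed convolution with a truncated Maclaurin series, i.e., a sum of convolutions applied to element-wise powers of the input, and it stands in for the super neuron's non-localized kernel locations with simple per-term spatial shifts; the class name and parameters (q_order, max_shift) are illustrative only.

```python
# Minimal sketch of a super-neuron-style layer (hypothetical names, not the
# authors' code). Each output is a truncated Maclaurin series: a sum of
# convolutions applied to element-wise powers of the input, with a per-term
# spatial shift standing in for the non-localized kernel locations.
import torch
import torch.nn as nn


class SuperNeuronConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, q_order=3, max_shift=2):
        super().__init__()
        self.q_order = q_order  # number of Maclaurin series terms Q
        # One convolution per power term x^1, x^2, ..., x^Q
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size,
                      padding=kernel_size // 2, bias=(q == 0))
            for q in range(q_order)
        )
        # Fixed random per-term shifts; in the actual super neuron model the
        # non-localized kernel locations are chosen/learned, not random.
        self.shifts = [
            (int(torch.randint(-max_shift, max_shift + 1, (1,))),
             int(torch.randint(-max_shift, max_shift + 1, (1,))))
            for _ in range(q_order)
        ]

    def forward(self, x):
        out = 0
        for q, (conv, (dy, dx)) in enumerate(zip(self.convs, self.shifts), 1):
            shifted = torch.roll(x, shifts=(dy, dx), dims=(2, 3))  # non-local access
            out = out + conv(shifted.pow(q))                       # q-th series term
        return torch.tanh(out)  # bounded activation, as is typical for Self-ONNs


# Usage: a shallow denoiser would stack such layers instead of plain convolutions.
if __name__ == "__main__":
    layer = SuperNeuronConv2d(1, 16)
    y = layer(torch.randn(1, 1, 64, 64))
    print(y.shape)  # torch.Size([1, 16, 64, 64])
```

In this sketch the shifts are fixed at construction time; the paper's actual choice of non-localized kernel locations and the GPU implementation trade-offs are discussed in the abstract above.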
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Neural information coding for efficient spike-based image denoising [0.5156484100374058]
In this work we investigate Spiking Neural Networks (SNNs) for Gaussian denoising.
We present a formal analysis of the information conversion process carried out by Leaky Integrate-and-Fire (LIF) neurons.
We compare its performance with the classical rate-coding mechanism.
Our results show that SNNs with LIF neurons can provide competitive denoising performance but at a reduced computational cost.
arXiv Detail & Related papers (2023-05-15T09:05:32Z) - Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising by automatically mining accurate structural information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) comprising three stages: a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
arXiv Detail & Related papers (2022-09-26T03:28:23Z) - Dense-Sparse Deep Convolutional Neural Networks Training for Image Denoising [0.6215404942415159]
Deep learning methods such as convolutional neural networks have gained prominence in the area of image denoising.
Deep denoising convolutional neural networks use many feed-forward convolution layers with added regularization methods of batch normalization and residual learning to speed up training and improve denoising performance significantly.
In this paper, we show that by applying an enhanced dense-sparse-dense network training procedure to deep denoising convolutional neural networks, a comparable level of denoising performance can be achieved with a significantly reduced number of trainable parameters.
arXiv Detail & Related papers (2021-07-10T15:14:19Z) - Convolutional versus Self-Organized Operational Neural Networks for
Real-World Blind Image Denoising [25.31981236136533]
We tackle the real-world blind image denoising problem by employing, for the first time, a deep Self-ONN.
Deep Self-ONNs consistently achieve superior results with performance gains of up to 1.76dB in PSNR.
arXiv Detail & Related papers (2021-03-04T14:49:17Z) - Information contraction in noisy binary neural networks and its
implications [11.742803725197506]
We consider noisy binary neural networks, where each neuron has a non-zero probability of producing an incorrect output.
Our key finding is a lower bound for the required number of neurons in noisy neural networks, which is first of its kind.
This paper offers new understanding of noisy information processing systems through the lens of information theory.
arXiv Detail & Related papers (2021-01-28T00:01:45Z) - Operational vs Convolutional Neural Networks for Image Denoising [25.838282412957675]
Convolutional Neural Networks (CNNs) have recently become a favored technique for image denoising due to their adaptive learning ability.
We propose a heterogeneous network model which allows greater flexibility for embedding additional non-linearity at the core of the data transformation.
An extensive set of comparative evaluations of ONNs and CNNs over two severe image denoising problems yield conclusive evidence that ONNs enriched by non-linear operators can achieve a superior denoising performance against CNNs with both equivalent and well-known deep configurations.
arXiv Detail & Related papers (2020-09-01T12:15:28Z) - Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z) - Deep Learning on Image Denoising: An overview [92.07378559622889]
We offer a comparative study of deep techniques in image denoising.
We first classify the deep convolutional neural networks (CNNs) for additive white noisy images.
Next, we compare the state-of-the-art methods on public denoising datasets in terms of quantitative and qualitative analysis.
arXiv Detail & Related papers (2019-12-31T05:03:57Z) - Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.