Self-Organized Operational Neural Networks for Severe Image Restoration Problems
- URL: http://arxiv.org/abs/2008.12894v1
- Date: Sat, 29 Aug 2020 02:19:41 GMT
- Title: Self-Organized Operational Neural Networks for Severe Image Restoration Problems
- Authors: Junaid Malik, Serkan Kiranyaz, Moncef Gabbouj
- Abstract summary: Discriminative learning based on convolutional neural networks (CNNs) aims to perform image restoration by learning from training examples of noisy-clean image pairs.
We claim that this is due to the inherent linear nature of convolution-based transformation, which is inadequate for handling severe restoration problems.
We propose a self-organizing variant of ONNs, Self-ONNs, for image restoration, which synthesizes novel nodal transformations on the fly.
- Score: 25.838282412957675
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Discriminative learning based on convolutional neural networks (CNNs) aims to
perform image restoration by learning from training examples of noisy-clean
image pairs. It has become the go-to methodology for tackling image restoration
and has outperformed the traditional non-local class of methods. However, the
top-performing networks are generally composed of many convolutional layers and
hundreds of neurons, with trainable parameters in excess of several millions.
We claim that this is due to the inherent linear nature of convolution-based
transformation, which is inadequate for handling severe restoration problems.
Recently, a non-linear generalization of CNNs, called the operational neural
network (ONN), has been shown to outperform CNNs on AWGN denoising. However,
its formulation is burdened by a fixed collection of well-known nonlinear
operators and an exhaustive search to find the best possible configuration for
a given architecture, whose efficacy is further limited by a fixed output layer
operator assignment. In this study, we leverage the Taylor series-based
function approximation to propose a self-organizing variant of ONNs, Self-ONNs,
for image restoration, which synthesizes novel nodal transformations on the fly
as part of the learning process, thus eliminating the need for redundant
training runs for operator search. In addition, it enables a finer level of
operator heterogeneity by diversifying individual connections of the receptive
fields and weights. We perform a series of extensive ablation experiments
across three severe image restoration tasks. Even when a strict equivalence of
learnable parameters is imposed, Self-ONNs surpass CNNs by a considerable
margin across all problems, improving the generalization performance by up to 3
dB in terms of PSNR.
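To make the mechanism concrete, here is a minimal PyTorch sketch of the generative-neuron idea the abstract describes: each layer learns one convolution kernel per term of a truncated Maclaurin series and sums conv(w_q, x^q) over q = 1..Q, so the nodal non-linearity is synthesized during training instead of being selected from a fixed operator library. The class name, the tanh squashing, and the default Q are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SelfONNConv2d(nn.Module):
    """Sketch of a Self-ONN 'generative neuron' layer.

    Instead of a single linear convolution, the layer learns one kernel per
    term of a truncated Maclaurin series and sums conv(w_q, x**q) over
    q = 1..Q. With Q = 1 it reduces to an ordinary Conv2d; Q > 1 lets the
    layer synthesize a non-linear nodal transformation during training.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, Q=3):
        super().__init__()
        # One convolution per polynomial order; a bias only on the first.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size,
                      padding=kernel_size // 2, bias=(q == 0))
            for q in range(Q)
        )

    def forward(self, x):
        # tanh keeps the powers of x bounded, a common practical choice.
        x = torch.tanh(x)
        return sum(conv(x ** (q + 1)) for q, conv in enumerate(self.convs))

# Usage: drop-in replacement for nn.Conv2d in a restoration network.
layer = SelfONNConv2d(3, 16, kernel_size=3, Q=3)
y = layer(torch.randn(1, 3, 64, 64))   # -> (1, 16, 64, 64)
```

Note that with a strict parameter budget, a Self-ONN layer with Q terms trades width for operator richness: it holds Q kernels where a CNN layer of equal parameter count would hold Q times the channels.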
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
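For intuition, a minimal sketch of the "networks as computational graphs of parameters" idea: neurons become graph nodes (biases as node features) and weights become directed edges (the weight as the edge feature), which a GNN can then process. The encoding below is illustrative; the paper's featurization is richer than this.

```python
import torch
import torch.nn as nn

def mlp_to_graph(mlp: nn.Sequential):
    """Encode an MLP as a graph: one node per neuron (bias as its feature),
    one directed edge per weight (the weight as its feature). Illustrative
    only; the paper's actual featurization differs in detail."""
    node_feats, edge_index, edge_feats = [], [], []
    linears = [m for m in mlp if isinstance(m, nn.Linear)]
    # Input neurons carry no bias; give them a zero feature.
    node_feats += [0.0] * linears[0].in_features
    prev = list(range(linears[0].in_features))
    for layer in linears:
        cur = list(range(len(node_feats), len(node_feats) + layer.out_features))
        node_feats += layer.bias.detach().tolist()
        w = layer.weight.detach()              # shape (out, in)
        for j, dst in enumerate(cur):
            for i, src in enumerate(prev):
                edge_index.append((src, dst))
                edge_feats.append(w[j, i].item())
        prev = cur
    return (torch.tensor(node_feats),
            torch.tensor(edge_index).t(),      # (2, num_edges) convention
            torch.tensor(edge_feats))

mlp = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
nodes, edges, weights = mlp_to_graph(mlp)      # 7 nodes, 8 + 4 = 12 edges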
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
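The summary suggests attention re-weighted by spatial distance. As one plausible reading (an assumption, not the paper's exact DWT formulation), here is a sketch in which attention logits between image tokens are penalized by a learnable multiple of their Euclidean distance on the grid.

```python
import torch
import torch.nn as nn

class DistanceBiasedAttention(nn.Module):
    """Hypothetical sketch of distance-weighted self-attention over a grid
    of image tokens: scaled dot-product attention whose logits are reduced
    by lambda * pairwise spatial distance. Illustrates the general idea
    only; it is not the DWT block from the paper."""
    def __init__(self, dim, grid_h, grid_w):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5
        self.lam = nn.Parameter(torch.tensor(0.1))   # learnable penalty
        ys, xs = torch.meshgrid(torch.arange(grid_h), torch.arange(grid_w),
                                indexing="ij")
        pos = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
        self.register_buffer("dist", torch.cdist(pos, pos))   # (N, N)

    def forward(self, x):                  # x: (batch, N, dim), N = H*W
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) * self.scale - self.lam * self.dist
        return torch.softmax(logits, dim=-1) @ v

attn = DistanceBiasedAttention(dim=32, grid_h=8, grid_w=8)
out = attn(torch.randn(2, 64, 32))         # -> (2, 64, 32)
```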
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- A Neural-Network-Based Convex Regularizer for Inverse Problems [14.571246114579468]
Deep-learning methods to solve image-reconstruction problems have enabled a significant increase in reconstruction quality.
These new methods often lack reliability and explainability, and there is a growing interest to address these shortcomings.
In this work, we tackle this issue by revisiting regularizers that are the sum of convex-ridge functions.
The gradient of such regularizers is parameterized by a neural network that has a single hidden layer with increasing and learnable activation functions.
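The construction is easy to state concretely: a regularizer R(x) = sum_i psi_i(<w_i, x>) is convex whenever each ridge profile psi_i is convex, i.e., whenever psi_i' is increasing. The sketch below parameterizes the gradient directly as W^T sigma(Wx); the paper's learnable increasing activations are replaced by a fixed tanh for brevity.

```python
import torch
import torch.nn as nn

class ConvexRidgeGrad(nn.Module):
    """Sketch of the gradient of a convex-ridge regularizer
    R(x) = sum_i psi_i(<w_i, x>):  grad R(x) = W^T sigma(W x), with
    sigma_i = psi_i'. R is convex as long as every sigma_i is increasing.
    The paper learns the increasing activations; tanh stands in here as a
    fixed increasing choice."""
    def __init__(self, dim, n_ridges):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(n_ridges, dim))

    def forward(self, x):                    # x: (batch, dim)
        return torch.tanh(x @ self.W.t()) @ self.W

# Plug the learned gradient into plain gradient descent on
#   0.5 * ||A x - y||^2 + R(x)   (A, y, and the step size are illustrative):
def reconstruct(A, y, grad_R, steps=200, lr=1e-2):
    x = torch.zeros(A.shape[1])
    for _ in range(steps):
        x = x - lr * (A.t() @ (A @ x - y) + grad_R(x.unsqueeze(0)).squeeze(0))
    return x
```

Because the regularizer is convex by construction, the reconstruction objective inherits convergence guarantees that generic learned priors lack, which is the reliability argument the summary refers to.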
arXiv Detail & Related papers (2022-11-22T18:19:10Z)
- Random Weight Factorization Improves the Training of Continuous Neural Representations [1.911678487931003]
Continuous neural representations have emerged as a powerful and flexible alternative to classical discretized representations of signals.
We propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers.
We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate.
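Concretely, each neuron's weight vector w_k is factorized as w_k = s_k * v_k with a learnable per-neuron scale s_k, i.e., W = diag(s)V, and (s, V) are trained in place of W. A minimal sketch follows; the initialization constants are illustrative, not the paper's recommended values.

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Random weight factorization sketch: W = diag(s) V with a learnable
    per-output-neuron scale s and direction matrix V. Functionally equal to
    nn.Linear, but gradient descent on (s, V) effectively gives each neuron
    its own adaptive learning rate. The init constants (mu, sigma) are
    illustrative assumptions."""
    def __init__(self, in_f, out_f, mu=0.5, sigma=0.1):
        super().__init__()
        base = nn.Linear(in_f, out_f)                    # standard init for W
        s = torch.exp(mu + sigma * torch.randn(out_f))   # random positive scales
        self.s = nn.Parameter(s)
        self.V = nn.Parameter(base.weight.detach() / s.unsqueeze(1))
        self.bias = nn.Parameter(base.bias.detach())

    def forward(self, x):
        return x @ (self.s.unsqueeze(1) * self.V).t() + self.bias

layer = FactorizedLinear(64, 64)    # drop-in replacement for nn.Linear(64, 64)
```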
arXiv Detail & Related papers (2022-10-03T23:48:48Z)
- Convolutional versus Self-Organized Operational Neural Networks for Real-World Blind Image Denoising [25.31981236136533]
We tackle the real-world blind image denoising problem by employing, for the first time, a deep Self-ONN.
Deep Self-ONNs consistently achieve superior results with performance gains of up to 1.76dB in PSNR.
arXiv Detail & Related papers (2021-03-04T14:49:17Z)
- An End-To-End-Trainable Iterative Network Architecture for Accelerated Radial Multi-Coil 2D Cine MR Image Reconstruction [4.233498905999929]
We propose a CNN-architecture for image reconstruction of accelerated 2D radial cine MRI with multiple receiver coils.
We investigate the proposed training-strategy and compare our method to other well-known reconstruction techniques with learned and non-learned regularization methods.
arXiv Detail & Related papers (2021-02-01T11:42:04Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
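A sketch of the core trick, tensor-train (TT) factorization of a weight matrix, is below; the paper additionally merges a cell's input-to-hidden and hidden-to-hidden matrices into a single such tensor, which this sketch omits.

```python
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    """Tensor-train factorized linear map: a dense prod(in_modes) x
    prod(out_modes) matrix is stored as a chain of small 4-D cores, cutting
    parameters by orders of magnitude. Sketch of the mechanism only."""
    def __init__(self, in_modes, out_modes, rank):
        super().__init__()
        ranks = [1] + [rank] * (len(in_modes) - 1) + [1]
        self.cores = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(ranks[k], in_modes[k],
                                           out_modes[k], ranks[k + 1]))
            for k in range(len(in_modes))
        )

    def forward(self, x):                    # x: (batch, prod(in_modes))
        b = x.shape[0]
        # t: (batch, out_so_far, tt_rank, in_remaining)
        t = x.reshape(b, 1, 1, -1)
        for core in self.cores:
            r_prev, m, n, r_next = core.shape
            t = t.reshape(b, t.shape[1], r_prev, m, -1)
            # contract the current input mode m and rank r_prev with the core
            t = torch.einsum("boims,imjr->bojrs", t, core)
            t = t.reshape(b, -1, r_next, t.shape[-1])
        return t.reshape(b, -1)              # (batch, prod(out_modes))

# A 256x256 matrix (65k params) stored in 4 cores with ~2.3k parameters:
layer = TTLinear(in_modes=[4, 4, 4, 4], out_modes=[4, 4, 4, 4], rank=8)
y = layer(torch.randn(2, 256))               # -> (2, 256)
```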
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Operational vs Convolutional Neural Networks for Image Denoising [25.838282412957675]
Convolutional Neural Networks (CNNs) have recently become a favored technique for image denoising due to their adaptive learning ability.
We propose a heterogeneous network model which allows greater flexibility for embedding additional non-linearity at the core of the data transformation.
An extensive set of comparative evaluations of ONNs and CNNs over two severe image denoising problems yield conclusive evidence that ONNs enriched by non-linear operators can achieve a superior denoising performance against CNNs with both equivalent and well-known deep configurations.
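For contrast with the Self-ONN sketch earlier, an operational neuron in this original ONN formulation picks its non-linearity from a fixed library. The sketch below uses a sinusoidal nodal operator with summation pooling; these particular operator choices are illustrative members of the library, not the configuration the paper settles on.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SinOperationalConv2d(nn.Module):
    """Sketch of one ONN operator configuration: nodal operator
    psi(w, x) = sin(w * x) applied per connection over each receptive
    field, followed by summation pooling. In a full ONN, psi and the pool
    are chosen per layer from a fixed library via operator search."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k, self.out_ch = k, out_ch
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch * k * k))
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):                    # x: (b, in_ch, H, W)
        b, _, h, w = x.shape
        patches = F.unfold(x, self.k, padding=self.k // 2)  # (b, C*k*k, H*W)
        # nodal: sin(w * x) per connection; pool: sum over the receptive field
        out = torch.sin(self.weight.unsqueeze(-1) * patches.unsqueeze(1)).sum(2)
        out = out + self.bias.unsqueeze(-1)
        return out.reshape(b, self.out_ch, h, w)

layer = SinOperationalConv2d(3, 8)
y = layer(torch.randn(1, 3, 32, 32))         # -> (1, 8, 32, 32)
```

Replacing sin with the identity and keeping summation pooling recovers an ordinary convolution, which is why CNNs are the degenerate homogeneous case of this model.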
arXiv Detail & Related papers (2020-09-01T12:15:28Z)
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
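A hedged sketch of the recurrent building block, assuming a standard convolutional GRU; the paper's "split-convolutional" modification is not reproduced here.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Plain convolutional GRU cell: GRU gating with convolutions in place
    of dense matrices, so the hidden state stays a feature map. The paper's
    SC-GRU splits the convolution in a particular way; that detail is not
    reproduced in this sketch."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde     # convex blend of old and new state

cell = ConvGRUCell(1, 16)
h = torch.zeros(1, 16, 64, 64)
for frame in torch.randn(5, 1, 1, 64, 64):   # iterate over a view sequence
    h = cell(frame, h)
```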
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
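For intuition, the classic two-layer mean-field picture that this framework generalizes to deep, feature-based architectures (the deep version in the paper is considerably more involved):

```latex
% Width-N two-layer network as an empirical average,
%   f(x) = \frac{1}{N} \sum_{k=1}^{N} a_k \,\sigma(\langle w_k, x \rangle),
% which in the N \to \infty limit becomes an integral against a
% probability measure \rho over the parameters (a, w):
f_\rho(x) = \int a \,\sigma(\langle w, x \rangle)\, d\rho(a, w)
```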
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight on conventional SISR algorithm, and proposes a substantially different approach relying on the iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
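The summary points to unrolled iterative refinement. As a generic sketch of that pattern (not ISRN's specific block design), each step refines the current high-resolution estimate conditioned on the upsampled low-resolution input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeSR(nn.Module):
    """Generic unrolled iterative super-resolution: start from a bicubic
    upsample and apply K weight-shared residual refinement steps conditioned
    on the low-resolution input. Illustrates the iterative-optimization
    framing only; ISRN's actual step design differs."""
    def __init__(self, ch=32, steps=4, scale=2):
        super().__init__()
        self.scale, self.steps = scale, steps
        self.step = nn.Sequential(
            nn.Conv2d(2 * 3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, lr):
        x = F.interpolate(lr, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        up = x                       # keep the conditioning upsample fixed
        for _ in range(self.steps):
            x = x + self.step(torch.cat([x, up], dim=1))   # residual refine
        return x

model = IterativeSR()
sr = model(torch.randn(1, 3, 32, 32))        # -> (1, 3, 64, 64)
```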
arXiv Detail & Related papers (2020-05-20T11:11:47Z)