DeepRLS: A Recurrent Network Architecture with Least Squares Implicit
Layers for Non-blind Image Deconvolution
- URL: http://arxiv.org/abs/2112.05505v1
- Date: Fri, 10 Dec 2021 13:16:51 GMT
- Title: DeepRLS: A Recurrent Network Architecture with Least Squares Implicit
Layers for Non-blind Image Deconvolution
- Authors: Iaroslav Koshelev, Daniil Selikhanovych and Stamatios Lefkimmiatis
- Abstract summary: We study the problem of non-blind image deconvolution.
We propose a novel recurrent network architecture that leads to very competitive restoration results of high image quality.
- Score: 15.986942312624
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we study the problem of non-blind image deconvolution and
propose a novel recurrent network architecture that leads to very competitive
restoration results of high image quality. Motivated by the computational
efficiency and robustness of existing large scale linear solvers, we manage to
express the solution to this problem as the solution of a series of adaptive
non-negative least-squares problems. This gives rise to our proposed Recurrent
Least Squares Deconvolution Network (RLSDN) architecture, which consists of an
implicit layer that imposes a linear constraint between its input and output.
By design, our network manages to serve two important purposes simultaneously.
The first is that it implicitly models an effective image prior that can
adequately characterize the set of natural images, while the second is that it
recovers the corresponding maximum a posteriori (MAP) estimate. Experiments on
publicly available datasets, in comparison with recent state-of-the-art methods, show
that our proposed RLSDN approach achieves the best reported performance both
for grayscale and color images for all tested scenarios. Furthermore, we
introduce a novel training strategy that can be adopted by any network
architecture that involves the solution of linear systems as part of its
pipeline. Our strategy eliminates completely the need to unroll the iterations
required by the linear solver and, thus, it reduces significantly the memory
footprint during training. Consequently, this enables the training of deeper
network architectures which can further improve the reconstruction results.
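The abstract's two key ideas, an implicit layer defined by a linear system and a backward pass that avoids unrolling the solver, can be sketched numerically. The following is a hedged toy illustration, not the paper's RLSDN: `H`, `lam`, and the plain ridge regularizer are stand-ins for the learned, adaptive quantities, and the non-negativity constraint is omitted for brevity.

```python
import numpy as np

def conjugate_gradient(A, b, iters=100, tol=1e-12):
    """Solve A x = b for a symmetric positive-definite operator A (a callable)."""
    x = np.zeros_like(b)
    r = b - A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy implicit layer: x* = argmin_x ||H x - y||^2 + lam ||x||^2,
# i.e. the solution of the linear system (H^T H + lam I) x = H^T y.
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 10))   # stand-in for the blur operator
y = rng.standard_normal(20)         # observed (degraded) signal
lam = 0.1
G = lambda v: H.T @ (H @ v) + lam * v

x_star = conjugate_gradient(G, H.T @ y)   # forward pass: one linear solve

# Backward pass without unrolling: by the implicit function theorem,
# dL/db = G^{-1} (dL/dx*) for b = H^T y, i.e. just one more linear solve.
grad_x = rng.standard_normal(10)          # upstream gradient dL/dx*
grad_b = conjugate_gradient(G, grad_x)    # gradient w.r.t. the right-hand side
```

Because the backward pass is itself a linear solve with the same operator, no intermediate solver iterates need to be stored, which is the kind of memory saving the abstract's training strategy refers to.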
Related papers
- Towards Architecture-Agnostic Untrained Network Priors for Image Reconstruction with Frequency Regularization [14.73423587548693]
We propose efficient architecture-agnostic techniques to directly modulate the spectral bias of network priors.
We show that, with just a few lines of code, we can reduce overfitting in underperforming architectures and close performance gaps with high-performing counterparts.
Results signify for the first time that architectural biases, overfitting, and runtime issues of untrained network priors can be simultaneously addressed without architectural modifications.
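The "few lines of code" claim lends itself to a sketch. Below is an illustrative frequency regularizer, a radial high-pass energy penalty computed via the FFT; the function name, cutoff, and exact form are assumptions for illustration, not the paper's regularizer.

```python
import numpy as np

def high_freq_penalty(img, cutoff=0.25):
    """Illustrative frequency regularizer: total L2 energy of the image's
    Fourier coefficients whose radial frequency exceeds `cutoff` (as a
    fraction of the sampling rate). Adding such a term to the training loss
    penalizes high-frequency content, one simple way to modulate a network
    prior's spectral bias."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.meshgrid(np.linspace(-0.5, 0.5, h),
                         np.linspace(-0.5, 0.5, w), indexing="ij")
    r = np.sqrt(xx**2 + yy**2)          # radial frequency of each bin
    return float(np.sum(np.abs(F[r > cutoff]) ** 2))
```

A constant image incurs essentially no penalty, while a checkerboard (pure Nyquist-frequency content) incurs a large one.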
arXiv Detail & Related papers (2023-12-15T18:01:47Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
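The soft-shrinkage idea, decaying low-magnitude weights gradually instead of zeroing them outright, can be sketched as follows. This is a hedged illustration: the function name, the fixed percentage schedule, and the shrink factor are assumptions, not the paper's exact ISS-P procedure.

```python
import numpy as np

def iterative_soft_shrink(w, sparsity=0.5, shrink_factor=0.1, steps=5):
    """Gradually shrink the smallest-magnitude weights instead of
    hard-pruning them. At each step the bottom `sparsity` fraction
    (by |w|) is multiplied by (1 - shrink_factor), so 'unimportant'
    weights decay smoothly toward zero yet can still recover during
    training, unlike a hard mask."""
    w = w.copy()
    for _ in range(steps):
        thresh = np.quantile(np.abs(w), sparsity)  # magnitude cutoff
        mask = np.abs(w) <= thresh                 # weights to shrink
        w[mask] *= (1.0 - shrink_factor)
    return w
```

After five steps with a 10% shrink factor, the smallest weights are scaled by 0.9^5 ≈ 0.59 while large weights are untouched.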
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Self-Supervised Coordinate Projection Network for Sparse-View Computed Tomography [31.774432128324385]
We propose a Self-supervised COordinate Projection nEtwork (SCOPE) to reconstruct the artifacts-free CT image from a single SV sinogram.
Compared with recent related works that solve similar problems using implicit neural representation network (INR), our essential contribution is an effective and simple re-projection strategy.
arXiv Detail & Related papers (2022-09-12T06:14:04Z)
- Deep Amended Gradient Descent for Efficient Spectral Reconstruction from Single RGB Images [42.26124628784883]
We propose a compact, efficient, and end-to-end learning-based framework, namely AGD-Net.
We first formulate the problem explicitly based on the classic gradient descent algorithm.
AGD-Net can improve the reconstruction quality by more than 1.0 dB on average.
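The classic gradient-descent formulation the entry mentions can be illustrated on a toy spectral-reconstruction problem. This is a hedged sketch under assumed names and dimensions: `Phi` is an illustrative 3x31 RGB-from-spectrum projection, and the learned "amendment" network that AGD-Net inserts between steps is omitted, leaving plain gradient descent.

```python
import numpy as np

# Toy forward model: a 31-band spectrum is projected down to 3 RGB values.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((3, 31)) * 0.1   # assumed sensing matrix (illustrative)
x_true = rng.random(31)                    # unknown spectral signal
y = Phi @ x_true                           # observed RGB measurement

# Classic gradient descent on f(x) = ||Phi x - y||^2 / 2. AGD-Net unrolls
# such steps and amends each one with a learned network (omitted here).
x = np.zeros(31)
eta = 0.5                                  # step size, chosen for this toy Phi
for _ in range(500):
    x -= eta * Phi.T @ (Phi @ x - y)       # gradient step on the data term
```

Because the system is underdetermined (3 equations, 31 unknowns), plain gradient descent only drives the data residual to zero; the learned amendment is what supplies the missing spectral prior.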
arXiv Detail & Related papers (2021-08-12T05:54:09Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight on conventional SISR algorithm, and proposes a substantially different approach relying on the iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging the adaptive inference networks for deep SISR (AdaDSR)
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.